Want to pass your Databricks Certified Machine Learning Professional DATABRICKS-MACHINE-LEARNING-PROFESSIONAL exam on the very first attempt? Try Pass2lead! It is equally effective for beginners and experienced IT professionals.
A machine learning engineer is migrating a machine learning pipeline to use Databricks Machine Learning. They have programmatically identified the best run from an MLflow Experiment and stored its URI in the model_uri variable and its Run ID in the run_id variable. They have also determined that the model was logged with the name "model". Now, the machine learning engineer wants to register that model in the MLflow Model Registry with the name "best_model". Which of the following lines of code can they use to register the model to the MLflow Model Registry?
A. mlflow.register_model(model_uri, "best_model")
B. mlflow.register_model(run_id, "best_model")
C. mlflow.register_model(f"runs:/{run_id}/best_model", "model")
D. mlflow.register_model(model_uri, "model")
E. mlflow.register_model(f"runs:/{run_id}/model")
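A short sketch of the URI convention behind these options. `mlflow.register_model` takes a model URI first and the registry name second; a `runs:/`-style URI combines the Run ID with the artifact name the model was logged under ("model" in the question). The `build_model_uri` helper and the `"abc123"` Run ID below are illustrative, not part of the question's code.

```python
# Hypothetical helper showing how a runs:/ model URI is assembled
# from a Run ID and the logged artifact name ("model" per the question).
def build_model_uri(run_id: str, artifact_path: str = "model") -> str:
    """Return the runs:/ URI MLflow uses to locate a logged model."""
    return f"runs:/{run_id}/{artifact_path}"

model_uri = build_model_uri("abc123")
# model_uri == "runs:/abc123/model"

# With such a URI stored in model_uri, registration passes the URI
# first and the registry name second (needs a live tracking server):
# mlflow.register_model(model_uri, "best_model")
```

Note that the registry name ("best_model") is a separate argument; it does not appear inside the `runs:/` URI, which is why options that swap the two strings fail.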
A data scientist is using MLflow to track their machine learning experiment. As a part of each MLflow run, they are performing hyperparameter tuning. The data scientist would like to have one parent run for the tuning process with a child run for each unique combination of hyperparameter values.
They are using the following code block:
The code block is not nesting the runs in MLflow as they expected.
Which of the following changes does the data scientist need to make to the above code block so that it successfully nests the child runs under the parent run in MLflow?
A. Indent the child run blocks within the parent run block
B. Add the nested=True argument to the parent run
C. Remove the nested=True argument from the child runs
D. Provide the same name to the run_name parameter for all three run blocks
E. Add the nested=True argument to the parent run and remove the nested=True arguments from the child runs
A machine learning engineer is using the following code block as part of a batch deployment pipeline:
Which of the following changes needs to be made so this code block will work when the inference table is a stream source?
A. Replace "inference" with the path to the location of the Delta table
B. Replace schema(schema) with option("maxFilesPerTrigger", 1)
C. Replace spark.read with spark.readStream
D. Replace format("delta") with format("stream")
E. Replace predict with a stream-friendly prediction function
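A hedged sketch of the batch-to-streaming change, assuming a live `SparkSession`, a Delta table path, and a Spark UDF wrapping the model; the function and parameter names below are illustrative, not taken from the question's code block.

```python
def score_stream(spark, schema, predict_udf, delta_path):
    """Read the inference Delta table as a stream and apply the model."""
    from pyspark.sql.functions import struct  # pyspark assumed available at runtime

    stream_df = (
        spark.readStream          # spark.read would treat the table as a static batch source
             .schema(schema)      # schema call kept as in the original batch code
             .format("delta")     # Delta remains the format; "stream" is not a format
             .load(delta_path)
    )
    # The same pandas/Spark UDF applies to each micro-batch as it arrives.
    return stream_df.withColumn("prediction", predict_udf(struct(*stream_df.columns)))
```

The only required change is swapping the entry point from `spark.read` to `spark.readStream`; the Delta format, the schema call, and the prediction UDF all work unchanged on a streaming DataFrame.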