The BaseParameters class to define step parameters is deprecated. Check out our docs https://docs.zenml.io/how-to/use-configuration-files/how-to-use-config for information on how to parameterize your steps. As a quick fix to get rid of this warning, make sure your parameter class inherits from pydantic.BaseModel instead of the BaseParameters class.
The BaseParameters class to define step parameters is deprecated. Check out our docs https://docs.zenml.io/how-to/use-configuration-files/how-to-use-config for information on how to parameterize your steps. As a quick fix to get rid of this warning, make sure your parameter class inherits from pydantic.BaseModel instead of the BaseParameters class.
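As a reference for the quick fix the warning describes, a parameter class such as DeploymentTriggerConfig from the code further below could simply switch its base class to pydantic (a minimal sketch; nothing else in the class needs to change):

# Quick-fix sketch: inherit from pydantic.BaseModel instead of the
# deprecated zenml.steps.BaseParameters.
from pydantic import BaseModel

class DeploymentTriggerConfig(BaseModel):
    """Deployment Trigger Config"""

    min_accuracy: float = 0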
Reusing registered pipeline version: (version: 2).
Executing a new run.
Caching is disabled by default for continuous_deployement_pipeline.
Using user: default
Using stack: mlflow_stack
model_deployer: mlflow
orchestrator: default
experiment_tracker: mlflow_tracker
artifact_store: default
You can visualize your pipeline runs in the ZenML Dashboard. In order to try it locally, please run zenml up.
Step ingest_df has started.
Ingesting Data from E:\Codes\MLOPS\CDE\data\olist_customers_dataset.csv
Step ingest_df has finished in 8.268s.
Step ingest_df completed successfully.
Step clean_df has started.
E:\Codes\MLOPS\CDE\src\data_cleaning.py:37: FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.
For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.
data["product_weight_g"].fillna(
E:\Codes\MLOPS\CDE\src\data_cleaning.py:39: FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.
For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.
data["product_length_cm"].fillna(
E:\Codes\MLOPS\CDE\src\data_cleaning.py:41: FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.
For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.
data["product_height_cm"].fillna(
E:\Codes\MLOPS\CDE\src\data_cleaning.py:43: FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.
For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.
data["product_width_cm"].fillna(
Data Cleaning Completed
Step clean_df has finished in 3.396s.
Step clean_df completed successfully.
Step train_model has started.
2024/06/19 16:26:45 WARNING mlflow.utils.autologging_utils: You are using an unsupported version of sklearn. If you encounter errors during autologging, try upgrading / downgrading sklearn to a supported version, or try upgrading MLflow.
2024/06/19 16:26:51 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "E:\Codes\MLOPS\env_mlops\lib\site-packages\mlflow\types\utils.py:394: UserWarning: Hint: Inferred schema contains integer column(s). Integer columns in Python cannot represent missing values. If your input data contains missing values at inference time, it will be encoded as floats and will cause a schema enforcement error. The best way to avoid this problem is to infer the model schema based on a realistic data sample (training dataset) that includes missing values. Alternatively, you can declare integer columns as doubles (float64) whenever these columns may have missing values. See `Handling Integers With Missing Values <https://www.mlflow.org/docs/latest/models.html#handling-integers-with-missing-values>`_ for more details."
2024/06/19 16:26:51 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "E:\Codes\MLOPS\env_mlops\lib\site-packages\mlflow\types\utils.py:394: UserWarning: Hint: Inferred schema contains integer column(s). Integer columns in Python cannot represent missing values. If your input data contains missing values at inference time, it will be encoded as floats and will cause a schema enforcement error. The best way to avoid this problem is to infer the model schema based on a realistic data sample (training dataset) that includes missing values. Alternatively, you can declare integer columns as doubles (float64) whenever these columns may have missing values. See `Handling Integers With Missing Values <https://www.mlflow.org/docs/latest/models.html#handling-integers-with-missing-values>`_ for more details."
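The schema hint above can be addressed by logging the model with an explicitly inferred signature rather than relying on autologging's inference. A minimal sketch, assuming the names X_train and model from the training step:

# Sketch: infer the signature from a float64 copy of the training data so
# integer columns can still represent missing values at inference time.
import mlflow
from mlflow.models import infer_signature

X_sample = X_train.astype("float64")
signature = infer_signature(X_sample, model.predict(X_sample))
mlflow.sklearn.log_model(model, "model", signature=signature)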
2024/06/19 16:27:18 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "E:\Codes\MLOPS\env_mlops\lib\site-packages\_distutils_hack\__init__.py:33: UserWarning: Setuptools is replacing distutils."
Model Training Completed
E:\Codes\MLOPS\env_mlops\lib\site-packages\zenml\integrations\mlflow\experiment_trackers\mlflow_experiment_tracker.py:254: FutureWarning: ``mlflow.gluon.autolog`` is deprecated since 2.5.0. This method will be removed in a future release.
module.autolog(disable=True)
Step train_model has finished in 35.067s.
Step train_model completed successfully.
Step evaluion_model has started.
Calculating MSE
MSE: 1.8640770533975461
Calculating R2
R2: 0.017729030402296564
Calculating MSE
E:\Codes\MLOPS\env_mlops\lib\site-packages\sklearn\metrics\_regression.py:492: FutureWarning: 'squared' is deprecated in version 1.4 and will be removed in 1.6. To calculate the root mean squared error, use the function 'root_mean_squared_error'.
warnings.warn(
MSE: 1.365312071798073
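The sklearn FutureWarning above refers to mean_squared_error(..., squared=False); the drop-in replacement is root_mean_squared_error. A minimal sketch, assuming y_test and prediction as variable names:

# Replacement for mean_squared_error(y_test, prediction, squared=False)
from sklearn.metrics import root_mean_squared_error

rmse = root_mean_squared_error(y_test, prediction)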
E:\Codes\MLOPS\env_mlops\lib\site-packages\zenml\integrations\mlflow\experiment_trackers\mlflow_experiment_tracker.py:254: FutureWarning: ``mlflow.gluon.autolog`` is deprecated since 2.5.0. This method will be removed in a future release.
module.autolog(disable=True)
Step evaluion_model has finished in 1.011s.
Step evaluion_model completed successfully.
Step deployment_trigger has started.
Step deployment_trigger has finished in 0.136s.
Step deployment_trigger completed successfully.
Caching disabled explicitly for mlflow_model_deployer_step.
Step mlflow_model_deployer_step has started.
Daemon functionality is currently not supported on Windows.
Daemon functionality is currently not supported on Windows.
Existing model server found for model with the exact same configuration. Returning the existing service named zenml-model.
Daemon functionality is currently not supported on Windows.
MLflow deployment service started and reachable at:
None
Stopping existing services...
Calling stop method...
Daemon functionality is currently not supported on Windows.
Daemon functionality is currently not supported on Windows.
Daemon functionality is currently not supported on Windows.
stop method executed successfully.
Daemon functionality is currently not supported on Windows.
Step mlflow_model_deployer_step has finished in 1.523s.
Step mlflow_model_deployer_step completed successfully.
Pipeline run has finished in 50.718s.
You can run:
mlflow ui --backend-store-uri 'file:C:\Users\jaysi\AppData\Roaming\zenml\local_stores\c0011160-7514-4297-b481-43839e52e0c0\mlruns'
...to inspect your experiment runs within the MLflow UI.
You can find your runs tracked within the `mlflow_example_pipeline` experiment. There you'll also be able to compare two or more runs.
Daemon functionality is currently not supported on Windows.
Daemon functionality is currently not supported on Windows.
Daemon functionality is currently not supported on Windows.
Code
import numpy as np
import json
import pandas as pd
from zenml import pipeline, step
from zenml.config import DockerSettings
from zenml.constants import DEFAULT_SERVICE_START_STOP_TIMEOUT
from zenml.integrations.constants import MLFLOW
from zenml.integrations.mlflow.model_deployers.mlflow_model_deployer import (
    MLFlowModelDeployer,
)
from zenml.integrations.mlflow.services import MLFlowDeploymentService
from zenml.integrations.mlflow.steps import mlflow_model_deployer_step
from zenml.steps import BaseParameters, Output
from steps.clean_Data import clean_df
from steps.evalution import evaluion_model
from steps.ingest_Data import ingest_df
from steps.model_train import train_model
from .utils import get_data_for_test
docker_settings = DockerSettings(required_integrations=[MLFLOW])
class DeploymentTriggerConfig(BaseParameters):
    """Deployment Trigger Config"""

    min_accuracy: float = 0
@step(enable_cache=False)
def dynamic_importer() -> str:
    """Downloads the latest data from a mock API."""
    data = get_data_for_test()
    return data
@step
def deployment_trigger(
    accuracy: float,
    config: DeploymentTriggerConfig,
) -> bool:
    """Check if the model accuracy is good enough to deploy."""
    return accuracy > config.min_accuracy
class MLFlowDeploymentLoaderStepParameters(BaseParameters):
    """MLflow deployment getter parameters

    Attributes:
        pipeline_name: name of the pipeline that deployed the MLflow prediction
            server
        step_name: the name of the step that deployed the MLflow prediction
            server
        running: when this flag is set, the step only returns a running service
        model_name: the name of the model that is deployed
    """

    pipeline_name: str
    step_name: str
    running: bool = True
@step(enable_cache=False)
def prediction_service_loader(
    pipeline_name: str,
    pipeline_step_name: str,
    running: bool = True,
    model_name: str = "model",
) -> MLFlowDeploymentService:
    """Get the prediction service started by the deployment pipeline.

    Args:
        pipeline_name: name of the pipeline that deployed the MLflow prediction
            server
        pipeline_step_name: the name of the step that deployed the MLflow
            prediction server
        running: when this flag is set, the step only returns a running service
        model_name: the name of the model that is deployed
    """
    # Get the MLflow model deployer stack component
    model_deployer = MLFlowModelDeployer.get_active_model_deployer()
    # Fetch existing services with the same pipeline name, step name and model name
    existing_services = model_deployer.find_model_server(
        pipeline_name=pipeline_name,
        pipeline_step_name=pipeline_step_name,
        model_name=model_name,
        running=running,
    )
    if not existing_services:
        raise RuntimeError(
            f"No MLflow prediction service deployed by the "
            f"{pipeline_step_name} step in the {pipeline_name} "
            f"pipeline for the '{model_name}' model is currently "
            f"running."
        )
    return existing_services[0]
@step
def predictor(
    service: MLFlowDeploymentService,
    data: str,
) -> np.ndarray:
    """Run an inference request against a prediction service."""
    service.start(timeout=10)  # should be a NOP if already started
    data = json.loads(data)
    data.pop("columns")
    data.pop("index")
    columns_for_df = [
        "payment_sequential",
        "payment_installments",
        "payment_value",
        "price",
        "freight_value",
        "product_name_lenght",
        "product_description_lenght",
        "product_photos_qty",
        "product_weight_g",
        "product_length_cm",
        "product_height_cm",
        "product_width_cm",
    ]
    df = pd.DataFrame(data["data"], columns=columns_for_df)
    json_list = json.loads(json.dumps(list(df.T.to_dict().values())))
    data = np.array(json_list)
    prediction = service.predict(data)
    return prediction
@pipeline(enable_cache=False, settings={"docker": docker_settings})
def continuous_deployement_pipeline(
    data_path: str,
    min_accuracy: float = 0,
    workers: int = 1,
    timeout: int = DEFAULT_SERVICE_START_STOP_TIMEOUT,
):
    df = ingest_df(data_path=data_path)
    X_train, X_test, y_train, y_test = clean_df(df)
    model = train_model(X_train, X_test, y_train, y_test)
    r2, rmse, mse = evaluion_model(model, X_test, y_test)
    deployment_decision = deployment_trigger(accuracy=r2)
    mlflow_model_deployer_step(
        model=model,
        deploy_decision=deployment_decision,
        workers=workers,
        timeout=timeout,
    )
@pipeline(enable_cache=False, settings={"docker": docker_settings})
def inference_pipeline(pipeline_name: str, pipeline_step_name: str):
    # Link all the steps artifacts together
    data = dynamic_importer()
    service = prediction_service_loader(
        pipeline_name=pipeline_name,
        pipeline_step_name=pipeline_step_name,
        running=False,
    )
    prediction = predictor(service=service, data=data)
    return prediction
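For context, a minimal sketch of how the two pipelines defined above might be invoked from a runner script; the click options, the import path, and the data path default are assumptions for illustration, not the actual contents of run_deployement.py:

# Hypothetical runner sketch (not the real run_deployement.py).
import click
from pipelines.deployment_pipeline import (
    continuous_deployement_pipeline,
    inference_pipeline,
)

@click.command()
@click.option("--config", type=click.Choice(["deploy", "predict"]), default="deploy")
@click.option("--data-path", default="./data/olist_customers_dataset.csv")
def main(config: str, data_path: str):
    if config == "deploy":
        # Train, evaluate and (re)deploy the model with MLflow
        continuous_deployement_pipeline(data_path=data_path, min_accuracy=0)
    else:
        # Run a batch prediction against the deployed service
        inference_pipeline(
            pipeline_name="continuous_deployement_pipeline",
            pipeline_step_name="mlflow_model_deployer_step",
        )

if __name__ == "__main__":
    main()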
Got this error while running:
python run_deployement.py --config deploy