Feature Request

Hello, could your team please consider integrating Meridian with MLflow? We use MLflow to track ML model runs, hyperparameters, model scores, and model versions on the Databricks platform. For MMM, it would be useful to automatically log which controls and media channels were fed into the model, the priors used, and the model fit (R-hat, trace and density plots, R-squared), and ultimately to decide which model should be the "production" version of the MMM for the month. Having this kind of logging out of the box would reduce errors and let us scale MMM to many different markets. It would also be very useful in the experimentation phase of a new MMM, where you have to try many different priors to find a model that both converges and meets the business needs.

Here is documentation for a competitor library that has MLflow integration:
https://www.pymc-marketing.io/en/latest/api/generated/pymc_marketing.mlflow.html
https://mlflow.org/

Thank you!
Hey, yes, this would be a great enhancement. We are working on automation and Databricks infrastructure, so, as you mention, MLflow integration is essential for monitoring, versioning, and selecting the right parameters. As we intend to scale the model to 70 different scopes, this would cut months from the workload of fine-tuning all the models.
Thank you both for elaborating on your use cases and your need for MLflow integration. We will take your feedback into consideration and prioritize it for future releases. You can track previous changes to Meridian and unreleased feature updates in our Change Log.
Feel free to reach out with any further questions or suggestions about Meridian!