[BUG]: service daemon is not running #2369
Comments
@Ga0512 Try setting the env variable. Also, can you tell us what Python version, ZenML version, MLflow version, and OS you're on? Some code to replicate would also be nice. Thanks! |
Hey! Python 3.11.3. The file I run is this; in this case, I run python run_deployment.py --config deploy inside a virtual environment:
The pipelines.deployment module:
|
@Ga0512 Unfortunately MLflow deployment isn't supported yet on Windows |
I have the same issue, but I am running this on macOS. Is there any solution to this? Python version 3.9.18, catboost 1.0.5 |
Are you doing the freeCodeCamp MLOps course? (https://www.youtube.com/watch?v=-dJPoLm_gtE) The instructor used a Mac throughout the course and managed to overcome this problem. |
I am indeed doing that course and have tried many times to solve it, but I still cannot manage to do it. I might not have understood something the instructor did, but I think I did everything he did and yet I cannot deploy it. |
Could you try replacing the requirements.txt file contents with this:
catboost==1.0.4
joblib>=1.1.0
lightgbm==4.1.0
optuna==2.10.0
streamlit==1.29.0
xgboost==2.0.3
markupsafe==1.1.1
zenml>=0.52.0
scikit-learn>=1.3.2
altair
Then reinstall the packages (pip install -r requirements.txt etc. in a fresh env), then zenml disconnect and zenml down, and then try zenml up again? |
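For reference, a minimal sketch of that reinstall-and-reset cycle, assuming a fresh virtual environment (the .venv name is illustrative, not from the thread):

```bash
# Create and activate a fresh virtual environment (name is illustrative)
python -m venv .venv
source .venv/bin/activate

# Reinstall the pinned requirements
pip install -r requirements.txt

# Disconnect from any ZenML server, stop the local daemon, and bring it back up
zenml disconnect
zenml down
zenml up
```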
Hello. First of all, thank you for replying.
1.1) I did try to install those versions (first with pip install -r requirements.txt; I also tried mlflow == 2.10.2) and it did not work.
1.2) Then I tried installing them one by one and also could not do it. Pip did not let me install those versions.
2. I did the zenml disconnect, zenml down, zenml up cycle many times and never got it to work.
3. I tried creating different stacks, experiment trackers and model deployers and setting them as the active ones. I tried this many times.
4. Something that seemed to work, though I am not entirely sure, was appending these two lines of code to the .zshrc file, as suggested in https://stackoverflow.com/questions/52671926/rails-may-have-been-in-progress-in-another-thread-when-fork-was-called:
% vim ~/.zshrc
## for MLOps deployment
export DISABLE_SPRING=true
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
% source ~/.zshrc
Then I created a new stack, experiment-tracker and model-deployer and set them (a sketch of those commands follows this comment). I am still not sure which piece made it work. I have not finished the course (almost done now), but so far it seems to be working, or at least it is not displaying any errors.
Note: I found that Stack Overflow post because the ZenML logs were giving me an error similar to what one of the users in that post was having. This was a copy from that Stack Overflow post:
objc[81924]: +[__NSPlaceholderDictionary initialize] may have been in progress in another thread when fork() was called.
Side Note: |
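For readers following along, a minimal sketch of that stack re-creation, assuming ZenML's MLflow experiment tracker and model deployer flavors and a default local orchestrator and artifact store; the component and stack names (mlflow_tracker, mlflow_deployer, mlflow_stack) are illustrative, not taken from the thread:

```bash
# Register an MLflow experiment tracker and model deployer (names are illustrative)
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml model-deployer register mlflow_deployer --flavor=mlflow

# Register a stack that uses them alongside the default orchestrator and
# artifact store, and set it as the active stack
zenml stack register mlflow_stack \
    -o default -a default \
    -e mlflow_tracker -d mlflow_deployer \
    --set
```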
Correct. That's also something we do in our CI to allow things to work on
some Mac environments. I'll add something to our docs to that effect. It
seems like we should make that clear.
|
Thank you. That would be really helpful. Just a question, now that this seemed to be the solution:
## for MLOps deployment
export DISABLE_SPRING=true
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Do we need to use both of these, or which one is the one that works? |
For Macs, I think the OBJC_DISABLE_INITIALIZE_FORK_SAFETY one is the relevant setting. |
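As a quick way to test the effect of only that variable, it can be set for a single run instead of being added to .zshrc (the run command here is the one used earlier in the thread):

```bash
# Apply the macOS fork-safety override only for this invocation
OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES python run_deployment.py --config deploy
```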
Good to know. Thank you for your help. |
Contact Details [Optional]
[email protected]
System Information
What happened?
Hi!
I'm trying to deploy my model using MLFlowDeploymentService from zenml.integrations.mlflow.services, but I'm getting this error message:
RuntimeError: Failed to start service MLFlowDeploymentService[e55e97f5-1fc7-49ac-9158-5de4e1e1a81d] (type: model-serving, flavor: mlflow)
Administrative state:
active
Operational state:
inactive
Last status message: 'service daemon is not running'
For more information on the service status, please see the following log file:
C:\Users\edney\AppData\Roaming\zenml\local_stores\3e2793a0-8446-4b32-9980-89ace8642081\e55e97f5-1fc7-49ac-9158-5de4e1e1a81d\service.log
* Note: service.log is empty.
Relevant log output
RuntimeError: Failed to start service MLFlowDeploymentService[e55e97f5-1fc7-49ac-9158-5de4e1e1a81d] (type: model-serving, flavor: mlflow)
Administrative state:
active
Operational state:
inactive
Last status message: 'service daemon is not running'
For more information on the service status, please see the following log file:
C:\Users\edney\AppData\Roaming\zenml\local_stores\3e2793a0-8446-4b32-9980-89ace8642081\e55e97f5-1fc7-49ac-9158-5de4e1e1a81d\service.log