The recommended solution would be to add something like

client.run(logging_on)

after creating the client, but that doesn't work, because logging_on is a context manager (actually a generator), which leads to dask.distributed failures with TypeError: cannot pickle 'generator' object. Calling satpy's debug_on instead seems to work, so it would seem this needs a functional wrapper / version of logging_on.
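A picklable setup function could look like the following sketch; the function name, the log_config argument, and the use of logging.config.dictConfig are illustrative assumptions, not trollflow2's actual API:

```python
import logging
import logging.config


def setup_worker_logging(log_config=None):
    """Apply the log configuration inside a dask worker process.

    Hypothetical plain-function counterpart to logging_on, so that it can
    be pickled and shipped to the workers with client.run().
    """
    if log_config is None:
        # Fall back to a simple root-logger setup at debug level.
        logging.basicConfig(level=logging.DEBUG)
    else:
        logging.config.dictConfig(log_config)
```

With something like this, client.run(setup_worker_logging, log_config) could configure logging on every worker, whereas the generator returned by logging_on cannot be pickled.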
Describe the bug
The log settings defined by logging_on(), and therefore by any trollflow2 process, are not inherited by tasks scheduled using dask.distributed when called inside an if __name__ == "__main__" block. This is what happens when using trollflow2 normally. As a consequence, only messages of level warning or above are displayed, and without the configured log definition.

To Reproduce
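The original reproduction script is not shown here; the following is a minimal sketch of the kind of setup described above, assuming logging_on lives in trollflow2.logging and is used as a context manager:

```python
import logging

from dask.distributed import Client
from trollflow2.logging import logging_on

logger = logging.getLogger(__name__)


def compute_something():
    # Runs in a dask worker process, which has not inherited the log
    # configuration set up by logging_on() in the main process.
    logger.debug("debug message from inside a worker task")
    return 42


if __name__ == "__main__":
    with logging_on():
        with Client() as client:
            future = client.submit(compute_something)
            logger.info("result: %s", future.result())
```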
Expected behavior
I expect all log messages to be shown, as they are when using the default scheduler.
Actual results
Full output (stderr + stdout):
For reference, when leaving out the Client() context manager, the output is as expected:

When I call logging_on() outside the if __name__ == "__main__" block, I get the log messages (and others):

Environment Info:
Additional context
From this stackoverflow question:

So to solve this, logging_on() should be called after Client().
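In other words, the log configuration has to be applied, or forwarded to the workers, only after the Client and its worker processes exist. A rough sketch of that ordering, using a hypothetical picklable setup function rather than the current generator-based logging_on:

```python
import logging

from dask.distributed import Client


def setup_worker_logging():
    # Hypothetical picklable stand-in for logging_on; the real, generator-based
    # logging_on raises TypeError: cannot pickle 'generator' object here.
    logging.basicConfig(level=logging.DEBUG)


if __name__ == "__main__":
    with Client() as client:
        # Only after the workers exist can the configuration reach them.
        client.run(setup_worker_logging)
        # ... submit the actual trollflow2 work here ...
```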