
Warnings after unit tests for dmod.core package #694

Open · robertbartel opened this issue Aug 1, 2024 · 7 comments
Labels: bug (Something isn't working), maas (MaaS Workstream)

robertbartel (Contributor) commented:

While the unit tests for dmod.core pass, some warnings appear after test completion. Besides being generally undesirable, the warnings are throwing off how ./scripts/run_tests.sh reports whether the dmod.core tests passed.

Current behavior

When running tests with ./scripts/test_package.sh, you will get one of the following outputs (note that the exact number of leaked semaphores varies per attempt):

rbartel@friday dmod % ./scripts/test_package.sh python/lib/core
Detected default virtual env directory: /Users/rbartel/Developer/noaa/dmod/venv
Activating virtual environment from /Users/rbartel/Developer/noaa/dmod/venv
===========================================================================
....................................................................................
----------------------------------------------------------------------
Ran 84 tests in 4.446s

OK
===========================================================================
Deactiving active virtual env at /Users/rbartel/Developer/noaa/dmod/venv

rbartel@friday dmod % /opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 7 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

rbartel@friday dmod %

OR

rbartel@friday dmod % ./scripts/test_package.sh python/lib/core
Detected default virtual env directory: /Users/rbartel/Developer/noaa/dmod/venv
Activating virtual environment from /Users/rbartel/Developer/noaa/dmod/venv
===========================================================================
....................................................................................
----------------------------------------------------------------------
Ran 84 tests in 4.571s

OK
===========================================================================
Deactiving active virtual env at /Users/rbartel/Developer/noaa/dmod/venv

rbartel@friday dmod % /opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 9 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/opt/local/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:267: UserWarning: resource_tracker: '/mp-u4dy24ud': [Errno 2] No such file or directory
  warnings.warn('resource_tracker: %r: %s' % (name, e))

Expected behavior

Tests should simply pass without the warnings.

Steps to replicate behavior (include URLs)

  1. Run ./scripts/update_package.sh to ensure the latest package versions are installed.
  2. Run ./scripts/test_package.sh python/lib/core one or more times.
robertbartel added the bug (Something isn't working) and maas (MaaS Workstream) labels on Aug 1, 2024
robertbartel (Contributor, Author) commented:

@christophertubbs, I'm guessing this has to do with the recent shared communicator library changes and that you'll be most likely to quickly know what's going on.

christophertubbs (Contributor) commented:

The semaphore thing is a known issue in Python itself. I don't remember which version it goes away in.

aaraney (Member) commented Aug 1, 2024:

# test.py
import multiprocessing
import time
import unittest

# Module-level lock: under the "spawn" start method (the macOS default), each child
# process re-imports this module and therefore creates its own named semaphore.
rlock = multiprocessing.RLock()

def foo():
    time.sleep(1)

class Test(unittest.TestCase):
    def test_it(self):
        proc = multiprocessing.Process(target=foo)
        proc.start()
        proc.join()

python -m unittest test.py

The issue is here. I think I glossed over this and thought it had been addressed in #652. See comment.

This is not an issue if you just fork. But since the default mp context on macOS is spawn, this causes a resource leak; you get the same thing if the context is forkserver.
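
A minimal sketch of that point, assuming a hypothetical standalone file repro_context.py (not DMOD code): the warning follows the chosen start method rather than the operating system.

# repro_context.py (hypothetical example)
import multiprocessing
import time

# Module-level lock, as in test.py above. Under "spawn" or "forkserver" the child
# re-imports this module and creates a second named semaphore; under "fork" the
# child inherits the parent's semaphore instead.
rlock = multiprocessing.RLock()

def foo():
    time.sleep(1)

if __name__ == "__main__":
    # Pick the start method explicitly instead of relying on the platform default.
    ctx = multiprocessing.get_context("fork")    # no leaked-semaphore warning expected
    # ctx = multiprocessing.get_context("spawn") # typically reproduces the UserWarning
    proc = ctx.Process(target=foo)
    proc.start()
    proc.join()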

robertbartel (Contributor, Author) commented:

Given that it is simply an annoyance, is macOS-only, and seems to be a subtle, difficult item to address, I'm moving this out of our immediate TODOs. We should fix it eventually if we can, but we can live with it for now.

aaraney (Member) commented Aug 1, 2024:

It's not macOS-only; it's just that the default mp context on macOS is spawn. On Linux it is fork, but this will change in the future.
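
A quick illustrative check of the platform default, assuming Python 3.11:

import multiprocessing

# Reports the default start method for the current platform:
# "spawn" on macOS and Windows, "fork" on Linux as of Python 3.11.
print(multiprocessing.get_start_method())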

aaraney (Member) commented Aug 1, 2024:

Just for history's sake, the reason this is an issue is because a multiprocessing.RLock() is created by each process (spawn = fork() -> exec()). multiprocessing.RLock() objects are backed by named semaphores. If fork were used, the same named semaphore would be shared by the parent and all child processes.

spawn creates a separate resource_tracker process that tracks resources, including semaphores. The resource_tracker is tracking the RLocks created in each child process, and I'm not sure whether the underlying named semaphore is not being cleaned up properly or whether, since that work is likely run as a finalizer, the resource_tracker is not being updated to reflect that the resource was relinquished.
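
One possible workaround, sketched under the assumption that the lock is only needed by processes the parent launches explicitly (hypothetical workaround_sketch.py, not the fix adopted in DMOD): create the lock once in the parent and hand it to children, so a spawned child re-importing the module does not allocate its own named semaphore.

# workaround_sketch.py (hypothetical)
import multiprocessing

def worker(lock):
    # The child receives the parent's lock (and its underlying named semaphore)
    # instead of creating one of its own at import time.
    with lock:
        pass

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    lock = ctx.RLock()  # created once, in the parent only
    proc = ctx.Process(target=worker, args=(lock,))
    proc.start()
    proc.join()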

aaraney self-assigned this on Aug 9, 2024
christophertubbs (Contributor) commented:

This issue might be solved by the stream conversion ticket. I believe I removed the rlock.
