shuttle::thread_local isn't dropped when threads exit the scheduler #86

Closed
chc4 opened this issue Nov 16, 2022 · 4 comments · Fixed by #88

Comments

@chc4

chc4 commented Nov 16, 2022

I have a crate that sets up cross-thread channels via thread_local and lazy_static. Shuttle reports a spurious deadlock, however, because it only drops a thread_local value when its thread actually exits, not when the thread "exits" the thread scheduler (such as hitting the end of shuttle::check_random(|| { })). I rely on the thread_local's Drop impl to make the last sender of a channel go away and unblock another thread's receiver, so that thread can exit once there are no more clients. Because the thread_locals are never dropped, the sender is never removed, and the receiver is reported as deadlocked, even though that can't happen in normal operation.

I have a manual workaround: I RefCell::take() the channel at the end of my shuttle tests to drop the sender explicitly. It would be nice not to need this, though - it took me more than a little while to figure out what was going on and realize the problem, and the same thing may hit other people.
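
A minimal sketch of that workaround (the SENDER thread_local and the unit payload type are illustrative assumptions, not the crate's real code, and it assumes shuttle::sync::mpsc mirrors std::sync::mpsc): take the stashed value out of the thread_local at the end of the test body, so the sender is dropped even though Shuttle never runs this thread's TLS destructor.

    #[test]
    fn workaround_sketch() {
        use core::cell::RefCell;
        use shuttle::sync::mpsc::{channel, Sender};

        shuttle::thread_local! {
            static SENDER: RefCell<Option<Sender<()>>> = RefCell::new(None);
        }

        shuttle::check_random(|| {
            // ... the real test body creates a channel, spawns worker threads,
            // and stashes the Sender in the thread_local ...
            let (tx, _rx) = channel::<()>();
            SENDER.with(|s| *s.borrow_mut() = Some(tx));

            // Manual workaround: take the Sender out and drop it by hand, since
            // the thread_local itself isn't dropped when this thread "exits"
            // the scheduler.
            SENDER.with(|s| { s.take(); });
        }, 100);
    }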

@jamesbornholt
Member

Hmm, this is supposed to work — we go to some trouble to run thread-local destructors when a thread finishes, and we have a test for thread-locals dropping at the right time. I'll try to dig into this a bit more. Thanks for the report!

@chc4
Author

chc4 commented Nov 18, 2022

    #[test]
    fn shuttle_issue() {
        use shuttle::sync::*;
        use core::cell::Cell;
        // a lazy_static here instead also works
        let MU: &'static Mutex<usize> = Box::leak(Box::new(Mutex::new(0)));
        shuttle::thread_local! {
            static STASH: Cell<Option<MutexGuard<'static, usize>>> = Cell::new(None);
        }
        shuttle::check_random(move || {
            shuttle::thread::spawn(move || {
                STASH.with(|s| s.set(Some(MU.lock().unwrap())));
            });
            STASH.with(|s| s.set(Some(MU.lock().unwrap())));
        }, 100);
    }

This is maybe a simpler reproduction of the reported deadlock; the std variant of the same test function doesn't deadlock. Maybe this is out of scope, but it turns out that if you remove the STASH access in the spawned thread, it actually panics instead:

---- test::test::shuttle_issue stdout ----
thread 'test::test::shuttle_issue' panicked at 'assertion failed: (left != right)
left: Finished,
right: Finished', /home/charlie/.cargo/registry/src/github.com-1ecc6299db9ec823/shuttle-0.4.1/src/runtime/execution.rs:397:17

The root of the problem is maybe that the Mutex (and the channel, in my original example) outlive the shuttle scheduler, since the lazy_static and the 'static Mutex escape its closure... If shuttle provided its own lazy_static that is re-initialized and dropped by the thread scheduler, so the concurrency primitives can't escape, this would probably be less of an issue - though maybe that's just a poor man's version of #81.

@chc4
Author

chc4 commented Nov 18, 2022

Actually, you can probably just make an mpsc channel inside the shuttle::check_random scope, hand the receiver to a scoped thread to block on recv, and put the sender in a thread_local. In that case no concurrency primitive has a 'static lifetime or escapes the closure...
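
A rough sketch of that shape, as a hypothetical repro rather than the exact test (it uses a plain spawned thread instead of a scoped one, and assumes shuttle::sync::mpsc mirrors std::sync::mpsc): the channel is created inside the check_random closure, a spawned thread blocks on recv(), and the only sender is stashed in the main thread's thread_local. If the main thread's TLS destructors run when it finishes, the Sender drops and recv() returns Err, so the spawned thread can exit; if they don't, the spawned thread stays blocked and Shuttle reports a deadlock.

    #[test]
    fn channel_in_thread_local_sketch() {
        use core::cell::RefCell;
        use shuttle::sync::mpsc;

        shuttle::thread_local! {
            static SENDER: RefCell<Option<mpsc::Sender<()>>> = RefCell::new(None);
        }

        shuttle::check_random(|| {
            let (tx, rx) = mpsc::channel::<()>();
            // The only Sender lives in the main thread's thread_local.
            SENDER.with(|s| *s.borrow_mut() = Some(tx));

            shuttle::thread::spawn(move || {
                // Blocks until every Sender has been dropped.
                let _ = rx.recv();
            });
            // The main thread's closure ends here; whether the spawned thread
            // ever unblocks depends on whether SENDER's destructor runs now.
        }, 100);
    }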

jamesbornholt added a commit to jamesbornholt/shuttle that referenced this issue Nov 20, 2022
We weren't running the same TLS destructor logic for the main thread as
all other threads. This meant a thread-local on the main thread wouldn't
be dropped until the entire test finished, causing spurious deadlocks or
other confusion.

Fixes awslabs#86.
@jamesbornholt
Member

I think #88 will fix this. We weren't running the thread-local destructors for the main thread until the entire test ended. In your example that caused both spurious deadlocks (if the main thread held the lock) and the weird Finished assertion failure (if the main thread tried to drop the lock, which happened after the entire test finished). Thanks for the simple reproducer!

jorajeev pushed a commit that referenced this issue Nov 20, 2022
jorajeev pushed a commit that referenced this issue Feb 29, 2024