
Allow specifying number of worker threads used to run an enclave #625

Open · wants to merge 3 commits into master from arash/num-worker-threads
Conversation

@s-arash commented Jul 12, 2024

This change allows users of this library to specify the number of worker threads. This lets users limit the number of cores used by an enclave.

This is a backwards-compatible change.

arai-fortanix previously approved these changes Jul 12, 2024
}

/// Run the enclave with the provided number of worker threads.
/// # Panics
Contributor

Why is it a panic, rather than an error, if the number of worker threads is 0?

Author

I can make it an error. But my thinking is that worker_threads == 0 is a user error, a precondition violation the user has to fix, not an error that arises from running the enclave.

Another option would be to use NonZeroUsize, but it's very rarely used in Rust libraries.
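
For illustration, here is a minimal sketch of what a NonZeroUsize-based setter could look like; the type and method names are hypothetical, not this crate's actual API:

```rust
use std::num::NonZeroUsize;

// Hypothetical builder sketch (not the real EnclaveBuilder): accepting
// `NonZeroUsize` makes a zero thread count unrepresentable, so no runtime
// validation is needed for that case.
struct EnclaveBuilderSketch {
    num_worker_threads: Option<NonZeroUsize>,
}

impl EnclaveBuilderSketch {
    fn new() -> Self {
        EnclaveBuilderSketch { num_worker_threads: None }
    }

    fn num_worker_threads(mut self, n: NonZeroUsize) -> Self {
        self.num_worker_threads = Some(n);
        self
    }
}

fn main() {
    // The caller must prove the count is non-zero at the call site.
    let threads = NonZeroUsize::new(4).expect("thread count must be non-zero");
    let builder = EnclaveBuilderSketch::new().num_worker_threads(threads);
    println!("configured worker threads: {:?}", builder.num_worker_threads);
}
```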

Contributor

Libraries should try hard to avoid panics. Panics in a library should be reserved for fatal conditions the library has no way to recover from, such as internal data-structure inconsistencies that cannot be reconciled. An incorrect number of worker threads is not fatal in that way. Either the API should make it impossible to pass 0, via NonZeroUsize, or the call should return an error. The caller could then decide it wants to panic on that error, or it might retry the call with a different number of threads.

I think having an error for an invalid number of threads makes sense, as we probably want to have some upper bound on the number of threads as well. Does it really make sense for the application to request for example trillions of threads?
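
As a rough illustration of this point (a hypothetical helper, not the enclave-runner API), an error-returning design leaves the reaction entirely up to the caller:

```rust
// Hypothetical sketch of the error-returning approach: validation is reported
// as a `Result`, and the caller decides whether to abort, panic, or fall back.
fn validate_worker_threads(requested: usize) -> Result<usize, String> {
    if requested == 0 {
        return Err("`num_worker_threads` cannot be zero".into());
    }
    Ok(requested)
}

fn main() {
    match validate_worker_threads(0) {
        Ok(n) => println!("running enclave with {} worker threads", n),
        Err(e) => {
            // The caller might choose to panic here, or retry with a sane default.
            let fallback = std::thread::available_parallelism()
                .map(|n| n.get())
                .unwrap_or(1);
            eprintln!("invalid configuration ({}); falling back to {} threads", e, fallback);
        }
    }
}
```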

Author

I'm now returning an error instead of panicking when num_worker_threads is set to 0.

@aditijannu

I had been working on the CI fixes here, which you will hit in your CI job too: #624. It is not complete yet; I still need to fix one of the nitro crates used in rust-sgx.

@s-arash force-pushed the arash/num-worker-threads branch from da2817c to cc998ae on July 12, 2024 20:44
@s-arash force-pushed the arash/num-worker-threads branch from cc998ae to ea171ab on July 15, 2024 20:04
@s-arash changed the title from "Allow specifying number of worker threads when running an enclave" to "Allow specifying number of worker threads used to run an enclave" on Jul 15, 2024
@s-arash commented Jul 15, 2024

> I had been working on the CI fixes here, which you will hit in your CI job too: #624. It is not complete yet; I still need to fix one of the nitro crates used in rust-sgx.

Thanks @aditijannu. I'll rebase this PR once your PR lands.

@s-arash requested a review from Goirad on July 15, 2024 20:37
@s-arash force-pushed the arash/num-worker-threads branch from ea171ab to cbe6600 on July 18, 2024 16:31
arai-fortanix previously approved these changes Jul 18, 2024
intel-sgx/enclave-runner/src/loader.rs: two outdated review threads (resolved)
}

/// Panics if you have previously called [`arg`] or [`args`].
/// Panics if you have previously called [`arg`], [`args`], or [`num_worker_threads`].
Contributor

With so many parameters that aren't shared between Command and Library enclaves, does it maybe make sense to have separate builders for Command and Library enclaves? That way, each type of builder could only have the parameters that make sense for the type of enclave it builds.
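
For illustration only, a split along these lines might look roughly like the sketch below; the type names are invented and this is not a proposed API:

```rust
// Hypothetical sketch of separate builders per enclave type: each builder only
// exposes the options that make sense for the artifact it produces, so the
// "panics if you previously called X" caveats disappear from the docs.
struct CommandEnclaveBuilderSketch {
    args: Vec<String>,
    num_worker_threads: Option<usize>,
}

struct LibraryEnclaveBuilderSketch;

impl CommandEnclaveBuilderSketch {
    fn new() -> Self {
        CommandEnclaveBuilderSketch { args: Vec::new(), num_worker_threads: None }
    }

    // Arguments and worker threads only apply to command enclaves.
    fn arg(mut self, a: &str) -> Self {
        self.args.push(a.to_string());
        self
    }

    fn num_worker_threads(mut self, n: usize) -> Self {
        self.num_worker_threads = Some(n);
        self
    }
}

fn main() {
    let cmd = CommandEnclaveBuilderSketch::new()
        .arg("--verbose")
        .num_worker_threads(4);
    let _lib = LibraryEnclaveBuilderSketch;
    println!("command enclave: {} args, {:?} worker threads", cmd.args.len(), cmd.num_worker_threads);
}
```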

Author

I agree, but that would be a breaking change. I think it's better to release a semver-compatible version with this change and leave that kind of refactoring for a future semver-breaking release.

@@ -332,21 +342,25 @@ impl<'a> EnclaveBuilder<'a> {
    }

    pub fn build<T: Load>(mut self, loader: &mut T) -> Result<Command, anyhow::Error> {
        let num_worker_threads = self.num_worker_threads.unwrap_or_else(num_cpus::get);
        anyhow::ensure!(num_worker_threads > 0, "`num_worker_threads` cannot be zero");
        anyhow::ensure!(num_worker_threads <= 512, "`num_worker_threads` cannot exceed 512");
Member

Is this needed at all? It might be unlikely, but I don't think we should prevent people from spawning more threads than there are cores. It's just inefficient, not wrong.

Author

Right. That's a good point. But maybe we should enforce a reasonable maximum? It doesn't have to be 512, but maybe we should somehow prevent specifying something like 1 << 60 as num_worker_threads.
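
A minimal sketch of what such a cap could look like, assuming a named constant; the 64k value matches the bound mentioned in the later commit, and the names here are made up:

```rust
// Hypothetical sketch: reject absurd requests against a named upper bound
// rather than a magic number inline in `build`. The 64k figure mirrors the
// bound mentioned in the follow-up commit; the names are invented here.
const MAX_WORKER_THREADS: usize = 1 << 16; // 64k

fn check_worker_threads(requested: usize) -> Result<usize, String> {
    if requested == 0 {
        return Err("`num_worker_threads` cannot be zero".into());
    }
    if requested > MAX_WORKER_THREADS {
        return Err(format!("`num_worker_threads` cannot exceed {}", MAX_WORKER_THREADS));
    }
    Ok(requested)
}

fn main() {
    assert!(check_worker_threads(1 << 60).is_err());
    assert!(check_worker_threads(0).is_err());
    assert!(check_worker_threads(8).is_ok());
    println!("bounds check behaves as expected");
}
```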

- fixed upper bound for num_worker_threads (64k)
- typo
@s-arash force-pushed the arash/num-worker-threads branch from 4fd27d4 to 2fa7a24 on July 25, 2024 20:45