Add single-threaded mode #214
I think this would be a good addition for single-threaded games where the thread spins continuously.
This is clearly the main drawback of this approach. The introduced delay will depend on the polling period achieved by the user. This may cause losses depending on the size of the UDP buffer (which is constrained by OS settings). As you said, the sending pattern will mostly determine whether this is an issue. If it features big bursts, you may run into trouble; otherwise you should be fine.
Okay, cool. I think the solution would be to just document the downsides appropriately. The OS might also have some facility to tell you if you're dropping packets en masse, which would be a nice warning, but I'd have to check.
This is how I assumed it was working; I got confused when I noticed there was no branching on it. I looked into implementing this, and I think the simplest way forward would be to augment the existing implementation.
You'd indeed need to add a new function to the API, for instance a dedicated polling function.
I think it should be separate (in a dedicated mode).
Yes.
In the absence of thread context, you'd need to keep the relevant state yourself.
Thanks for the info, it really helped me get a starting point to work from. I'm partly through the implementation. I was curious about one thing: it seems like locks wouldn't be needed for this new mode, since everything (send/recv) is done on one thread. However, looking closer at the codebase, resolving hostnames was moved to a background thread since it could block. I think an atomic bool could be used there instead of a lock, and then I could make that path lock-free.
Well, after thinking about this for a while, making the code lock-free there would be doable, just a bit annoying. I think I could be lazy and basically add a

```c
typedef struct conn_mode_entry {
	int (*registry_init_func)(conn_registry_t *registry, udp_socket_config_t *config);
	// ...
	int flags;
	mutex_t mutex;
	conn_registry_t *registry;
} conn_mode_entry_t;
```

to each concurrency mode, then have either...
... actually, I think I've got it figured out. I should have a PR soon.
There is indeed an issue with server name resolution, as it may be run in a dedicated thread. The issue is not only with synchronization inside libjuice, but also that the gathering done callback may be triggered from that thread. Making the resolution always synchronous is indeed a quick fix, but the behavior is not optimal. Another simple approach could be never triggering callbacks like gathering done in the resolver thread, but doing it in the update function instead (which is called from the user thread for user polling). In that case you'd need to keep the locking, though.

I think it might not be worth removing the locking, since the mutex overhead should be negligible if there is no contention. I guess in this use case the lock frequency will typically be around 100 Hz, maybe 1000 Hz at the maximum, while it would need to be orders of magnitude higher for the performance impact to become significant on a typical OS. The syscalls for sending and receiving should become a performance bottleneck way before the mutexes.
Nice job, I'll gladly review it.
Thanks for getting back to me. I agree that making locking calls ineffective for this mode would be confusing, and could lead to bugs if more concurrent paths are added internally to libjuice. I'm not sure I understand the point you're making about the gathering done callback. Are you referring to if I attempted a lock-free but still concurrent refactor? In any case it doesn't really matter: I will keep the locking in since, as you say, it's a small overhead for user-mode locks with little contention. On a related note, is there any reason why you're using Win32 mutexes rather than critical sections?
...Actually, it's getting a little tricky to add locks back in. I'm very tempted to go back to the synchronous solution. The problem is that the lock can't be per-agent (too much overhead; more potential for lock-acquisition-order deadlocks), nor can it live in the registry (you end up with nearly the same thing as the existing modes).
My point is that if you keep the locks and the asynchronous name resolution as is, the gathering done callback may be called in the resolver thread instead of the user's polling thread, which the user doesn't expect. Since there won't be synchronization on the user side, it may cause a race condition in user code.
The reason is that on Windows critical sections are not recursive whereas mutexes are, if I'm not mistaken (the API design looks quite inconsistent to me compared to pthreads). libjuice uses a recursive mutex to make the API reentrant and allow calling functions from callbacks. This is not the most efficient design, but it keeps things simpler. In practice, a possible enhancement could be using critical sections for normal mutexes and Windows mutexes for recursive ones.
My understanding was that all agents shared it...
Thanks for getting back to me so quickly.
I'm still not following, unfortunately. When you say...
According to this section from MSDN, it sounds like critical sections can in fact be entered recursively by the owning thread.
For me, my whole motivation was removing that single synchronization point; otherwise I'd have just used one of the existing modes.

For the locking implementation, my current thought is:
I don't need the agent sharing part; I figured I'd describe it since I already commented about it, and it can be removed.

If we go with the synchronous non-locking option, I could make a PR on a soonish timeframe. If the locking and async name resolution is essential to the implementation, then I could just maintain this feature as a separate shallow fork for the time being, and if you or I come up with a good solution then I'll make a PR. It ultimately comes down to what you think is an acceptable implementation, since you will mostly be the one maintaining it.
Sorry if I wasn't clear. My point was simply about where the callback runs: while calling the gathering done callback from the resolver thread is currently fine, it would need to be changed in single-threaded mode. It's no big deal, as making the resolution synchronous is a way to solve the issue.
Oh, I missed that. Thanks for the details, I'll update the implementation to use critical sections on Windows then.
This sounds extremely convoluted to me. Also, I wouldn't like exposing platform-dependent internal types like mutexes in the API.
I think the synchronous non-locking option is fine. However, it is still unclear to me how you plan to implement the sets of agents. Why not simply make agents independent, each polled separately with a dedicated call?
Okay, I see now. Yeah, the callback definitely shouldn't be called from another thread, since that would partially defeat the purpose of manually polling. Thanks for explaining it clearly to me.
Cool, that definitely makes me less concerned about performance issues, since an uncontended lock is basically a counter update instead of a system call.
Yeah I am in agreement on this.
At least in the non-locking case this is easy to solve, because you just have to trust that the user doesn't access an agent concurrently or do some operation that would cause concurrent access. If you're talking about the locking case, I don't really know if you can solve it in a nice way, but it doesn't really matter, since using a single agent in this mode is enough for my use case.
Yeah, I think this would work well for my use case. I could add back locking, since a userland lock is virtually no cost and wouldn't be a global lock / single synchronization point.

Sorry it's taken a while to get back; I've been distracted by the other things I've been working on. I have a seemingly working implementation (at least for one agent). I'm not ready to merge anything yet, though, for a few reasons. I think once I finish the netcode for the project I'm working on, I'll be ready to submit a PR. Thanks for the help; I wouldn't have known how to approach implementing this without your guidance.
This would entail adding an additional mode to `juice_concurrency_t`... something like `JUICE_CONCURRENCY_MODE_POLL_AGENT_DIRECTLY`. Basically, this would mean the user of libjuice would be in charge of polling the agent to get received packets.

Here are some of the potential issues I'm aware of.

Here are some benefits: `JUICE_CONCURRENCY_MODE_POLL` will solve this problem as well. After looking through the implementation, though, it seems like this behavior isn't implemented yet, so the default mode always behaves like `JUICE_CONCURRENCY_MODE_THREAD` (I might be wrong about this).

I might be missing a few things here since my knowledge of networking is incomplete. If the only thing preventing this is something like the few potential issues I listed above, then I think punting them to the user of libjuice is reasonable.