Mutexmap #95
Conversation
Signed-off-by: Elena Kolevska <[email protected]>
In general, the code seems good... But what is this going to be used for? We already have other libraries in the codebase that support thread-safe maps, and do so in a more performant way. For example, we already use haxmap.
Co-authored-by: Cassie Coyle <[email protected]>
Signed-off-by: Elena Kolevska <[email protected]>
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main      #95      +/-   ##
==========================================
- Coverage   79.94%   79.83%   -0.11%
==========================================
  Files          56       62       +6
  Lines        4382     4760     +378
==========================================
+ Hits         3503     3800     +297
- Misses        733      805      +72
- Partials      146      155       +9

☔ View full report in Codecov by Sentry.
Signed-off-by: Elena Kolevska <[email protected]>
Thanks @ItalyPaleAle, haxmap looks great, I wasn't aware of it. I updated the PR in runtime to use it instead and removed the atomic map from kit. I'd like to keep the mutexmap though, because it has some more specialised methods that keep the code in runtime cleaner.
.gitignore (Outdated)
**/.DS_Store
.idea
.vscode
.vs
nit: add a trailing newline
Signed-off-by: Elena Kolevska <[email protected]>
Co-authored-by: Josh van Leeuwen <[email protected]>
Signed-off-by: Elena Kolevska <[email protected]>
Just keep in mind one important thing about haxmap: if an item is added to the map while the map is being resized internally, the item shows up only after the resizing is done. See here. Would this be a problem?
Signed-off-by: Elena Kolevska <[email protected]>
Thanks for pointing that out. Does this also relate to GetOrSet? Will it block until the resize is complete?
It won't block. Haxmap allocates a block of memory to store the items; if more items are needed than the available memory can hold, it has to allocate a new block, copy the items over, and then move the pointer to the new memory. Haxmap doesn't block during this. As the docs say:
If you know in advance how many items you'll have, you can pre-allocate the haxmap with enough capacity for them.
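For illustration, a minimal sketch of that pre-allocation, assuming the alphadose/haxmap API where New accepts an optional initial capacity (the capacity of 128 here is an arbitrary example):

```go
package main

import (
	"fmt"

	"github.com/alphadose/haxmap"
)

func main() {
	// Pre-allocate room for ~128 entries so the map is unlikely to
	// resize (and re-copy its items) while under concurrent load.
	m := haxmap.New[string, int](128)

	m.Set("namespace-a", 1)
	if v, ok := m.Get("namespace-a"); ok {
		fmt.Println("namespace-a =", v)
	}
}
```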
Unfortunately, we don't know the number of elements; it's going to depend on how users use the system (specifically, for now, it will be the number of namespaces). I suppose we could set the initial map size to a higher starting number of ~50, or make the size a placement service startup parameter, but I don't really like either of these options.

I'm concerned that a non-replicated data structure can't give me read-your-writes consistency. I didn't dig deep into the paper and the inner structure, and I'm sure that if it were easy, it would have been done, but this doesn't look like an unsolvable problem. For example, the resize could be triggered in the background when the size reaches X% of full capacity, and even if we still get to a point where a write has to happen before the resize completes, the write operation shouldn't be acknowledged to the user before the resize has completed, for correctness.

Since we can have multiple sidecars from the same namespace trying to send their updates at the same time, we might indeed have a situation where this issue affects us. Let me know if I'm understanding this correctly, or missing something.
Looking at this issue now, there also seems to be a race on the
+1 to use sync.Mutex map.
Elena needs to undo the changes to move back to sync.Mutex map.
This reverts commit 20ca9ad.
Signed-off-by: Elena Kolevska <[email protected]>
Thanks @artursouza, I reverted it.
Signed-off-by: Elena Kolevska <[email protected]>
@artursouza you merged this PR when there was only 1 vote. The comment should not have been resolved, as the conversation was not over. If the problem was the need to support both int64 and uint32, you don't need to duplicate the code. You can still use the atomic pattern, internally using only int64.
Can you provide the code for that? I did hit limitations in the casting capabilities of Go - maybe there is a viable path. I am happy to take another PR on this, but I don't think this possible improvement justifies blocking progress on 1.14, given the deadline.
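For the record, here is a minimal sketch of what that atomic pattern could look like, assuming it's acceptable to widen uint32 values to an int64 internally; the names are hypothetical and this is an illustration, not code from the PR:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Counter stores either int64 or uint32 values behind a single
// atomic.Int64, so the atomic logic is written only once instead
// of being duplicated for each numeric type.
type Counter[T int64 | uint32] struct {
	v atomic.Int64
}

func (c *Counter[T]) Load() T       { return T(c.v.Load()) }
func (c *Counter[T]) Store(val T)   { c.v.Store(int64(val)) }
func (c *Counter[T]) Add(delta T) T { return T(c.v.Add(int64(delta))) }

func main() {
	var a Counter[uint32]
	a.Add(7)
	fmt.Println(a.Load()) // 7

	var b Counter[int64]
	b.Store(-3)
	fmt.Println(b.Load()) // -3
}
```

The conversions compile because every type in the constraint's type set (int64 and uint32) is convertible to and from int64.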
Description
This PR adds a map of mutexes that provides an inner and outer locking mechanism, to support keeping separate locks on dynamic objects, as is the case with namespaced stream pools, for example.
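As a rough illustration of the inner/outer locking idea (a minimal sketch under assumed names, not the PR's actual implementation):

```go
package mutexmap

import "sync"

// MutexMap keeps one mutex per key: the outer RWMutex guards the
// map itself, while each inner mutex guards a single dynamic
// object (e.g. one namespaced stream pool).
type MutexMap[K comparable] struct {
	outer sync.RWMutex
	inner map[K]*sync.Mutex
}

func NewMutexMap[K comparable]() *MutexMap[K] {
	return &MutexMap[K]{inner: make(map[K]*sync.Mutex)}
}

// Lock acquires the per-key mutex, creating it on first use.
func (m *MutexMap[K]) Lock(key K) {
	m.outer.RLock()
	mu, ok := m.inner[key]
	m.outer.RUnlock()
	if !ok {
		m.outer.Lock()
		// Re-check under the write lock: another goroutine may have
		// created the mutex between RUnlock and Lock above.
		if mu, ok = m.inner[key]; !ok {
			mu = &sync.Mutex{}
			m.inner[key] = mu
		}
		m.outer.Unlock()
	}
	mu.Lock()
}

// Unlock releases the per-key mutex.
func (m *MutexMap[K]) Unlock(key K) {
	m.outer.RLock()
	mu, ok := m.inner[key]
	m.outer.RUnlock()
	if ok {
		mu.Unlock()
	}
}
```

The outer lock is held only briefly while looking up or creating an entry, so contention on one key never blocks operations on other keys.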
Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list: