refactor: replace exclusive locks with shared locks in CInstantSendDb #6963
base: develop
Conversation
✅ No Merge Conflicts Detected: This PR currently has no conflicts with other open PRs.
 size_t CInstantSendDb::GetInstantSendLockCount() const
 {
-    LOCK(cs_db);
+    READ_LOCK(cs_db);
Is the internal db object (leveldb::DB* pdb) ready for parallel reads?
LevelDB appears to say it is internally safe: https://github.com/google/leveldb/blob/main/doc/index.md#concurrency
GPT agrees:
'''
Looked at dbwrapper.h to confirm how CDBWrapper uses LevelDB (leveldb::DB* pdb).
Yes: leveldb::DB supports concurrent reads. Multiple threads can call DB::Get and create/use independent iterators concurrently without external locking.
Don’t share iterators across threads. Each thread must use its own leveldb::Iterator.
Reads and writes may run concurrently; writes are serialized internally. For a consistent view across keys during concurrent writes, use snapshots (ReadOptions::snapshot).
In our wrapper, pdb->Get(...), pdb->NewIterator(iteroptions), and pdb->GetApproximateSizes(...) map directly to LevelDB calls and are safe for parallel reads. The shared readoptions/iteroptions structs are read-only.
Caveat: while LevelDB reads are thread-safe, our higher-level caches (e.g., unordered_lru_cache, Uint256LruHashMap) are not thread-safe on their own. They still need protection by cs_db even during read paths that update the caches.
'''
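For reference, here is a minimal standalone sketch of the LevelDB patterns the quote describes: concurrent Get() calls, one iterator per thread, and ReadOptions::snapshot for a consistent multi-key view. This is not code from the Dash wrapper; the database path and keys are made up for illustration.

```cpp
// Illustrative only: concurrent reads, per-thread iterators, and a snapshot.
#include <cassert>
#include <string>
#include <thread>

#include <leveldb/db.h>

int main()
{
    leveldb::DB* db = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status status = leveldb::DB::Open(options, "/tmp/example-db", &db);
    assert(status.ok());

    db->Put(leveldb::WriteOptions(), "islock:1", "value1");

    // Concurrent reads: each thread issues its own Get() and owns its own
    // iterator; no external locking around the leveldb::DB object is needed.
    auto reader = [db]() {
        std::string value;
        db->Get(leveldb::ReadOptions(), "islock:1", &value);

        leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
        for (it->SeekToFirst(); it->Valid(); it->Next()) {
            // Inspect it->key() / it->value() ...
        }
        delete it; // iterators must not be shared across threads
    };
    std::thread t1(reader), t2(reader);
    t1.join();
    t2.join();

    // Snapshot: a consistent view across keys even if writes happen meanwhile.
    const leveldb::Snapshot* snap = db->GetSnapshot();
    leveldb::ReadOptions snap_opts;
    snap_opts.snapshot = snap;
    std::string v;
    db->Get(snap_opts, "islock:1", &v);
    db->ReleaseSnapshot(snap);

    delete db;
    return 0;
}
```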
Yes, I am reviewing instantsend/db just in case we can drop cs_db somewhere completely.
It does not seem easy because islockCache is used almost everywhere.
Feasible candidates are:
- CInstantSendDb::RemoveInstantSendLock in the keep_cache=true case
- CInstantSendDb::WriteInstantSendLockMined
Also see #6964
Walkthrough
This change optimizes concurrency in the instantsend database layer by converting the primary synchronization primitive from an exclusive Mutex to a SharedMutex, so that read paths can proceed in parallel while writes stay exclusive.
Sequence Diagrams
sequenceDiagram
participant T1 as Thread 1
participant T2 as Thread 2
participant DB as InstantSend DB<br/>(SharedMutex)
rect rgb(200, 220, 255)
Note over T1,DB: BEFORE: Exclusive Access
T1->>DB: LOCK(cs_db) — Query
Note over DB: Exclusive lock held
T2->>DB: LOCK(cs_db) — Query (blocked)
T1->>DB: Release lock
T2->>DB: Acquires lock, Query executes
end
rect rgb(220, 250, 220)
Note over T1,DB: AFTER: Shared Reads + Exclusive Writes
T1->>DB: READ_LOCK(cs_db) — Query
Note over DB: Shared lock held
T2->>DB: READ_LOCK(cs_db) — Query (concurrent)
Note over DB: Both threads read simultaneously
par
T1->>DB: Release read lock
T2->>DB: Release read lock
end
T1->>DB: LOCK(cs_db) — Write
Note over DB: Exclusive lock for mutation
T1->>DB: Release lock
end
GitHub Actions: Clang Diff Format Check, src/instantsend/db.h:
[error] 73-78: Clang format differences found. The diff tool reported formatting differences and the step exited with code 1. Run clang-format/clang-format-diff.py to fix formatting in this file.
 void WriteBlockInstantSendLocks(const gsl::not_null<std::shared_ptr<const CBlock>>& pblock, gsl::not_null<const CBlockIndex*> pindexConnected) EXCLUSIVE_LOCKS_REQUIRED(!cs_db);
 void RemoveBlockInstantSendLocks(const gsl::not_null<std::shared_ptr<const CBlock>>& pblock, gsl::not_null<const CBlockIndex*> pindexDisconnected) EXCLUSIVE_LOCKS_REQUIRED(!cs_db);
-bool KnownInstantSendLock(const uint256& islockHash) const EXCLUSIVE_LOCKS_REQUIRED(!cs_db);
+bool KnownInstantSendLock(const uint256& islockHash) const LOCKS_EXCLUDED(cs_db);
Why was this changed? (Same for other similar changes below.)
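For context on this question, below is a toy sketch (not code from this PR) contrasting the two annotations as I understand Clang's thread-safety analysis: EXCLUSIVE_LOCKS_REQUIRED(!cs_db) is a negative requirement that callers must provably not hold cs_db, and it propagates up the call graph, while LOCKS_EXCLUDED(cs_db) only triggers a warning when the analyzer can see the immediate caller holding the lock. The class and method names are hypothetical, and the macros are defined locally for the sketch.

```cpp
// Toy illustration (hypothetical class, not PR code), assuming clang's
// -Wthread-safety analysis plus -Wthread-safety-negative for the negative form.
#include <mutex>
#include <string>

#if defined(__clang__)
#define CAPABILITY(x) __attribute__((capability(x)))
#define EXCLUSIVE_LOCKS_REQUIRED(...) __attribute__((exclusive_locks_required(__VA_ARGS__)))
#define LOCKS_EXCLUDED(...) __attribute__((locks_excluded(__VA_ARGS__)))
#else
#define CAPABILITY(x)
#define EXCLUSIVE_LOCKS_REQUIRED(...)
#define LOCKS_EXCLUDED(...)
#endif

class CAPABILITY("mutex") Mutex : public std::mutex {};

class ToyDb
{
    mutable Mutex cs_db;

public:
    // Negative requirement: every annotated caller must be provably *not*
    // holding cs_db, and the requirement propagates through the call graph.
    bool KnownLockStrict(const std::string& hash) const EXCLUSIVE_LOCKS_REQUIRED(!cs_db);

    // Weaker hint: the analyzer warns only when it can see the immediate
    // caller holding cs_db at the call site; callers further up are not
    // required to state anything about cs_db.
    bool KnownLockRelaxed(const std::string& hash) const LOCKS_EXCLUDED(cs_db);
};
```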
 InstantSendLockPtr CInstantSendDb::GetInstantSendLockByHashInternal(const uint256& hash, bool use_cache) const
 {
-    AssertLockHeld(cs_db);
+    AssertSharedLockHeld(cs_db);
Can you actually do this? We update islockCache in this method.
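To illustrate the concern, here is a hypothetical sketch (names made up, not PR code) of a read path that also populates a cache: mutating an unsynchronized container while only a shared lock is held is a data race, since several readers can be inside concurrently. One possible fix, shown below, is to drop the shared lock and take the exclusive lock just for the cache insert; keeping the exclusive lock throughout, or giving the cache its own mutex, are alternatives.

```cpp
// Hypothetical sketch of the hazard: a "read" path that also fills a cache.
#include <map>
#include <optional>
#include <shared_mutex>
#include <string>

class LockDbSketch
{
    mutable std::shared_mutex cs_db;
    // Not thread-safe on its own, like the higher-level caches in the real class.
    mutable std::map<std::string, std::string> cache;

public:
    std::optional<std::string> GetByHash(const std::string& hash) const
    {
        {
            std::shared_lock<std::shared_mutex> read_lock(cs_db);
            auto it = cache.find(hash);
            if (it != cache.end()) return it->second;
            // Inserting here would be a data race: many readers may hold the
            // shared lock at once, and std::map mutation is not thread-safe.
        }
        std::string value = "...loaded from disk...";  // stand-in for a DB read

        // One fix: take the exclusive lock just for the cache insert.
        std::unique_lock<std::shared_mutex> write_lock(cs_db);
        cache.emplace(hash, value);  // no-op if another thread inserted first
        return value;
    }
};
```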
 uint256 CInstantSendDb::GetInstantSendLockHashByTxidInternal(const uint256& txid) const
 {
-    AssertLockHeld(cs_db);
+    AssertSharedLockHeld(cs_db);
Same, but for txidCache.
Issue being fixed or feature implemented
cs_db isn't heavily contended (only a few percent of overall contention time, around 3-4% of contentions), but each of the spots where it is contended could take a read lock instead. This should reduce contention over cs_db to near zero. While this isn't a huge gain, it's valuable for avoiding context switches or stalls in the islock hot loop.
What was done?
Updated CInstantSendDb to use shared locks instead of exclusive locks for read operations, improving concurrency. This change affects methods such as KnownInstantSendLock, GetInstantSendLockCount, and others, ensuring that multiple threads can read from the database simultaneously without blocking each other. The cs_db mutex has also been changed to a SharedMutex to support this new locking strategy.
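As a rough illustration of the pattern, here is a standalone sketch using std::shared_mutex and std::shared_lock directly rather than the project's SharedMutex, READ_LOCK, and LOCK macros (which are assumed to behave analogously); the class and method names are illustrative, not the actual CInstantSendDb code.

```cpp
// Illustrative sketch: shared locks for read paths, exclusive lock for writes.
#include <cstddef>
#include <mutex>
#include <set>
#include <shared_mutex>
#include <string>

class InstantSendDbSketch
{
    mutable std::shared_mutex cs_db;  // counterpart of the SharedMutex cs_db
    std::set<std::string> locks;      // stand-in for the cached/on-disk state

public:
    // Read path: many threads may hold the shared lock at the same time,
    // so queries no longer serialize each other.
    bool KnownLock(const std::string& hash) const
    {
        std::shared_lock<std::shared_mutex> read_lock(cs_db);
        return locks.count(hash) > 0;
    }

    size_t GetLockCount() const
    {
        std::shared_lock<std::shared_mutex> read_lock(cs_db);
        return locks.size();
    }

    // Write path: still takes the exclusive lock, blocking all readers and
    // writers for the duration of the mutation.
    void WriteLock(const std::string& hash)
    {
        std::unique_lock<std::shared_mutex> write_lock(cs_db);
        locks.insert(hash);
    }
};
```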
How Has This Been Tested?
Builds locally
Breaking Changes
Checklist:
Go over all the following points, and put an x in all the boxes that apply.