
perf: Parallelize saveNewNodes' DB writes with figuring out "what to write" #889

Open
wants to merge 1 commit into master

Conversation

ValarDragon (Contributor)
This PR parallelizes the SaveNode DB writes with figuring out what it is we have to write ("saveNewNodes").

We can improve this in the future to process everything but "Set" in another goroutine, and then keep a buffered queue for "Set" that completes asynchronously.
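The buffered asynchronous "Set" queue described above could look roughly like the following sketch. All names here (`writeOp`, `asyncWriter`) are hypothetical illustrations, not part of the IAVL codebase:

```go
package main

import "fmt"

// writeOp is a hypothetical pending Set operation.
type writeOp struct {
	key, value []byte
}

// asyncWriter drains a buffered queue of Set operations in the background,
// so callers can enqueue writes without blocking on the DB.
type asyncWriter struct {
	ops  chan writeOp
	done chan error
}

func newAsyncWriter(apply func(k, v []byte) error) *asyncWriter {
	w := &asyncWriter{ops: make(chan writeOp, 64), done: make(chan error, 1)}
	go func() {
		var err error
		for op := range w.ops {
			if err == nil {
				err = apply(op.key, op.value)
			}
		}
		w.done <- err // Report the first error (or nil) after draining.
	}()
	return w
}

// Set enqueues a write; it only blocks when the buffer is full.
func (w *asyncWriter) Set(k, v []byte) { w.ops <- writeOp{k, v} }

// Close waits for all queued writes to complete and returns the first error.
func (w *asyncWriter) Close() error {
	close(w.ops)
	return <-w.done
}

func main() {
	store := map[string]string{}
	w := newAsyncWriter(func(k, v []byte) error {
		store[string(k)] = string(v)
		return nil
	})
	w.Set([]byte("a"), []byte("1"))
	w.Set([]byte("b"), []byte("2"))
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Println(len(store))
}
```

Reading `store` after `Close` is safe here because the channel receive in `Close` happens after the goroutine's final write.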

If this is not useful with the IAVL v2 work, can I just put this in the IAVL v1 line?

This PR as-is feels like a pretty straightforward improvement that should give a 7% sync improvement on Osmosis for IAVL v1 today. I don't think there are any tests to add here; I don't see any edge case that's not covered by existing tests.

Benchmark for 2000 blocks on IAVL v1 on Osmosis mainnet, for context:
[benchmark profile image]

This PR will drop the latency of this from 42 seconds to 24 seconds.

With subsequent work, however, we should be able to:

  • (No async commit) Parallelize latency for this function time to the longest of:
    • num nodes to hash * time to sha256 / num cores
    • DB writing all new nodes
  • And async commit removes the DB writing part

Better parallelism would make it:

  • No async commit be max(4 seconds, 18 seconds / num cores)
  • Async commit (18 seconds / num cores)

@ValarDragon ValarDragon requested a review from a team as a code owner February 25, 2024 00:44

coderabbitai bot commented Feb 25, 2024

Walkthrough

The update introduces a significant enhancement to the MutableTree structure, specifically within its saveNewNodes method. By leveraging goroutines, the process of saving nodes now occurs concurrently, leading to improved performance. This approach is complemented by the use of channels for effective communication and error management. Additionally, the update ensures that nodes are properly detached and their keys are recursively assigned, optimizing the process for parallel execution and enhancing overall efficiency and error handling.

Changes

File(s) Summary of Changes
mutable_tree.go Introduced goroutines in saveNewNodes for parallel node saving, with channels for communication and error handling. Optimized for parallelization, including recursive key assignment and improved efficiency.


Comment on lines +1031 to +1078
```go
// TODO: Come back and figure out how to better parallelize this code.
func (tree *MutableTree) saveNewNodes(version int64) error {
	nonce := uint32(0)

	nodeChan := make(chan *Node, 64) // Buffered channel to avoid blocking.
	doneChan := make(chan error, 1)  // Channel to signal completion and return any errors.

	// Start a goroutine to save nodes.
	go func() {
		var err error
		for node := range nodeChan {
			if saveErr := tree.ndb.SaveNode(node); saveErr != nil {
				err = saveErr
				break
			}
			node.leftNode, node.rightNode = nil, nil // Detach children after saving.
		}
		doneChan <- err // Send any error encountered or nil if none.
	}()

	var recursiveAssignKey func(*Node) []byte
	recursiveAssignKey = func(node *Node) []byte {
		if node.nodeKey != nil {
			if node.nodeKey.nonce != 0 {
				return node.nodeKey.GetKey()
			}
			return node.hash
		}
		nonce++
		node.nodeKey = &NodeKey{
			version: version,
			nonce:   nonce, // Example nonce calculation; adjust as needed.
		}

		// Assign keys recursively to child nodes. (Two children are guaranteed.)
		if node.subtreeHeight > 0 {
			node.leftNodeKey = recursiveAssignKey(node.leftNode)
			node.rightNodeKey = recursiveAssignKey(node.rightNode)
		}

		node._hash(version) // Assuming this hashes the node.
		nodeChan <- node    // Send node to be saved.

		return node.nodeKey.GetKey()
	}

	recursiveAssignKey(tree.root)
	close(nodeChan)                    // Close the channel on completion.
	if err := <-doneChan; err != nil { // Wait for the saving goroutine to finish.
```

The parallelization of the saveNewNodes function introduces concurrency into the node saving process, which is a significant change aimed at improving performance. However, there are several areas that need attention:

  1. Error Handling: The current implementation stops processing new nodes as soon as an error occurs (lines 1042-1045). This approach might leave some nodes unsaved without any attempt to save them. Consider whether it's acceptable to stop immediately or if it would be better to attempt saving all nodes and collect all errors.

  2. Detaching Children: Detaching children after saving (line 1046) is done within the goroutine. Ensure that this operation does not introduce any race conditions or inconsistencies, especially since the parent node's references to these children are cleared.

  3. Nonce Calculation: The nonce calculation (line 1062) seems to be a placeholder (nonce++;). Ensure that the nonce calculation logic is correctly implemented and serves its intended purpose. If the nonce is meant to ensure uniqueness or order, verify that this incrementation strategy is sufficient and safe under concurrent conditions.

  4. Recursive Key Assignment: The recursive assignment of keys to child nodes (lines 1051-1074) is a critical operation. It's important to ensure that this recursion does not introduce any performance bottlenecks or stack overflow issues for trees with significant depth. Additionally, verify that the operation of assigning keys and subsequently hashing nodes (line 1071) is correct and does not lead to any unintended side effects.

  5. Channel and Goroutine Management: The use of a buffered channel (line 1035) and a single goroutine (lines 1038-1049) for saving nodes is a straightforward approach to parallelization. However, consider the buffer size of the channel and whether it's adequately sized for the expected workload. Also, ensure that the goroutine's error handling and channel closing logic (lines 1077-1078) are robust and won't lead to goroutine leaks or panic due to double closing channels.

  6. Concurrency and Data Races: Given the introduction of concurrency, it's crucial to ensure that there are no data races, especially concerning the access and modification of node properties. Use tools like the Go race detector to verify that the implementation is safe.

Overall, while the parallelization effort is commendable for its potential to improve performance, careful consideration must be given to error handling, concurrency issues, and the correctness of the implementation.

Consider reviewing the error handling strategy, ensuring the safety of detaching children, verifying the nonce calculation logic, assessing the performance and correctness of recursive key assignment, and ensuring robust channel and goroutine management to prevent leaks or panics.

@ValarDragon (Contributor Author)

Note that this code preserves functionality: the recursive loop just produces the list of new nodes, and we still call SaveNode on them one by one, serially. So we still have the serial SaveNode behavior.
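This serial-ordering argument can be illustrated in isolation: a single consumer goroutine receiving from one channel observes sends in exactly the order they were made, so the stand-in for SaveNode below runs strictly serially and in traversal order. A minimal sketch with hypothetical names, not IAVL code:

```go
package main

import "fmt"

// collectInOrder sends n items into a buffered channel from one producer and
// drains them with a single consumer goroutine; the consumer sees the items
// in exactly the order they were sent.
func collectInOrder(n int) []int {
	ch := make(chan int, 8)
	done := make(chan []int, 1)
	go func() {
		var seen []int
		for v := range ch {
			seen = append(seen, v) // "SaveNode" stand-in: strictly serial.
		}
		done <- seen
	}()
	for i := 0; i < n; i++ {
		ch <- i
	}
	close(ch)
	return <-done
}

func main() {
	fmt.Println(collectInOrder(5)) // items arrive in send order
}
```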

@ValarDragon ValarDragon changed the title Parallelize saveNewNodes' DB writes with figuring out "what to write" perf: Parallelize saveNewNodes' DB writes with figuring out "what to write" Feb 25, 2024
@kocubinski kocubinski self-assigned this Feb 26, 2024
@ValarDragon (Contributor Author)

We've tested that this gave a speedup on IAVL v1 on Osmosis!

```go
	nodeChan := make(chan *Node, 64) // Buffered channel to avoid blocking.
	doneChan := make(chan error, 1)  // Channel to signal completion and return any errors.
```
Member

There is some overhead to creating channels and a goroutine. Although it's likely not large, it might be better to couple channel and goroutine creation to the lifecycle of MutableTree instead of saveNewNodes.

Since I don't know offhand how much overhead that actually is, maybe it's fine as-is too.
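One way to read this suggestion: create the channel and the writer goroutine once, when the tree is constructed, and reuse them across saves, paying the setup cost once per tree rather than once per saveNewNodes call. A rough sketch under that assumption; `treeWriter` is a hypothetical type, not a proposed API:

```go
package main

import "sync"

// treeWriter is a long-lived background writer owned by a (hypothetical)
// tree: the channel and goroutine are created once and reused for every save.
type treeWriter struct {
	jobs chan func() error
	errs chan error
	once sync.Once
}

func newTreeWriter() *treeWriter {
	w := &treeWriter{jobs: make(chan func() error, 64), errs: make(chan error, 1)}
	go func() {
		var err error
		for job := range w.jobs {
			if err == nil {
				err = job()
			}
		}
		w.errs <- err
	}()
	return w
}

// enqueue submits one node-save closure to the background writer.
func (w *treeWriter) enqueue(job func() error) { w.jobs <- job }

// close shuts the writer down when the tree itself is closed; it is safe
// to call more than once.
func (w *treeWriter) close() error {
	var err error
	w.once.Do(func() {
		close(w.jobs)
		err = <-w.errs
	})
	return err
}

func main() {
	w := newTreeWriter()
	saved := 0
	w.enqueue(func() error { saved++; return nil })
	if err := w.close(); err != nil {
		panic(err)
	}
	_ = saved
}
```

Note that a reusable writer would still need a per-save barrier (e.g. a flush signal) so each saveNewNodes call can learn its own writes are durable; that part is omitted here.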

@kocubinski (Member)

I love this conceptually: write nodes to disk while the tree is hashed and node keys are generated, in parallel.

I guess one drawback is failing partway through tree traversal: those nodes are now possibly orphaned.

```go
		nonce++
		node.nodeKey = &NodeKey{
			version: version,
			nonce:   nonce, // Example nonce calculation; adjust as needed.
		}
```
Collaborator

Just realized the original codebase is wrong; not a bug, but a performance issue.
newNodes = append(newNodes, node) should come after this line: the main idea of the newNodes slice is to save the nodes in sorted order by node key (here, the nonce) to reduce compaction.

Just worried we can't assume this sorted order after the refactoring...
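If the sorted order can't be assumed after the refactor, one defensive option would be to sort (or assert order) by node key before the batch write. A hedged sketch with stub types, not IAVL code:

```go
package main

import "sort"

// nodeKeyStub mirrors the (version, nonce) ordering of a node key.
type nodeKeyStub struct {
	version int64
	nonce   uint32
}

type nodeStub struct {
	key nodeKeyStub
}

// sortByNodeKey orders nodes by (version, nonce) so DB writes happen in
// key order, which reduces compaction work in LSM-style backends.
func sortByNodeKey(nodes []*nodeStub) {
	sort.Slice(nodes, func(i, j int) bool {
		a, b := nodes[i].key, nodes[j].key
		if a.version != b.version {
			return a.version < b.version
		}
		return a.nonce < b.nonce
	})
}

func main() {
	nodes := []*nodeStub{
		{nodeKeyStub{version: 1, nonce: 2}},
		{nodeKeyStub{version: 1, nonce: 1}},
	}
	sortByNodeKey(nodes)
	if nodes[0].key.nonce != 1 {
		panic("not sorted")
	}
}
```

Sorting adds its own cost, of course; an explicit ordering assertion in tests may be the cheaper guard.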
