
Increase the rebroadcast queue size #235

Open

bartolkaruza opened this issue Mar 14, 2024 · 2 comments

Comments

@bartolkaruza
Problem

const MAX_TRANSACTION_RETRY_POOL_SIZE: usize = 10_000; // This seems like a lot but maybe it needs to be bigger one day

We're seeing a lot of dropped transactions on RPC nodes. The approach suggested in the documentation works as a workaround, but it only amplifies the problem:

// Assumes `connection`, `rawTransaction`, `lastValidBlockHeight`, and a
// `sleep(ms)` helper are already in scope.
let blockheight = await connection.getBlockHeight();
while (blockheight < lastValidBlockHeight) {
  await connection.sendRawTransaction(rawTransaction, {
    skipPreflight: true,
  });
  await sleep(500);
  blockheight = await connection.getBlockHeight();
}

So if my reasoning is correct, we get dropped transactions unless we repeatedly spam the RPC nodes with the same transaction until it makes it into the 10k rebroadcast queue at some point. This workaround is itself a likely cause of the limit not being sufficient: every client doing it multiplies the number of queued transactions.

Proposed Solution

Something that might alleviate this pressure: check whether queued transactions are identical and count only unique transactions against the queue limit.

const MAX_TRANSACTION_RETRY_POOL_SIZE: usize = 10_000; // This seems like a lot but maybe it needs to be bigger one day
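As a rough illustration of the idea, the retry pool could key entries by transaction signature, so repeated submissions of the same transaction occupy a single slot. This is a minimal sketch, not the actual validator code; the `RetryPool` type and its methods are hypothetical:

```rust
use std::collections::HashSet;

const MAX_TRANSACTION_RETRY_POOL_SIZE: usize = 10_000;

/// Hypothetical retry pool that counts only unique transactions
/// (identified by their 64-byte signature) against the size limit.
struct RetryPool {
    seen: HashSet<[u8; 64]>,
}

impl RetryPool {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns true if the transaction was newly queued.
    /// Duplicates return false but consume no additional slot.
    fn try_insert(&mut self, signature: [u8; 64]) -> bool {
        if self.seen.contains(&signature) {
            return false; // already queued: rebroadcast spam is absorbed
        }
        if self.seen.len() >= MAX_TRANSACTION_RETRY_POOL_SIZE {
            return false; // pool genuinely full of unique transactions
        }
        self.seen.insert(signature)
    }
}
```

With this shape, the client-side rebroadcast loop above would no longer inflate the pool, since every resend of the same signature hits the dedup check.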

@t-nelson

It would probably be more appropriate to make this configurable from the command line than to change the default behavior.
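For example, the hard-coded constant could become a CLI argument with the current value as the default. A minimal standard-library-only sketch; the `--rpc-retry-pool-size` flag name is hypothetical and not an actual validator option:

```rust
const DEFAULT_RETRY_POOL_SIZE: usize = 10_000; // current hard-coded default

/// Hypothetical: scan the argument list for `--rpc-retry-pool-size <N>`,
/// falling back to the existing default when the flag is absent or invalid.
fn retry_pool_size(args: &[String]) -> usize {
    args.windows(2)
        .find(|w| w[0] == "--rpc-retry-pool-size")
        .and_then(|w| w[1].parse().ok())
        .unwrap_or(DEFAULT_RETRY_POOL_SIZE)
}
```

Operators who see sustained retry-pool pressure could then raise the limit without patching and rebuilding the validator.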

@bartolkaruza
Author

bartolkaruza commented Mar 15, 2024

I agree, but all the RPC node providers we tried seem to have been running into frequent dropped-transaction issues for the last few months, so I imagine they are all running this same default. The default may be causing a lot of hard-to-diagnose issues for their users. If enough people adopt the workaround of spamming send-transaction until it gets accepted, this low default makes the problem worse.

We are not running an RPC node; we are a startup that uses RPC nodes from well-known providers on paid plans. So we are experiencing this issue downstream, and I imagine many protocols and startups like us are experiencing the same, with the knee-jerk reaction being to blame it on Solana outages/unreliability.
