feat(mempool): InsertTx and ReapTxs handlers #872
base: feat/krakatoa
Conversation
```diff
 verifyFunc: func() {
 	mpool := s.network.App.GetMempool()
-	s.Require().Equal(1, mpool.CountTx())
+	s.Require().Equal(0, mpool.CountTx())
```
I think this broke with the previous anteHandler PR but I didn't notice. This tx is now below the base fee, so running the anteHandlers drops it from the pool when we try to promote it from queued to pending; this is expected.
```go
// execute the anteHandlers on our new context, and get a context that has
// the anteHandler updates written to it.
if _, err := m.anteHandler(ctx, tx, false); err != nil {
```
System tests are already broken, but this is going to break them further: if there is a CheckTx running in the RPC call path and we then use the same CheckTx state here, we are going to get nonce errors, since the same tx is being checked twice.
```go
if err != nil {
	m.logger.Error("failed to remove Cosmos transaction", "error", err)
} else {
	m.reapList.DropCosmosTx(tx)
```
There is a slight issue here with the interaction between the priority nonce mempool and the reap list that I am choosing to ignore; I made a follow-up ticket to address it. The priority nonce mempool removes txs when it receives one with a higher priority, and there is no way to hook into that process and drop those txs from the reap list when they are removed. I am ignoring this for now since it shouldn't affect our testing, but it is something to note.
```go
// TODO: What context do we use for this checktx? Do we want to share a
// context with the one that is running inside of the legacypool and being
// used to verify eth txs? Currently this context is the Check context and
// is completely separate from the context used to verify EVM txs. If we
// share a context here, then we would need to block this on the verifications
// running inside of the legacypool, which may take a while... but sharing a
// context between the two seems more correct. Note that the legacypool
// also uses two separate contexts right now (one for queued verification
// and one for pending; which one would we share with? Should we combine
// the two? We can't directly share a context between the two, since we may
// verify the same tx twice at the same height, which would error for a valid
// tx; maybe we need some cache to ensure this doesn't happen, then they
// could be shared and we skip double verifications.)
```
This is a larger note I am thinking about: how we manage the state in all of the new checks we are introducing. Does it ever make sense to share the state, should they all be separate, etc.? For this check I am choosing to use the actual check state, since these txs are essentially being 'streamed' in from an RPC, so we can only maintain the state of previous txs if we use and write to the check state. For the legacypool anteHandlers we start from a fresh block state every run and run all txs in the individual pools against that fresh state.

I think there are some optimizations we can make where they share a single state, so we don't potentially have to run each tx multiple times, but then we introduce some locking between all of the different checks, which may further block proposals. Just a thought.
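For concreteness, here is a minimal sketch of the Cosmos-tx insert path as described in this thread, assuming the anteHandlers run against the check-state context before the mempool insert. The type and method names (`cosmosInserter`, `reapRecorder`, `insertCosmosTx`) are illustrative placeholders, not the actual code in this branch.

```go
package mempool

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkmempool "github.com/cosmos/cosmos-sdk/types/mempool"
)

// reapRecorder is a stand-in for whatever tracks txs for the next ReapTxs call.
type reapRecorder interface {
	AddCosmosTx(tx sdk.Tx)
}

// cosmosInserter is a hypothetical holder for the pieces used in the insert path.
type cosmosInserter struct {
	anteHandler sdk.AnteHandler    // the same checks CheckTx would run
	cosmosPool  sdkmempool.Mempool // e.g. the priority nonce mempool
	reapList    reapRecorder       // txs eligible for the next ReapTxs call
}

// insertCosmosTx validates tx against the check-state context so later txs
// from the same account see updated nonces/balances, then inserts it into the
// Cosmos mempool and records it for reaping.
func (c *cosmosInserter) insertCosmosTx(checkCtx sdk.Context, tx sdk.Tx) error {
	if _, err := c.anteHandler(checkCtx, tx, false); err != nil {
		return err
	}
	// sdk.Context satisfies context.Context, so it can be passed directly.
	if err := c.cosmosPool.Insert(checkCtx, tx); err != nil {
		return err
	}
	c.reapList.AddCosmosTx(tx)
	return nil
}
```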
```go
type ReapList struct {
	// txs is a list of transactions and their respective hashes
	// NOTE: this currently has unbound size
```
This can be handled in a future PR if we want to change it, but we are relying on the fact that we will be continuously deleting from this list on tx removals/drops, and will not insert into it once the mempool becomes full and we stop accepting txs. So its size is implicitly bounded by the size of the mempool(s).
```go
type ReapList struct {
	// txs is a list of transactions and their respective hashes
	// NOTE: this currently has unbound size
	txs []*txWithHash
```
The reap list now handles both EVM and Cosmos txs.
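For reference, a minimal sketch of a slice-backed reap list matching the shape shown above; the `txWithHash` fields, locking, and method names here are illustrative and may not match the PR exactly.

```go
package mempool

import (
	"bytes"
	"sync"
)

// txWithHash pairs an encoded tx (EVM or Cosmos) with its hash so Drop can
// find it again later (field names are assumptions for illustration).
type txWithHash struct {
	hash []byte
	txBz []byte
}

type ReapList struct {
	mtx sync.Mutex
	// txs is a list of transactions and their respective hashes
	// NOTE: this currently has unbound size
	txs []*txWithHash
}

// Insert appends a promoted tx to the back of the list.
func (r *ReapList) Insert(hash, txBz []byte) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.txs = append(r.txs, &txWithHash{hash: hash, txBz: txBz})
}

// Drop removes a tx (by hash) that was evicted from the mempool before it
// was reaped. O(n) over the slice, which is the less common path.
func (r *ReapList) Drop(hash []byte) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	for i, t := range r.txs {
		if bytes.Equal(t.hash, hash) {
			r.txs = append(r.txs[:i], r.txs[i+1:]...)
			return
		}
	}
}

// Reap returns every queued tx and resets the list.
func (r *ReapList) Reap() [][]byte {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	out := make([][]byte, 0, len(r.txs))
	for _, t := range r.txs {
		out = append(out, t.txBz)
	}
	r.txs = nil
	return out
}
```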
Description
Adds handlers for `ReapTxs` and `InsertTx`.

This slightly differs from the original design in that it does not have a separate queue to process txs received from `InsertTx` before validating them via a goroutine and adding them to the mempool. That queue was serving the exact same purpose as the `queue` mempool in the `legacypool`, so I opted to use that as the holding area for invalid txs (which is what it already is). A tx is considered validated once it has been promoted from the `queue` pool to the `pending` pool. When this happens, the tx is added to a `reapList` (via the `OnPromoteTx` callback on the `legacypool`) that manages the set of txs to be returned from the next call of `ReapTxs`.

An edge case here is that it is possible for a tx to be promoted multiple times from the `queue` pool to the `pending` pool. Thus we have to be careful and include a `reapGuard` in the `ExperimentalEVMMempool` to protect against the same tx being added to the `reapList` after it has already been returned.
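A minimal sketch of the promotion hook and `reapGuard` interaction described above, assuming a callback that receives the promoted tx; the type and field names are illustrative only.

```go
package mempool

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
	ethtypes "github.com/ethereum/go-ethereum/core/types"
)

// guardedReaper pairs the reap list with the guard that prevents double-adds.
type guardedReaper struct {
	mtx       sync.Mutex
	reapGuard map[common.Hash]struct{} // txs already added to the reap list once
	reapList  []*ethtypes.Transaction  // txs waiting for the next ReapTxs call
}

func newGuardedReaper() *guardedReaper {
	return &guardedReaper{reapGuard: make(map[common.Hash]struct{})}
}

// onPromoteTx would be wired up as the legacypool promotion callback. A tx can
// be promoted from queue to pending more than once, so the guard keeps it from
// re-entering the reap list after it has already been handed to ReapTxs.
func (g *guardedReaper) onPromoteTx(tx *ethtypes.Transaction) {
	g.mtx.Lock()
	defer g.mtx.Unlock()
	hash := tx.Hash()
	if _, seen := g.reapGuard[hash]; seen {
		return
	}
	g.reapGuard[hash] = struct{}{}
	g.reapList = append(g.reapList, tx)
}

// onDropTx would run when a tx leaves the mempool entirely: it can never be
// promoted again, so its guard entry is released as well.
func (g *guardedReaper) onDropTx(tx *ethtypes.Transaction) {
	g.mtx.Lock()
	defer g.mtx.Unlock()
	delete(g.reapGuard, tx.Hash())
	// removal from the reap list itself is omitted for brevity
}
```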
A tx is only removed from the `reapList` if it is returned via a call to `ReapTxs` (in this case we do not remove it from the `reapGuard`, since it may still be in the mempool and be promoted again), or if it is dropped from the mempool entirely (in this case we do remove it from the `reapGuard`, since the tx is no longer in the mempool and will not be promoted again).
I opted for the simplest implementation possible for the `reapList`. I expect there to be some performance issues with it at scale, and there are very obvious optimizations we can make (a doubly linked list instead of a slice is probably a very simple way to speed up the common `Insert` case but slow down the less common `Drop` case; if we change to a linked list we can eliminate most of the locking between `Reap` and `Insert`; caching encoded tx bytes so we don't do the encoding on `Reap`; etc.), but I wanted to keep it as minimal as possible before over-optimizing.
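As a point of comparison, here is a hypothetical sketch of the linked-list variant mentioned above (not part of this PR), built on `container/list`: `Insert` becomes a constant-time `PushBack` with no slice reallocation, while `Drop` has to walk the list, matching the trade-off described here. The `reapEntry` type is illustrative only.

```go
package mempool

import (
	"bytes"
	"container/list"
	"sync"
)

// reapEntry pairs a tx hash with its encoded bytes (illustrative only).
type reapEntry struct {
	hash []byte
	txBz []byte
}

type linkedReapList struct {
	mtx sync.Mutex
	txs *list.List // *reapEntry values in promotion order
}

func newLinkedReapList() *linkedReapList {
	return &linkedReapList{txs: list.New()}
}

// Insert appends the promoted tx in O(1) with no reallocation.
func (l *linkedReapList) Insert(e *reapEntry) {
	l.mtx.Lock()
	defer l.mtx.Unlock()
	l.txs.PushBack(e)
}

// Drop walks the list looking for the hash; slower than the slice scan but
// still the rarer operation.
func (l *linkedReapList) Drop(hash []byte) {
	l.mtx.Lock()
	defer l.mtx.Unlock()
	for el := l.txs.Front(); el != nil; el = el.Next() {
		if bytes.Equal(el.Value.(*reapEntry).hash, hash) {
			l.txs.Remove(el)
			return
		}
	}
}
```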
For Cosmos txs we take a much simpler approach: we call CheckTx (the anteHandlers) in the hot path against the check state before inserting (so this will break if we also run CheckTx on the way to our insert call, because of the double check). We then add the tx to the reap list after successful insertion into the Cosmos mempool, and remove it from the reap list when `Remove` is called for Cosmos txs. There is a slight issue here with the priority nonce mempool, since txs can be dropped without `Remove` being called if they are replaced by a higher priority nonce tx, but we are choosing to ignore that for now.

Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and please add links to any relevant follow-up issues.