
Conversation


@mattac21 mattac21 commented Dec 2, 2025

Description

Adds handlers for ReapTxs and InsertTx.

This slightly differs from the original design in that it does not have a separate queue for processing txs received from InsertTx before validating them via a goroutine and adding them to the mempool. That queue would have served the exact same purpose as the queue pool in the legacypool, so I opted to use the legacypool queue as the holding area for not-yet-validated txs (which is what it already is). A tx is considered validated once it has been promoted from the queue pool to the pending pool. When this happens, the tx is added to a reapList (via the OnPromoteTx callback on the legacypool), which manages the set of txs to be returned by the next call to ReapTxs.

An edge case here is that it is possible for a tx to be promoted from the queue pool to the pending pool multiple times. Thus we have to be careful and include a reapGuard in the ExperimentalEVMMempool to protect against adding the same tx to the reapList after it has already been returned.

A tx is only removed from the reapList if it is returned via a call to ReapTxs (in this case we do not remove it from the reapGuard, since it may still be in the mempool and be promoted again), or if it is dropped from the mempool entirely (in this case we do remove it from the reapGuard, since the tx is no longer in the mempool and will not be promoted again).

I opted for the simplest possible implementation of the reapList. I expect there to be some performance issues with it at scale, and there are obvious optimizations we can make (a doubly linked list instead of a slice is probably a simple way to speed up the common Insert case, at the cost of the less common Drop case; switching to a linked list would also let us eliminate most of the locking between Reap and Insert; we could cache encoded tx bytes so we don't re-encode on Reap; etc.), but I wanted to keep it as minimal as possible before over-optimizing.
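Roughly, the slice-backed reapList plus reapGuard shape looks like this (only the EVM side is sketched; field and method names here are illustrative, not the exact code in the diff):

```go
package mempool

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// txWithHash pairs a tx with its cached hash so Reap/Drop don't recompute it.
type txWithHash struct {
	hash common.Hash
	tx   *types.Transaction
}

// ReapList is a slice-backed list of validated txs waiting to be reaped, plus
// a guard set that prevents re-adding a tx that has already been reaped.
type ReapList struct {
	mu    sync.Mutex
	txs   []*txWithHash
	guard map[common.Hash]struct{}
}

// Insert is called from the OnPromoteTx callback when a tx is promoted from
// the queue pool to the pending pool; the guard makes repeated promotions a no-op.
func (r *ReapList) Insert(tx *types.Transaction) {
	r.mu.Lock()
	defer r.mu.Unlock()
	h := tx.Hash()
	if _, seen := r.guard[h]; seen {
		return
	}
	if r.guard == nil {
		r.guard = make(map[common.Hash]struct{})
	}
	r.guard[h] = struct{}{}
	r.txs = append(r.txs, &txWithHash{hash: h, tx: tx})
}

// Reap returns and clears the list. Reaped hashes stay in the guard because
// the txs may still be in the mempool and could be promoted again.
func (r *ReapList) Reap() []*txWithHash {
	r.mu.Lock()
	defer r.mu.Unlock()
	out := r.txs
	r.txs = nil
	return out
}

// Drop is called when a tx leaves the mempool entirely; it is removed from
// both the list and the guard since it will never be promoted again.
func (r *ReapList) Drop(hash common.Hash) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.guard, hash)
	for i, t := range r.txs {
		if t.hash == hash {
			r.txs = append(r.txs[:i], r.txs[i+1:]...)
			break
		}
	}
}
```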

For cosmos txs we take a much simpler approach: we run CheckTx (the anteHandlers) on the check state in the hot path before inserting (so this will break if a CheckTx run is also what eventually reaches our insert call, because of the double check). We then add to the reap list after a successful insertion into the cosmos mempool, and remove from the reap list when Remove is called for cosmos txs. There is a slight issue here with the priority nonce mempool, since txs can be dropped without Remove being called if they are replaced by a higher-priority nonce tx, but we are choosing to ignore that for now.
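A sketch of that cosmos insert path is below. The anteHandler call and the reap list's DropCosmosTx appear in the diff; the insertCosmosTx helper, the pool field, and InsertCosmosTx are assumptions made so the sketch is self-contained.

```go
package mempool

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// Minimal stand-ins so the sketch compiles on its own; the real PR wires this
// into the ExperimentalEVMMempool's existing fields.
type cosmosPool interface {
	Insert(ctx sdk.Context, tx sdk.Tx) error
}

type cosmosReapList interface {
	InsertCosmosTx(tx sdk.Tx) // assumed counterpart to the PR's DropCosmosTx
}

type experimentalEVMMempoolSketch struct {
	anteHandler sdk.AnteHandler
	pool        cosmosPool
	reapList    cosmosReapList
}

// insertCosmosTx sketches the cosmos path described above: run the anteHandlers
// against the check state in the hot path, insert into the cosmos mempool, and
// only then make the tx visible to ReapTxs.
func (m *experimentalEVMMempoolSketch) insertCosmosTx(ctx sdk.Context, tx sdk.Tx) error {
	if _, err := m.anteHandler(ctx, tx, false); err != nil {
		return err // tx never reaches the mempool or the reap list
	}
	if err := m.pool.Insert(ctx, tx); err != nil {
		return err
	}
	m.reapList.InsertCosmosTx(tx)
	return nil
}
```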

Closes: #XXXX


Author Checklist

All items are required. Please add a note to the item if it is not applicable, and add links to any relevant follow-up issues.

I have...

  • tackled an existing issue or discussed with a team member
  • left instructions on how to review the changes
  • targeted the main branch

@mattac21 mattac21 marked this pull request as ready for review December 3, 2025 16:20
verifyFunc: func() {
mpool := s.network.App.GetMempool()
-s.Require().Equal(1, mpool.CountTx())
+s.Require().Equal(0, mpool.CountTx())
mattac21 (Contributor Author) commented:

I think this broke with the previous anteHandler PR but I didn't notice. This tx is now below the base fee, so running the anteHandlers drops it from the pool when we try to promote it from queued to pending; this is expected.


// execute the anteHandlers on our new context, and get a context that has
// the anteHandler updates written to it.
if _, err := m.anteHandler(ctx, tx, false); err != nil {
mattac21 (Contributor Author) commented:

System tests are already broken, but this is going to break them as well: if there is a CheckTx running in the RPC call path and we then use the same check tx state here, we are going to get nonce errors since the same tx is being checked twice.

if err != nil {
m.logger.Error("failed to remove Cosmos transaction", "error", err)
} else {
m.reapList.DropCosmosTx(tx)
mattac21 (Contributor Author) commented:

There is a slight issue here with the interaction between the priority nonce mempool and the reap list that I am choosing to ignore; I made a follow-up ticket to address it. The priority nonce mempool removes txs when it receives one with a higher priority, and there is no way to hook into that process and drop those txs from the reap list when they are removed. I am ignoring this for now since it shouldn't affect our testing, but it is something to note.

Comment on lines +314 to +326
// TODO: What context do we use for this checktx? Do we want to share a
// context with the one that is running inside of the legacypool and being
// used to verify eth txs? Currently this context is the Check context and
// is completely separate from the context used to verify EVM txs. If we
// share a context here, then we would have to block on the verifications
// running inside of the legacypool, which may take a while... but sharing a
// context between the two seems more correct. Note that the legacypool
// also uses two separate contexts right now (one for queued verification
// and one for pending). Which one would we share with? Should we combine
// the two? We can't directly share a context between the two, since we may
// verify the same tx twice at the same height, which would error for a
// valid tx. Maybe we need some cache to ensure this doesn't happen; then
// they could be shared and we skip double verifications.
mattac21 (Contributor Author) commented:

This is a larger note I am thinking about: how we manage the state in all of the new checks we are introducing. Does it ever make sense to share the state, should they all be separate, etc.? For this check I am choosing to use the actual check state, since these txs are essentially being 'streamed' in from an RPC, so we can only maintain the state of previous txs if we use and write to the check state. For the legacypool anteHandlers we are starting from a fresh block state every run and running all txs in the individual pools against that fresh state.

I think there are some optimizations we can make where the checks share a single state, so we don't have to potentially run each tx multiple times, but that would introduce some locking between all of the different checks, which may further block proposals. Just a thought.
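As a rough illustration of the cache idea from the TODO above (purely hypothetical; nothing like this exists in the diff yet), a per-height set of already-verified tx hashes could look like this:

```go
package mempool

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
)

// seenCache is a hypothetical per-height set of already-verified tx hashes,
// used to skip re-running verification for the same tx at the same height.
type seenCache struct {
	mu     sync.Mutex
	height int64
	seen   map[common.Hash]struct{}
}

// shouldVerify resets the set when the height changes and reports whether the
// tx still needs to be verified at this height.
func (c *seenCache) shouldVerify(height int64, hash common.Hash) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen == nil || height != c.height {
		c.height = height
		c.seen = make(map[common.Hash]struct{})
	}
	if _, ok := c.seen[hash]; ok {
		return false
	}
	c.seen[hash] = struct{}{}
	return true
}
```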


type ReapList struct {
// txs is a list of transactions and their respective hashes
// NOTE: this currently has unbound size
mattac21 (Contributor Author) commented:

This can be handled in a future PR if we want to change it, but we are just relying on the fact that we will be continuously deleting from this list on tx removals/drops, and will not insert into it if the mempool becomes full and we stop accepting txs. So the size is implicitly bounded by the size of the mempool(s).

type ReapList struct {
// txs is a list of transactions and their respective hashes
// NOTE: this currently has unbound size
txs []*txWithHash
mattac21 (Contributor Author) commented:

The reap list now handles both EVM and Cosmos txs.

@mattac21 mattac21 requested a review from swift1337 December 4, 2025 21:51
