

[MEL] - Pt.2 Integration of Message Extraction Across Arbnode Behind Feature Flag #4402

Open

rauljordan wants to merge 6 commits into master from raul/mel-pr2-core

Conversation

@rauljordan (Contributor) commented Feb 18, 2026

This PR integrates MessageExtraction across the Nitro node, gated behind the MEL feature flag configuration, allowing us to run a MEL-enabled node safely. Some of the changes move certain variables into other packages to support MEL, but most changes are straightforward.

Depends on #4398

github-actions bot commented Feb 18, 2026

❌ 11 Tests Failed:

Tests completed: 4235 | Failed: 11 | Passed: 4224 | Skipped: 0
View the top 3 failed tests by shortest run time
TestDataStreaming_PositiveScenario/Many_senders,_long_messages
Stack Traces | 0.150s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x9f
        github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic.func1()
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:230 +0x19b
        created by github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic in goroutine 229
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:223 +0x85
        
    protocol_test.go:230: [] too much time has elapsed since request was signed
INFO [02-24|13:56:04.075] rpc response                             method=datastreaming_start logId=46 err="too much time has elapsed since request was signed" result={} attempt=0 args="[\"0x699dadf3\", \"0x27\", \"0xd9\", \"0x20ab\", \"0xa\", \"0xdb83be028940fd0d947963ed3204442a1a71c07f8fdd3df30e9f1660479a11d93084cc3aa95fb57db20973542636aa8834de47dafc075b72ad01dd35afa2f00601\"]" errorData=null
    protocol_test.go:230: goroutine 242 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.6/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x161f990, 0xc00049d6c0}, {0x16061e0, 0xc001706b40}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x9f
        github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic.func1()
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:230 +0x19b
        created by github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic in goroutine 229
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:223 +0x85
        
    protocol_test.go:230: [] too much time has elapsed since request was signed
--- FAIL: TestDataStreaming_PositiveScenario/Many_senders,_long_messages (0.15s)
TestDataStreaming_PositiveScenario
Stack Traces | 0.180s run time
=== RUN   TestDataStreaming_PositiveScenario
--- FAIL: TestDataStreaming_PositiveScenario (0.18s)
TestRedisProduceComplex/one_producer,_all_consumers_are_active
Stack Traces | 1.280s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
DEBUG[02-24|13:56:07.738] consumer: xack                           cid=a633bb70-d60e-4d9b-b4c7-dd380e31727d messageId=1771941366626-1
DEBUG[02-24|13:56:07.738] consumer: xack                           cid=7be8b063-6eb4-4b54-b6ac-b0e60b8bfdd7 messageId=1771941366628-0
DEBUG[02-24|13:56:07.738] consumer: xack                           cid=3f1ac184-013e-467f-a731-7e0c8009fa29 messageId=1771941366625-4
DEBUG[02-24|13:56:07.738] consumer: xack                           cid=3f01d0c5-0bf9-4ff0-8919-ebc4e400eee1 messageId=1771941366626-0
DEBUG[02-24|13:56:07.738] Redis stream consuming                   consumer_id=21ffc215-562a-4463-a2d3-c036aaeca77f message_id=1771941366643-1
DEBUG[02-24|13:56:07.738] consumer: setting result                 cid=21ffc215-562a-4463-a2d3-c036aaeca77f msgIdInStream=1771941366643-1 resultKeyInRedis=result-key:stream:26e5bbc9-a2e6-4e45-8f25-07c37bd0fb9f.1771941366643-1
DEBUG[02-24|13:56:07.738] consumer: xdel                           cid=aa22519f-84e4-4ec5-a5b2-0ad365f9c786 messageId=1771941366625-5
DEBUG[02-24|13:56:07.738] consumer: xdel                           cid=7be8b063-6eb4-4b54-b6ac-b0e60b8bfdd7 messageId=1771941366628-0
DEBUG[02-24|13:56:07.738] consumer: xdel                           cid=a633bb70-d60e-4d9b-b4c7-dd380e31727d messageId=1771941366626-1
DEBUG[02-24|13:56:07.738] consumer: xdel                           cid=3f01d0c5-0bf9-4ff0-8919-ebc4e400eee1 messageId=1771941366626-0
DEBUG[02-24|13:56:07.738] consumer: xack                           cid=21ffc215-562a-4463-a2d3-c036aaeca77f messageId=1771941366643-1
DEBUG[02-24|13:56:07.738] consumer: xdel                           cid=21ffc215-562a-4463-a2d3-c036aaeca77f messageId=1771941366643-1
DEBUG[02-24|13:56:07.739] consumer: xdel                           cid=3f1ac184-013e-467f-a731-7e0c8009fa29 messageId=1771941366625-4
DEBUG[02-24|13:56:07.742] consumer: xack                           cid=9b69957a-c74e-4784-8a82-74cd7869c0f3 messageId=1771941366643-0
DEBUG[02-24|13:56:07.743] consumer: xdel                           cid=9b69957a-c74e-4784-8a82-74cd7869c0f3 messageId=1771941366643-0
DEBUG[02-24|13:56:07.806] checkResponses                           responded=82 errored=0 checked=100
DEBUG[02-24|13:56:07.811] redis producer: check responses starting
DEBUG[02-24|13:56:07.822] checkResponses                           responded=18 errored=0 checked=18
DEBUG[02-24|13:56:07.881] Error destroying a stream group          error="dial tcp 127.0.0.1:39579: connect: connection refused"
--- FAIL: TestRedisProduceComplex/one_producer,_all_consumers_are_active (1.28s)


@rauljordan rauljordan marked this pull request as ready for review February 20, 2026 21:12
@rauljordan rauljordan requested review from diegoximenes, eljobe and tsahee and removed request for tsahee February 20, 2026 21:12
@rauljordan rauljordan changed the title [MEL] - Integration of Message Extraction Across Arbnode Behind Feature Flag [MEL] - Pt.2 Integration of Message Extraction Across Arbnode Behind Feature Flag Feb 20, 2026
codecov bot commented Feb 20, 2026

Codecov Report

❌ Patch coverage is 33.73206% with 277 lines in your changes missing coverage. Please review.
✅ Project coverage is 32.73%. Comparing base (926fc28) to head (1e484c1).

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4402      +/-   ##
==========================================
- Coverage   32.76%   32.73%   -0.04%     
==========================================
  Files         493      493              
  Lines       58360    58569     +209     
==========================================
+ Hits        19123    19170      +47     
- Misses      35893    36038     +145     
- Partials     3344     3361      +17     

Base automatically changed from raul/mel-pr1-core to master February 24, 2026 12:38
if inboxBatchCount > 0 {
var err error
prevBatchMeta, err = b.inbox.GetBatchMetadata(inboxBatchCount - 1)
if b.msgExtractor != nil {
Member

I don't love this pattern of deciding whether to use MEL based on whether the MsgExtractor is nil. When I look at this code, it just smells like the perfect place for the batch_poster to define an interface it needs, and then for the code in NewBatchPoster to use either InboxTracker or melrunner.MessageExtractor based on the opts.

I'm thinking, in this file, the interface could be:

type l1BatchInformer interface {
    GetBatchMetadata(batchCount uint64)
    GetBatchCount()
}

Oh! I see one problem is that the mel implementation actually needs a context.Context in GetBatchCount. Maybe we could add an unused context.Context to InboxTracker's implementation?

I'm not 100% convinced of this. But, it seems like it could be a lot nicer if we can find a common interface to make the batch poster code only care about the actual type during construction.
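The reviewer's idea could be sketched as follows. This is a minimal, self-contained illustration, not the actual nitro code: the stub types, the `string` metadata, and the return types on the interface methods are all assumptions standing in for the real `InboxTracker` and `melrunner.MessageExtractor`.

```go
package main

import "fmt"

// l1BatchInformer is the narrow interface the batch poster would depend on.
// The return types here are assumptions for illustration.
type l1BatchInformer interface {
	GetBatchCount() (uint64, error)
	GetBatchMetadata(batchCount uint64) (string, error)
}

// inboxTrackerStub stands in for arbnode's InboxTracker.
type inboxTrackerStub struct{}

func (t *inboxTrackerStub) GetBatchCount() (uint64, error) { return 7, nil }
func (t *inboxTrackerStub) GetBatchMetadata(n uint64) (string, error) {
	return fmt.Sprintf("inbox-tracker batch %d", n), nil
}

// melExtractorStub stands in for melrunner.MessageExtractor.
type melExtractorStub struct{}

func (m *melExtractorStub) GetBatchCount() (uint64, error) { return 7, nil }
func (m *melExtractorStub) GetBatchMetadata(n uint64) (string, error) {
	return fmt.Sprintf("mel batch %d", n), nil
}

// newBatchInformer picks the implementation once, at construction time,
// so the rest of the batch poster never has to check for nil.
func newBatchInformer(melEnabled bool) l1BatchInformer {
	if melEnabled {
		return &melExtractorStub{}
	}
	return &inboxTrackerStub{}
}

func main() {
	informer := newBatchInformer(true)
	count, _ := informer.GetBatchCount()
	meta, _ := informer.GetBatchMetadata(count - 1)
	fmt.Println(meta) // mel batch 6
}
```

With this shape, the `if b.msgExtractor != nil` branch disappears from the hot path: only the constructor knows which concrete type is behind the interface.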

Contributor Author

Agreed! Actually the MEL implementation has a GetContext() method and it has its own context, so we can easily remove context from the GetBatchCount method. Will check

StopWaiter: stopwaiter.StopWaiter{},
config: config,
inboxReader: inboxReader,
msgExtractor: msgExtractor,
Member

Similarly, the ConsensusExecutionSyncer could just depend on a:

type msgCounter interface {
    GetSafeMsgCount(ctx context.Context)
    GetFinalizedMsgCount(ctx context.Context)
}

Member

In this file, I can see that it's much harder to find a common interface for the melrunner.MessageExtractor and the InboxTracker. So, I'm not going to suggest that we actually try to do that. What I would like, though, is to pull out some of the mel-specific handling and the inboxTracker-specific handling into helper functions. Mainly, this helps with readability because, for example, the sequenceWithoutLockout function is getting hard to follow because of the multiple branch points.

coordinator *SeqCoordinator,
) (*DelayedSequencer, error) {
if inboxReader == nil && msgExtractor == nil {
return nil, nil
Member

Shouldn't this be some sort of programming error (as opposed to just returning nil)?

log.Error("Error getting head state from mel", "err", err)
res["batchMetadataError"] = err.Error()
} else {
batchSeen, err := s.sequencerInbox.GetBatchCount(s.GetContext(), new(big.Int).SetUint64(headMelState.ParentChainBlockNumber+1))
Member

The +1 at the end of this call just feels so much like a "magic number."
I always wonder, "Why?"

log.Error("Error getting head state from mel", "err", err)
res["batchMetadataError"] = err.Error()
} else {
batchSeen, err := s.sequencerInbox.GetBatchCount(s.GetContext(), new(big.Int).SetUint64(headMelState.ParentChainBlockNumber+1))
Member

I also feel like there is an opportunity for some good code-sharing given that batchSeen is also calculated below on line 293. I think it's the same logic. Can we "refactor: extract function" on this?

s.delayedBridge = delayedBridge
}

func (s *TransactionStreamer) SetMsgExtractor(msgExtractor *melrunner.MessageExtractor) {
Member

Setter injection makes me nervous.

  1. Why can't we have the TransactionStreamer be correct by construction?
  2. It "seems" like there is an assumption later in the code that if inboxTracker is not nil, then it will be used. So, if a TransactionStreamer is constructed with an inboxTracker, and then, later, SetMsgExtractor is called, the non-nil inboxTracker would mask the msgExtractor on line 1188. (Maybe this setter should also set the inboxTracker to nil, or error out if it's not nil?)
  3. I really hate panics. Isn't there some other way to handle this?
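A constructor-injection sketch that addresses all three points (correct by construction, no masking, no panic). The types are simplified stand-ins for `InboxTracker` and `melrunner.MessageExtractor`; the "exactly one source" rule is an assumption drawn from the reviewer's second point.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the real message-source types.
type inboxTracker struct{}
type messageExtractor struct{}

// transactionStreamer is correct by construction: its message source is
// fixed when it is built, so no later SetMsgExtractor call is needed.
type transactionStreamer struct {
	tracker   *inboxTracker
	extractor *messageExtractor
}

// newTransactionStreamer returns an error, rather than panicking, when
// the configuration is ambiguous: exactly one source must be provided.
func newTransactionStreamer(tracker *inboxTracker, extractor *messageExtractor) (*transactionStreamer, error) {
	if (tracker == nil) == (extractor == nil) {
		return nil, errors.New("exactly one of inbox tracker or message extractor must be set")
	}
	return &transactionStreamer{tracker: tracker, extractor: extractor}, nil
}

// source reports which message source is active; there is no nil-masking
// because the constructor already enforced the invariant.
func (s *transactionStreamer) source() string {
	if s.extractor != nil {
		return "mel"
	}
	return "inbox-tracker"
}

func main() {
	s, _ := newTransactionStreamer(nil, &messageExtractor{})
	fmt.Println(s.source()) // mel
}
```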

@eljobe eljobe assigned rauljordan and unassigned eljobe Feb 24, 2026
@rauljordan (Contributor Author)
Amazing feedback and helpful comments, @eljobe! Thank you. I'll take care of this soon.

