fix(FLOW-10): Enforce fair FIFO queue ordering on-chain #46
Conversation
Review: fix(FLOW-10): Enforce fair FIFO queue ordering on-chain

The mapping-based queue (head/tail pointers) is the right data structure for O(1) enqueue/dequeue, and the FIFO enforcement direction is correct. Four concrete issues worth addressing before merge:

1. Undocumented ordering constraint on both input arrays (High): this constraint is nowhere in the NatSpec, and the Cadence `preprocessRequests` must preserve FIFO order in each output array.
```solidity
    }
    revert RequestProcessOutOfOrder(requestId, reqId);
}
```
Undocumented ordering constraint + misleading error payload
The validation loop enforces that both successfulRequestIds and rejectedRequestIds must be in FIFO queue order — i.e., they must individually form sorted subsequences of the queue. If either array lists requests out of queue order, the function reverts here.
This constraint is not documented anywhere in the NatSpec. The Cadence preprocessRequests must preserve FIFO order in each output array; if it ever groups or sorts by any other criterion, all batch transactions will spuriously revert.
The error payload is also confusing: RequestProcessOutOfOrder(requestId, reqId) where reqId is the last non-matching ID tried (either from successfulRequestIds[j] or rejectedRequestIds[k] — whichever was tested last), not the "processedId" the error name implies. A caller decoding the error will see a value that has no obvious relationship to the request being processed.
Suggested fixes:
- Add `@dev` NatSpec: both arrays must be in ascending FIFO queue order (i.e., must form ordered subsequences of the current queue starting at head).
- For the error, capture which input array and index caused the mismatch, or simplify to just emit the expected queue-head value.
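The invariant is easier to see outside Solidity. Below is a minimal Python model of what the validation loop appears to enforce (all names are hypothetical; this is a sketch of the constraint, not the contract code):

```python
class OutOfOrderError(Exception):
    """Stands in for RequestProcessOutOfOrder(queueSlotValue, lastTriedId)."""

def validate_batch_order(queue, successful, rejected):
    """Check that `successful` and `rejected` are ordered subsequences of
    `queue` whose union covers a prefix of the queue starting at head."""
    j = k = 0  # cursors into the two input arrays
    for slot in queue:
        if j == len(successful) and k == len(rejected):
            break  # every input ID matched a slot in queue order
        if j < len(successful) and successful[j] == slot:
            j += 1
        elif k < len(rejected) and rejected[k] == slot:
            k += 1
        else:
            # `last_tried` is whichever ID was compared last -- the
            # confusing error payload this review calls out
            last_tried = rejected[k] if k < len(rejected) else successful[j]
            raise OutOfOrderError(slot, last_tried)
    if j < len(successful) or k < len(rejected):
        raise OutOfOrderError(None, None)  # an input ID never appeared
```

With queue `[1, 2, 3, 4, 5]`, the split `successful=[2, 4]` / `rejected=[1, 3, 5]` passes, while reversing `successful` to `[4, 2]` raises on the second slot.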
```solidity
_removePendingRequest(requestId);
_removeUserPendingRequest(requestId);
uint256 reqId = _dequeueRequest();
if (reqId != requestId) revert RequestProcessOutOfOrder(reqId, requestId);
```
RequestProcessOutOfOrder parameter order is reversed vs. line 967
Here the error is (reqId, requestId) — (dequeued_head, caller_requested) — which reads "expected reqId, got requestId". That's semantically clear.
At line 967, the same error is emitted as (requestId, reqId) — (queue_slot_value, last_tried_input). The second argument there is not the request being "processed"; it's whichever ID was last compared before failing.
The two call sites disagree on which argument is "expected" and which is "processed", making off-chain monitoring and incident investigation harder. Document the parameter semantics in the error definition, or pick a single consistent convention.
```solidity
// Remove from pending queues (both global and user-specific)
_removePendingRequest(requestId);
_removeUserPendingRequest(requestId);
_dropQueuedRequest(requestId);
```
Silent skip + missing _dropQueuedRequest call creates queue/state divergence
_dropRequestsInternal only calls _dropQueuedRequest when request.status == RequestStatus.PENDING. If that guard is false, the request remains in _requestsQueue (the pointer-based queue is not updated) while the request's state is unchanged. The caller (startProcessingBatch) has already verified these IDs occupy specific queue slots via the validation loop — so a silent skip would leave a stale entry in the queue, stalling all subsequent dequeues.
In practice, any request in _requestsQueue should always be PENDING (only PENDING requests are enqueued; they exit via dequeue/drop). But given that the validation loop immediately before already confirmed the ID is in the queue, the guard mismatch should be an explicit revert rather than a silent skip, so bugs surface immediately rather than corrupting queue state silently.
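A Python sketch of the suggested fix (hypothetical names; `drop_rejected`, `statuses`, and `QueueStateMismatch` are illustrative, not the contract's API) — the status guard fails loudly instead of skipping:

```python
from enum import Enum, auto

class RequestStatus(Enum):
    PENDING = auto()
    PROCESSING = auto()
    COMPLETED = auto()

class QueueStateMismatch(Exception):
    """Raised instead of silently skipping a non-PENDING request."""

def drop_rejected(statuses, queue, request_id):
    """Model of the guard in _dropRequestsInternal with the silent skip
    replaced by an explicit failure, so a queue/state divergence surfaces
    immediately rather than stalling later dequeues."""
    if statuses[request_id] != RequestStatus.PENDING:
        raise QueueStateMismatch(request_id, statuses[request_id])
    queue.remove(request_id)  # stands in for the O(n) _dropQueuedRequest
```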
```solidity
 *
 * @param requestId The request ID to remove from the pending requests queue.
 */
function _dropQueuedRequest(uint256 requestId) internal {
```
Rejection/cancellation path is still O(n) — batch rejection is O(k×n)
The PR description claims "both enqueue & dequeue operations are now O(1)", which is accurate for the happy path. However _dropQueuedRequest — called once per rejected request in _dropRequestsInternal and also on cancelRequest — is O(queue length) due to the element-shift loop. Processing a batch with k rejections against a queue of length n is therefore O(k×n).
For workloads where a large fraction of requests are rejected (e.g., spam bursts), this can exceed the original O(n) shift cost of the array-based approach. The docstring already acknowledges this ("O(n) operation"), but the PR summary should be updated to avoid overstating the improvement.
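The asymmetry is easy to demonstrate with a Python model of the mapping-based structure (a sketch under the PR's description, not the contract itself):

```python
class PointerQueue:
    """Mapping-based queue with head/tail pointers: O(1) enqueue and
    dequeue, but drop-by-id is O(n) because of the element shift."""

    def __init__(self):
        self.slots = {}  # index -> request id (the mapping)
        self.head = 0
        self.tail = 0

    def enqueue(self, req_id):  # O(1)
        self.slots[self.tail] = req_id
        self.tail += 1

    def dequeue(self):  # O(1)
        req_id = self.slots.pop(self.head)
        self.head += 1
        return req_id

    def drop(self, req_id):  # O(n): shift every trailing element left
        i = self.head
        while self.slots[i] != req_id:
            i += 1
        while i + 1 < self.tail:
            self.slots[i] = self.slots[i + 1]
            i += 1
        self.slots.pop(self.tail - 1, None)
        self.tail -= 1
```

Dropping k rejected requests therefore runs the shift loop k times over a queue of length n, which is the O(k×n) cost described above.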
```solidity
uint256 req4 = c.createYieldVault{value: 4 ether}(NATIVE_FLOW, 4 ether, VAULT_ID, STRATEGY_ID);
uint256 req5 = c.createYieldVault{value: 5 ether}(NATIVE_FLOW, 5 ether, VAULT_ID, STRATEGY_ID);
vm.stopPrank();
```
Missing test: out-of-order arrays should revert
test_StartProcessingBatch_MultipleSuccessfulAndRejectRequests only tests the happy path where both arrays are in FIFO order. There is no test verifying that passing either array out of FIFO order reverts. For example:
```solidity
// Queue: [req1, req2, req3, req4, req5]
// successfulRequestIds in reverse FIFO order → should revert
uint256[] memory successIds = new uint256[](2);
successIds[0] = req4; // out of order
successIds[1] = req2;
uint256[] memory rejIds = new uint256[](3);
rejIds[0] = req1; rejIds[1] = req3; rejIds[2] = req5;
vm.expectRevert(...RequestProcessOutOfOrder...);
c.startProcessingBatch(successIds, rejIds);
```

Given the ordering constraint is the core correctness invariant of this PR, it warrants an explicit negative test.
Closes #25
FLOW-10: FIFO Queue Is Not Enforced on-Chain yet Costs O(n) to Maintain
Introduce an optimized queue data structure (mapping-based with head & tail pointers) to avoid the high gas costs of maintaining the FIFO order on-chain. Both `enqueue` & `dequeue` operations are now O(1). The previous queue data structure required an array and a mapping, and had O(n) performance when removing a request from the pending queue.

In addition, `_startProcessingInternal` now has a check which verifies that the request being processed is the head of the `requestsQueue` FIFO queue. If that is not the case, the function call reverts with `RequestProcessOutOfOrder`.

The Cadence `Scheduler` that schedules & processes the requests fetches the request IDs with a call that returns a given count of pending EVM requests from the queue, in FIFO order. These are fed to `preprocessRequests()`, and after the validation checks, they are classified as successful/rejected. They are then passed to `startProcessingBatch()`, which drops the rejected request IDs and calls `_startProcessingInternal()` for each individual successful request ID.