[Access] Add scheduled transactions extended index and endpoints #8468
peterargue wants to merge 21 commits into master
Conversation
📝 Walkthrough

Introduces comprehensive scheduled transactions support across storage, indexing, extended backend, and REST API layers. Includes Pebble-backed indices with pagination, event-based indexing with state transitions, backend query APIs with expansion options, and full integration and unit tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant REST as REST Handler
    participant ExtAPI as extended.API
    participant SchedBackend as ScheduledTransactionsBackend
    participant Index as ScheduledTransactionsIndexReader/Bootstrapper
    participant TxProvider as TransactionsProvider
    participant ScriptExec as ScriptExecutor
    participant State as protocol.State

    Client->>REST: GET /experimental/v1/scheduled (filter/expand)
    REST->>ExtAPI: backend.GetScheduledTransactions(limit, cursor, filter, expand, encoding)
    ExtAPI->>SchedBackend: GetScheduledTransactions(...)
    SchedBackend->>Index: Query index (ByID/All/ByAddress with filter & cursor)
    Index-->>SchedBackend: ScheduledTransactionsPage
    alt expand includes transaction/result
        SchedBackend->>TxProvider: GetTransaction(&txID)
        TxProvider-->>SchedBackend: Transaction body/result (if available)
    end
    alt expand includes handler_contract
        SchedBackend->>State: ReadAccountContracts(at latest sealed height)
        State-->>SchedBackend: Contract source (if present)
        alt not found -> JIT path
            SchedBackend->>ScriptExec: ExecuteAtBlockHeight(script, args, height)
            ScriptExec-->>SchedBackend: on-chain transaction data
        end
    end
    SchedBackend-->>ExtAPI: ScheduledTransactionsPage (expanded)
    ExtAPI-->>REST: result mapped to REST models (with cursor/links)
    REST-->>Client: HTTP 200 JSON
```
Notes: rectangles represent components; the sequence shows optional expansion paths (transaction/result, handler contract), including the fallback JIT path via ScriptExecutor.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related issues
Possibly related PRs
Suggested reviewers
Poem
Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.
Scanned Files
Codecov Report

❌ Patch coverage is
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
engine/access/rpc/backend/script_executor.go (1)
81-89: ⚠️ Potential issue | 🔴 Critical

Avoid setting `initialized` before dependencies are assigned. The current ordering can expose partially initialized state to concurrent callers, leading to a nil dereference in `checkHeight`.

Suggested fix (serialize init + publish state last):

```diff
 import (
 	"context"
 	"errors"
 	"fmt"
+	"sync"
 @@
 type ScriptExecutor struct {
 	log zerolog.Logger
 @@
 	maxCompatibleHeight *atomic.Uint64
+	initMu sync.Mutex
 }
 @@
 func (s *ScriptExecutor) Initialize(
 	indexReporter indexReporter,
 	scriptExecutor *execution.Scripts,
 	versionControl *version.VersionControl,
 ) error {
-	if s.initialized.CompareAndSwap(false, true) {
-		s.log.Info().Msg("script executor initialized")
-		s.indexReporter = indexReporter
-		s.scriptExecutor = scriptExecutor
-		s.versionControl = versionControl
-		return nil
-	}
-	return fmt.Errorf("script executor already initialized")
+	if indexReporter == nil || scriptExecutor == nil {
+		return fmt.Errorf("script executor dependencies must not be nil")
+	}
+
+	s.initMu.Lock()
+	defer s.initMu.Unlock()
+
+	if s.initialized.Load() {
+		return fmt.Errorf("script executor already initialized")
+	}
+
+	s.indexReporter = indexReporter
+	s.scriptExecutor = scriptExecutor
+	s.versionControl = versionControl
+	s.initialized.Store(true)
+	s.log.Info().Msg("script executor initialized")
+	return nil
 }
```

As per coding guidelines: "Use proper synchronization primitives to maintain state consistency, especially in concurrent or distributed contexts."
Also applies to: 204-209
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/access/rpc/backend/script_executor.go` around lines 81 - 89, The initialization currently publishes s.initialized before its dependencies are assigned, risking concurrent access (e.g., in checkHeight); assign s.indexReporter, s.scriptExecutor, and s.versionControl first, then atomically publish the initialized flag (use s.initialized.CompareAndSwap(false, true) or a mutex) so the boolean flips only after all dependencies are set; apply the same change to the other initialization block referenced (lines ~204-209) to ensure consistent publish-last initialization.
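The publish-last pattern described above can be sketched in isolation. This is a minimal illustration, not the real Flow code: the `indexReporter` and `scripts` types here are simplified stand-ins for the actual dependencies of `ScriptExecutor`.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Hypothetical stand-ins for the executor's real dependencies.
type indexReporter struct{}
type scripts struct{}

type ScriptExecutor struct {
	initMu      sync.Mutex
	initialized atomic.Bool
	reporter    *indexReporter
	scripts     *scripts
}

// Initialize assigns all dependencies first and flips the initialized
// flag last, so concurrent readers of the flag never observe a
// partially constructed executor.
func (s *ScriptExecutor) Initialize(r *indexReporter, sc *scripts) error {
	if r == nil || sc == nil {
		return fmt.Errorf("script executor dependencies must not be nil")
	}

	s.initMu.Lock()
	defer s.initMu.Unlock()

	if s.initialized.Load() {
		return fmt.Errorf("script executor already initialized")
	}

	s.reporter = r
	s.scripts = sc
	s.initialized.Store(true) // publish last
	return nil
}

func main() {
	var s ScriptExecutor
	fmt.Println(s.Initialize(&indexReporter{}, &scripts{})) // <nil>
	fmt.Println(s.Initialize(&indexReporter{}, &scripts{})) // already-initialized error
}
```

The mutex serializes concurrent `Initialize` calls, while the atomic flag keeps the hot read path (`initialized.Load()`) lock-free.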
🧹 Nitpick comments (6)
storage/account_transfers.go (1)
49-49: Use standard concurrency-safety wording in interface docs.
`NOT CONCURRENCY SAFE.` reads awkwardly. Prefer `NOT CONCURRENTLY SAFE.` or `Not safe for concurrent use.` for clearer API guidance (Line 49, Line 136).

Also applies to: 136-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@storage/account_transfers.go` at line 49, Replace the awkward doc comment "NOT CONCURRENCY SAFE." with clearer standard concurrency wording everywhere it appears (e.g., change to "Not safe for concurrent use." or "NOT CONCURRENTLY SAFE."); search for the exact string "NOT CONCURRENCY SAFE." in this file and update both occurrences (the interface/type doc blocks around the two comments) so the API docs consistently convey that the type/instance must not be used concurrently.

storage/account_transactions.go (1)
48-48: Minor: Consider consistency with other concurrency comments.

The file uses "safe for concurrent access" on lines 16 and 37, while this now reads "NOT CONCURRENCY SAFE". For consistency, you might prefer "NOT SAFE FOR CONCURRENT ACCESS" to match the phrasing used elsewhere in the file.
Both are acceptable and convey the same meaning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@storage/account_transactions.go` at line 48, Update the comment "// NOT CONCURRENCY SAFE." in storage/account_transactions.go to match the phrasing used elsewhere (e.g. "NOT SAFE FOR CONCURRENT ACCESS" or "NOT SAFE FOR CONCURRENT USE") so it is consistent with other concurrency comments in this file; locate the comment near the top of the file and replace the text only (no code changes).

module/state_synchronization/indexer/extended/test_helpers_test.go (1)
97-104: Fix `encodeUInt64Args` comment to match actual return shape.

The comment says “one per id”, but Line 104 returns a single encoded argument blob wrapped as one `[][]byte` entry.

Suggested comment fix:

```diff
-// encodeUInt64Args returns a slice of JSON-CDC encoded UInt64 values, one per id.
-// This mirrors the per-ID encoding used by buildArgs when constructing the
-// arguments slice for ExecuteAtBlockHeight.
+// encodeUInt64Args returns ExecuteAtBlockHeight arguments for getTransactionData.
+// The IDs are encoded into a single argument blob and returned as one entry.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@module/state_synchronization/indexer/extended/test_helpers_test.go` around lines 97 - 104, The comment for encodeUInt64Args is inaccurate: it says “one per id” but the function calls EncodeGetTransactionDataArg and returns a single [][]byte containing that single encoded blob; update the comment for encodeUInt64Args to state it returns a slice with one JSON-CDC encoded argument blob (the per-IDs encoded payload produced by EncodeGetTransactionDataArg), and remove or reword “one per id” to accurately describe the actual return shape produced by encodeUInt64Args.

cmd/access/node_builder/access_node_builder.go (1)
870-887: Rename `scriptExecutorDependendable` to `scriptExecutorDependable`.

This typo is repeated across dependency wiring and makes the startup graph harder to read/grep.
Also applies to: 1048-1048
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/access/node_builder/access_node_builder.go` around lines 870 - 887, The variable name scriptExecutorDependendable is misspelled; rename it to scriptExecutorDependable everywhere it's declared and referenced (e.g., the declaration module.NewProxiedReadyDoneAware(), its use in scriptExecutorDependencies.Add(...), extendedIndexerDependencies.Add(...), and builder.IndexerDependencies.Add(...)) so all dependency wiring uses the consistent identifier scriptExecutorDependable; update any other occurrences (including the duplicate at the later occurrence around the other mentioned line) to match.

engine/access/rest/experimental/request/get_scheduled_transactions.go (1)
16-22: Avoid shadowing `Address` with two different types across embedded request structs.

`GetScheduledTransactions.Address` (`*flow.Address`) and `GetAccountScheduledTransactions.Address` (`flow.Address`) make it easy to read/write the wrong field.

Also applies to: 56-59
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/access/rest/experimental/request/get_scheduled_transactions.go` around lines 16 - 22, The Address field is shadowed across request structs—GetScheduledTransactions.Address is declared as *flow.Address while GetAccountScheduledTransactions.Address is flow.Address—so change one to match the other to avoid confusion and bugs: pick a single canonical type (either pointer or value) and update both struct definitions (GetScheduledTransactions and GetAccountScheduledTransactions) and any code that constructs or reads these structs (constructors, handlers, unmarshalling) to use that canonical type consistently; ensure related references like ScheduledTransactionCursor or any code expecting a pointer/value are adjusted accordingly.

access/backends/extended/backend_scheduled_transactions.go (1)
301-315: Avoid repeated block-wide scans during transaction expansion.

With `expandOptions.Transaction=true`, each scheduled transaction can trigger a full `ScheduledTransactionsByBlockID` fetch and linear search. This can amplify latency on larger pages. Consider caching block lookups (and txID→tx maps) per request.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@access/backends/extended/backend_scheduled_transactions.go` around lines 301 - 315, The code repeatedly calls ScheduledTransactionsByBlockID and scans allScheduledTxs for each scheduled tx when expandOptions.Transaction is true, causing O(n^2) work; fix by caching per-request block lookups: in the surrounding handler (where expandOptions.Transaction, txID, blockID, and tx.Transaction are used) call b.transactionsProvider.ScheduledTransactionsByBlockID once per blockID, build a map[string]ScheduledTransaction (txID→scheduledTx) and then look up each txID from that map instead of rerunning ScheduledTransactionsByBlockID and linear-searching; store the cache in a local map keyed by blockID (or a request-scoped cache struct) so subsequent expansions reuse it.
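The per-request cache suggested above can be sketched as follows. The `scheduledTx` type and the fetch callback are simplified placeholders for the backend's real provider call; only the memoization pattern is the point.

```go
package main

import "fmt"

// Hypothetical minimal model standing in for the backend's type.
type scheduledTx struct {
	ID      string
	BlockID string
}

// blockTxCache memoizes one ScheduledTransactionsByBlockID-style fetch
// per block for the lifetime of a single request.
type blockTxCache struct {
	fetch   func(blockID string) []scheduledTx
	byBlock map[string]map[string]scheduledTx
}

func newBlockTxCache(fetch func(string) []scheduledTx) *blockTxCache {
	return &blockTxCache{
		fetch:   fetch,
		byBlock: make(map[string]map[string]scheduledTx),
	}
}

// lookup returns the transaction with the given ID in blockID,
// fetching and indexing the block's scheduled transactions at most once.
func (c *blockTxCache) lookup(blockID, txID string) (scheduledTx, bool) {
	m, ok := c.byBlock[blockID]
	if !ok {
		m = make(map[string]scheduledTx)
		for _, tx := range c.fetch(blockID) {
			m[tx.ID] = tx
		}
		c.byBlock[blockID] = m
	}
	tx, ok := m[txID]
	return tx, ok
}

func main() {
	fetches := 0
	cache := newBlockTxCache(func(blockID string) []scheduledTx {
		fetches++
		return []scheduledTx{{ID: "a", BlockID: blockID}, {ID: "b", BlockID: blockID}}
	})
	cache.lookup("block1", "a")
	cache.lookup("block1", "b") // served from the cached map
	fmt.Println(fetches)        // 1
}
```

Scoping the cache to a single request (rather than the backend struct) keeps it free of invalidation concerns.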
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@access/backends/extended/backend_scheduled_transactions.go`:
- Line 158: The comment in backend_scheduled_transactions.go incorrectly
references a non-existent field filter.Address on ScheduledTransactionFilter;
update the comment on the ScheduledTransactionFilter usage to remove or replace
the stale reference (e.g., change "When filter.Address is set, results are
scoped to that address; otherwise all are returned." to a neutral description
such as "When the filter includes an address constraint, results are scoped to
that address; otherwise all are returned." or explicitly reference the actual
address field name if one exists in ScheduledTransactionFilter), ensuring the
comment matches the real fields on ScheduledTransactionFilter and does not
mention filter.Address.
In `@engine/access/rest/experimental/models/contract.go`:
- Around line 6-9: The Contract.Build method dereferences the incoming pointer
without checking for nil; modify Contract.Build(contract *accessmodel.Contract)
to guard against a nil input by checking if contract == nil and returning early
(no-op) or setting sensible defaults, ensuring c.Identifier and c.Body are not
accessed when contract is nil; update callers or add a comment documenting the
nil-safe behavior for the Contract.Build method.
In `@engine/access/rest/experimental/models/scheduled_transaction.go`:
- Around line 19-24: The current code silently coerces unknown tx.Status and
tx.Priority values by calling ScheduledTransactionStatus.Build and
ScheduledTransactionPriority.Build and assigning their zero values to
t.Status/t.Priority; instead, update the Build methods (or call sites) to
perform explicit validation and return an error or boolean when the input is
unrecognized, then propagate that error from the surrounding conversion function
rather than assigning empty/default values. Specifically, change
ScheduledTransactionStatus.Build and ScheduledTransactionPriority.Build to
signal invalid inputs (e.g., return error/ok), or check the return/validation at
the call site where tx.Status/tx.Priority are converted and abort/return an
error when validation fails so invalid inputs are not coerced to empty/low.
- Around line 75-80: The code sets a literal placeholder token by assigning
"handler_contract" to t.Expandable.HandlerContract when tx.HandlerContract is
nil; remove that non-resolvable placeholder and instead leave the expandable
field unset (zero value) or set it to an empty string so clients don't receive
an actionable-but-invalid token; update the branch handling tx.HandlerContract
(references: tx.HandlerContract, t.HandlerContract,
t.Expandable.HandlerContract) to omit the placeholder assignment and keep the
expandable entry absent/empty until a real link can be provided.
In `@engine/access/rest/experimental/request/cursor_scheduled_transactions.go`:
- Around line 32-33: The EncodeScheduledTxCursor function dereferences cursor
without a nil check which can panic; update EncodeScheduledTxCursor to validate
that the input cursor != nil and return a typed validation error (rather than
panicking) when nil, then proceed to json.Marshal(scheduledTxCursor{ID:
cursor.ID}) only after the nil check; reference the EncodeScheduledTxCursor
function and scheduledTxCursor construction so the guard is added at the top of
that function and an appropriate error value is returned.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go`:
- Around line 107-115: The code reading the "status" query param splits on
commas but doesn't trim whitespace, causing ParseScheduledTransactionStatus to
receive tokens like " cancelled"; update the loop that processes rawStatuses
(after raw := r.GetQueryParam("status") and rawStatuses := strings.Split(raw,
",")) to call strings.TrimSpace on each rawStatus before passing it to
accessmodel.ParseScheduledTransactionStatus and before appending to
filter.Statuses so tokens like " cancelled" become "cancelled".
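The trimming fix is a one-line change inside the split loop; a standalone sketch of the parsing helper (the function name here is illustrative, not the real handler code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseStatuses splits a comma-separated query value and trims
// whitespace from each token before it is handed to validation,
// so " cancelled" becomes "cancelled".
func parseStatuses(raw string) []string {
	var out []string
	for _, tok := range strings.Split(raw, ",") {
		tok = strings.TrimSpace(tok)
		if tok == "" {
			continue // skip empty segments such as trailing commas
		}
		out = append(out, tok)
	}
	return out
}

func main() {
	fmt.Println(parseStatuses("pending, cancelled ,executed")) // [pending cancelled executed]
}
```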
In `@integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go`:
- Around line 188-205: The pagination helper collectScheduledPages loses account
scoping because it always builds the next page URL using
"/experimental/v1/scheduled"; modify collectScheduledPages so it preserves the
original endpoint path from firstURL (e.g., keep "/accounts/{address}/scheduled"
when present) when constructing the next URL instead of hardcoding
"/experimental/v1/scheduled" — you can extract the path prefix from firstURL (or
use strings.HasPrefix / URL parsing) and reuse that prefix with
fmt.Sprintf("%s%s?limit=%d&cursor=%s", s.restBaseURL, pathPrefix, pageSize,
nextCursor); ensure to import "strings" if using it and keep
fetchScheduledTxJSONBody and restBaseURL unchanged.
In `@model/access/scheduled_transaction.go`:
- Around line 27-33: The String() methods for the enum types (e.g.,
ScheduledTransactionStatus.String) currently panic on unknown values; change
them to return a safe fallback string instead of panicking (for example return
fmt.Sprintf("ScheduledTransactionStatus(%d)", s) or "unknown
ScheduledTransactionStatus" when the map lookup fails). Apply the same
non-panicking fallback to the other enum String() implementation referenced
around lines 68–74 so both methods return a descriptive fallback string rather
than calling panic.
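The non-panicking `String()` pattern can be sketched as below. The concrete enum values and names are invented for illustration; only the fallback shape matters.

```go
package main

import "fmt"

type ScheduledTransactionStatus uint8

// statusNames holds the known values; the mapping here is hypothetical.
var statusNames = map[ScheduledTransactionStatus]string{
	1: "pending",
	2: "executed",
	3: "cancelled",
}

// String returns a descriptive fallback for unknown values instead of
// panicking, so logging an unexpected enum can never crash the node.
func (s ScheduledTransactionStatus) String() string {
	if name, ok := statusNames[s]; ok {
		return name
	}
	return fmt.Sprintf("ScheduledTransactionStatus(%d)", uint8(s))
}

func main() {
	fmt.Println(ScheduledTransactionStatus(2))  // executed
	fmt.Println(ScheduledTransactionStatus(99)) // ScheduledTransactionStatus(99)
}
```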
In `@module/execution/scripts.go`:
- Around line 206-210: The doc comment for RegisterAtHeight currently says
missing registers return nil with no error, which conflicts with the
RegisterValue comment that lists storage.ErrNotFound; update the
RegisterAtHeight function comment to state that missing registers return
storage.ErrNotFound (and mention storage.ErrHeightNotIndexed for unindexed
heights) so both RegisterAtHeight and RegisterValue share the same canonical
error contract; ensure the wording and examples align across both comments to
avoid callers handling the wrong branch.
In `@module/metrics/extended_indexing.go`:
- Around line 71-87: The counter update methods ScheduledTransactionIndexed,
FTTransferIndexed, and NFTTransferIndexed currently pass signed ints directly
into Prometheus Counters (scheduledTxCount, scheduledTxBackfillCount,
ftTransferCount, nftTransferCount) which will panic on negative values; modify
each method to validate inputs before calling Add(): if a value is negative, do
not call Add and instead handle explicitly (e.g., emit a warning log including
the function name and offending value or return an error) so negatives are
surfaced rather than silently ignored, and only call counter.Add(float64(x)) for
validated non-negative ints.
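The guard can be illustrated with a stand-in counter; the real code would call a `prometheus.Counter`, whose `Add` panics on negative values, so the check must happen before the call. The `counter` and `metrics` types below are simplified mock-ups.

```go
package main

import (
	"fmt"
	"log"
)

// counter stands in for prometheus.Counter, which panics when Add
// receives a negative value.
type counter struct{ total float64 }

func (c *counter) Add(v float64) {
	if v < 0 {
		panic("counter cannot decrease")
	}
	c.total += v
}

type metrics struct{ scheduledTxCount counter }

// ScheduledTransactionIndexed surfaces negative inputs with a warning
// and skips the Add, rather than letting the counter panic.
func (m *metrics) ScheduledTransactionIndexed(n int) {
	if n < 0 {
		log.Printf("WARN ScheduledTransactionIndexed called with negative value %d; skipping", n)
		return
	}
	m.scheduledTxCount.Add(float64(n))
}

func main() {
	m := &metrics{}
	m.ScheduledTransactionIndexed(3)
	m.ScheduledTransactionIndexed(-2) // logged and skipped, no panic
	fmt.Println(m.scheduledTxCount.total)
}
```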
In `@module/state_synchronization/indexer/extended/account_ft_transfers.go`:
- Around line 41-47: The constructor for FungibleTokenTransfers sets the metrics
field but code later calls a.metrics.FTTransferIndexed(...) unguarded; add nil
checks before invoking metrics methods (e.g., in FungibleTokenTransfers methods
that call FTTransferIndexed at lines referenced) to avoid panics when metrics is
not injected — either skip the call when a.metrics == nil or provide a no-op
metrics wrapper during construction; reference the FungibleTokenTransfers struct
and the FTTransferIndexed method when applying the guard.
In `@module/state_synchronization/indexer/extended/account_nft_transfers.go`:
- Around line 32-39: The constructor for NonFungibleTokenTransfers (created in
the returned struct) does not guarantee metrics is non-nil, but methods call
a.metrics.NFTTransferIndexed(...) unguarded; add a nil-safety: either
require/validate metrics at construction or wrap calls with a nil-check (e.g.,
if a.metrics != nil { a.metrics.NFTTransferIndexed(...) }) or inject a no-op
implementation that implements ExtendedIndexingMetrics. Update all usages of
a.metrics.NFTTransferIndexed (and similar metrics calls) in
NonFungibleTokenTransfers methods to use the chosen nil-safe pattern so a nil
metrics dependency cannot cause a panic.
In `@module/state_synchronization/indexer/extended/scheduled_transaction_data.go`:
- Around line 127-133: The ScheduledTransaction Timestamp and Fees fields are
being cast directly from cadence.UFix64 (raw.Timestamp, raw.Fees) to uint64
which yields the internal fixed-point value (scaled by 1e8); parse the UFix64
string representation (e.g., call ToString/ToGoValue on raw.Timestamp/raw.Fees
or otherwise obtain the decimal string), convert it to a numeric type, divide by
100000000 (1e8) to normalize the fixed-point scaling, then cast that normalized
integer to uint64 before assigning to ScheduledTransaction.Timestamp and
ScheduledTransaction.Fees in the constructor that returns
access.ScheduledTransaction.
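The scaling fix amounts to dividing the raw fixed-point integer by 1e8. A minimal sketch, assuming the raw value is Cadence's UFix64 internal representation (an integer scaled by 10^8) and that truncating the fractional part is acceptable for these fields:

```go
package main

import "fmt"

// ufix64Factor is Cadence's UFix64 fixed-point scaling factor (1e8).
const ufix64Factor = 100_000_000

// normalizeUFix64 converts a raw UFix64 value (the internal integer,
// scaled by 1e8) to whole units, discarding the fractional part.
func normalizeUFix64(raw uint64) uint64 {
	return raw / ufix64Factor
}

func main() {
	// 1700000000.5 encoded as a UFix64 raw value:
	raw := uint64(170000000050000000)
	fmt.Println(normalizeUFix64(raw)) // 1700000000
}
```

If the fractional part matters (e.g., for fees), the remainder `raw % ufix64Factor` would need to be preserved separately instead of truncated.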
In
`@module/state_synchronization/indexer/extended/scheduled_transaction_requester.go`:
- Around line 45-50: The Fetch method on ScheduledTransactionRequester
dereferences fields on the scheduledTransactionData pointer (executedEntries,
canceledEntries, failedEntries) without checking for nil; add a nil guard at the
start of Fetch to handle a nil data argument (or initialize data to an empty
scheduledTransactionData) and ensure subsequent code uses the safe non-nil
value, so accesses to data.executedEntries, data.canceledEntries and
data.failedEntries cannot panic; update any early returns or default behavior
accordingly in Fetch.
- Around line 101-135: The code must validate the returned array cardinality and
ID consistency before indexing batch or inserting into missingTxs: after
decoding results in ExecuteAtBlockHeight (in the loop over slices.Chunk and
variables batch, array, results), check that len(array.Values) == len(batch) and
return an explicit error if not; avoid using batch[i] unless i < len(batch);
after decodeTransactionData, verify decoded.ID matches the expected lookup ID
from batch (or otherwise detect/report mismatches) and fail on duplicate
decoded.IDs instead of silently overwriting missingTxs; update error messages in
this block (around EncodeGetTransactionDataArg, ExecuteAtBlockHeight,
decodeTransactionData) to include these checks and return clear errors when
cardinality, index safety, ID mismatch, or duplicate IDs are detected.
In `@module/state_synchronization/indexer/extended/scheduled_transactions.go`:
- Around line 349-350: Check and validate pendingEventTxIndex (derived from
event.TransactionIndex) before using it to slice data.Transactions in the loop;
if pendingEventTxIndex is negative or >= len(data.Transactions) return a
descriptive error instead of slicing to avoid a panic. Locate the loop that
iterates over data.Transactions[*pendingEventTxIndex:] and add a guard that
verifies the index bounds, returning an error (with context about the event and
index) when out of range; keep the existing s.isExecutorTransaction(tx) logic
unchanged after the guard.
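The bounds guard is a few lines before the slice expression. A generic sketch (the `sliceFrom` helper is illustrative; in the real code the check would sit inline before `data.Transactions[*pendingEventTxIndex:]`), mirroring the review's "negative or >= len" condition:

```go
package main

import "fmt"

// sliceFrom validates a start index before slicing, returning a
// descriptive error instead of panicking on out-of-range input.
func sliceFrom(txs []string, start int) ([]string, error) {
	if start < 0 || start >= len(txs) {
		return nil, fmt.Errorf("transaction index %d out of range for %d transactions", start, len(txs))
	}
	return txs[start:], nil
}

func main() {
	txs := []string{"tx0", "tx1", "tx2"}
	rest, err := sliceFrom(txs, 1)
	fmt.Println(rest, err) // [tx1 tx2] <nil>
	_, err = sliceFrom(txs, 5)
	fmt.Println(err != nil) // true
}
```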
- Around line 218-224: The ScheduledTransactionIndexed call can receive a
negative first argument because it uses len(collected.newTxs)-len(missingIDs);
change this to compute the actual number of scheduled/new transactions (e.g.,
count newTxs that are not in missingIDs or use max(0,
len(collected.newTxs)-len(missingIDs))) before calling
metrics.ScheduledTransactionIndexed so the metric never goes below zero; update
the call site (metrics.ScheduledTransactionIndexed(...)) to pass that
non-negative computed value instead of the raw subtraction, referencing
collected.newTxs and missingIDs.
In `@storage/indexes/scheduled_transactions_bootstrapper.go`:
- Around line 158-164: The CAS failure in the bootstrap path (the
b.store.CompareAndSwap(nil, store) branch inside the scheduled transactions
bootstrapper) represents an irrecoverable invariant violation and must not be
returned as a normal error; replace the fmt.Errorf return with an exception
raised via the irrecoverable package (use the project's irrecoverable API to
create/raise an unrecoverable error with a clear message including context that
scheduled transactions were initialized during bootstrap), so that callers
cannot treat this state-corruption path as recoverable.
In `@storage/indexes/scheduled_transactions.go`:
- Around line 158-162: Update the misleading doc comments for the range-key
helper functions so they no longer state “Any error indicates the cursor is
invalid” (these functions do not return errors). For example, revise the comment
above func (idx *ScheduledTransactionsIndex) rangeKeysAll(cursor
*access.ScheduledTransactionCursor) (startKey, endKey []byte) and the similar
comment above rangeKeysForAccount to simply describe what the function computes
(the start and end keys for iterating scheduled transactions based on the
provided cursor) and note that invalid cursor values will affect the returned
keys rather than producing an error.
- Around line 222-235: The code only checks persistent storage via
operation.KeyExists(rw.GlobalReader(), primaryKey) but must also detect
duplicate tx.IDs inside the incoming batch (scheduledTxs) to avoid overwriting
primary state and leaving by-address index inconsistent; before calling
operation.UpsertByKey for each tx, iterate the batch or maintain a local
map[uint64]struct{} (or similar) and error out if a tx.ID is already seen,
returning a clear error (e.g., fmt.Errorf("duplicate scheduled transaction ID in
batch: %d", tx.ID)); keep the existing persistent-key checks (primaryKey,
makeScheduledTxByAddrKey) but add this in-memory duplicate guard early in the
processing loop that calls operation.KeyExists and operation.UpsertByKey.
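The in-memory duplicate guard can be sketched as a standalone check; the `scheduledTx` type is a placeholder for the real batch element, and in the actual code this would run inside the batch-processing loop alongside the persistent `KeyExists` checks.

```go
package main

import "fmt"

type scheduledTx struct{ ID uint64 }

// checkBatchDuplicates rejects batches containing repeated IDs, so a
// later entry cannot silently overwrite an earlier one before the
// per-tx persistent-key checks run.
func checkBatchDuplicates(batch []scheduledTx) error {
	seen := make(map[uint64]struct{}, len(batch))
	for _, tx := range batch {
		if _, ok := seen[tx.ID]; ok {
			return fmt.Errorf("duplicate scheduled transaction ID in batch: %d", tx.ID)
		}
		seen[tx.ID] = struct{}{}
	}
	return nil
}

func main() {
	fmt.Println(checkBatchDuplicates([]scheduledTx{{ID: 1}, {ID: 2}, {ID: 3}}))        // <nil>
	fmt.Println(checkBatchDuplicates([]scheduledTx{{ID: 1}, {ID: 2}, {ID: 1}}) != nil) // true
}
```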
---
Outside diff comments:
In `@engine/access/rpc/backend/script_executor.go`:
- Around line 81-89: The initialization currently publishes s.initialized before
its dependencies are assigned, risking concurrent access (e.g., in checkHeight);
assign s.indexReporter, s.scriptExecutor, and s.versionControl first, then
atomically publish the initialized flag (use s.initialized.CompareAndSwap(false,
true) or a mutex) so the boolean flips only after all dependencies are set;
apply the same change to the other initialization block referenced (lines
~204-209) to ensure consistent publish-last initialization.
---
Nitpick comments:
In `@access/backends/extended/backend_scheduled_transactions.go`:
- Around line 301-315: The code repeatedly calls ScheduledTransactionsByBlockID
and scans allScheduledTxs for each scheduled tx when expandOptions.Transaction
is true, causing O(n^2) work; fix by caching per-request block lookups: in the
surrounding handler (where expandOptions.Transaction, txID, blockID, and
tx.Transaction are used) call
b.transactionsProvider.ScheduledTransactionsByBlockID once per blockID, build a
map[string]ScheduledTransaction (txID→scheduledTx) and then look up each txID
from that map instead of rerunning ScheduledTransactionsByBlockID and
linear-searching; store the cache in a local map keyed by blockID (or a
request-scoped cache struct) so subsequent expansions reuse it.
In `@cmd/access/node_builder/access_node_builder.go`:
- Around line 870-887: The variable name scriptExecutorDependendable is
misspelled; rename it to scriptExecutorDependable everywhere it's declared and
referenced (e.g., the declaration module.NewProxiedReadyDoneAware(), its use in
scriptExecutorDependencies.Add(...), extendedIndexerDependencies.Add(...), and
builder.IndexerDependencies.Add(...)) so all dependency wiring uses the
consistent identifier scriptExecutorDependable; update any other occurrences
(including the duplicate at the later occurrence around the other mentioned
line) to match.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go`:
- Around line 16-22: The Address field is shadowed across request
structs—GetScheduledTransactions.Address is declared as *flow.Address while
GetAccountScheduledTransactions.Address is flow.Address—so change one to match
the other to avoid confusion and bugs: pick a single canonical type (either
pointer or value) and update both struct definitions (GetScheduledTransactions
and GetAccountScheduledTransactions) and any code that constructs or reads these
structs (constructors, handlers, unmarshalling) to use that canonical type
consistently; ensure related references like ScheduledTransactionCursor or any
code expecting a pointer/value are adjusted accordingly.
In `@module/state_synchronization/indexer/extended/test_helpers_test.go`:
- Around line 97-104: The comment for encodeUInt64Args is inaccurate: it says
“one per id” but the function calls EncodeGetTransactionDataArg and returns a
single [][]byte containing that single encoded blob; update the comment for
encodeUInt64Args to state it returns a slice with one JSON-CDC encoded argument
blob (the per-IDs encoded payload produced by EncodeGetTransactionDataArg), and
remove or reword “one per id” to accurately describe the actual return shape
produced by encodeUInt64Args.
In `@storage/account_transactions.go`:
- Line 48: Update the comment "// NOT CONCURRENCY SAFE." in
storage/account_transactions.go to match the phrasing used elsewhere (e.g. "NOT
SAFE FOR CONCURRENT ACCESS" or "NOT SAFE FOR CONCURRENT USE") so it is
consistent with other concurrency comments in this file; locate the comment near
the top of the file and replace the text only (no code changes).
In `@storage/account_transfers.go`:
- Line 49: Replace the awkward doc comment "NOT CONCURRENCY SAFE." with clearer
standard concurrency wording everywhere it appears (e.g., change to "Not safe
for concurrent use." or "NOT CONCURRENTLY SAFE."); search for the exact string
"NOT CONCURRENCY SAFE." in this file and update both occurrences (the
interface/type doc blocks around the two comments) so the API docs consistently
convey that the type/instance must not be used concurrently.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- go.sum is excluded by !**/*.sum
- integration/go.sum is excluded by !**/*.sum
📒 Files selected for processing (69)
- access/backends/extended/api.go
- access/backends/extended/backend.go
- access/backends/extended/backend_base.go
- access/backends/extended/backend_scheduled_transactions.go
- access/backends/extended/backend_scheduled_transactions_test.go
- access/backends/extended/mock/api.go
- cmd/access/node_builder/access_node_builder.go
- cmd/observer/node_builder/observer_builder.go
- engine/access/rest/experimental/models/contract.go
- engine/access/rest/experimental/models/model_contract.go
- engine/access/rest/experimental/models/model_scheduled_transaction.go
- engine/access/rest/experimental/models/model_scheduled_transaction__expandable.go
- engine/access/rest/experimental/models/model_scheduled_transaction_priority.go
- engine/access/rest/experimental/models/model_scheduled_transaction_status.go
- engine/access/rest/experimental/models/model_scheduled_transactions_response.go
- engine/access/rest/experimental/models/scheduled_transaction.go
- engine/access/rest/experimental/request/cursor_scheduled_transactions.go
- engine/access/rest/experimental/request/get_scheduled_transactions.go
- engine/access/rest/experimental/routes/scheduled_transactions.go
- engine/access/rest/experimental/routes/scheduled_transactions_test.go
- engine/access/rest/router/routes_experimental.go
- engine/access/rpc/backend/script_executor.go
- engine/access/rpc/backend/script_executor_test.go
- go.mod
- integration/go.mod
- integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go
- integration/tests/access/cohort3/extended_indexing_test.go
- model/access/contract.go
- model/access/scheduled_transaction.go
- module/execution/scripts.go
- module/execution/scripts_test.go
- module/metrics.go
- module/metrics/extended_indexing.go
- module/metrics/noop.go
- module/mock/extended_indexing_metrics.go
- module/state_synchronization/indexer/extended/account_ft_transfers.go
- module/state_synchronization/indexer/extended/account_ft_transfers_test.go
- module/state_synchronization/indexer/extended/account_nft_transfers.go
- module/state_synchronization/indexer/extended/account_nft_transfers_test.go
- module/state_synchronization/indexer/extended/bootstrap.go
- module/state_synchronization/indexer/extended/bootstrap/bootstrap.go
- module/state_synchronization/indexer/extended/events/helpers.go
- module/state_synchronization/indexer/extended/events/helpers_test.go
- module/state_synchronization/indexer/extended/events/scheduled_transaction.go
- module/state_synchronization/indexer/extended/mock/script_executor.go
- module/state_synchronization/indexer/extended/scheduled_transaction_data.go
- module/state_synchronization/indexer/extended/scheduled_transaction_data_test.go
- module/state_synchronization/indexer/extended/scheduled_transaction_requester.go
- module/state_synchronization/indexer/extended/scheduled_transaction_requester_test.go
- module/state_synchronization/indexer/extended/scheduled_transactions.go
- module/state_synchronization/indexer/extended/scheduled_transactions_test.go
- module/state_synchronization/indexer/extended/test_helpers_test.go
- module/state_synchronization/indexer/indexer_core.go
- module/state_synchronization/indexer/indexer_core_test.go
- storage/account_transactions.go
- storage/account_transfers.go
- storage/errors.go
- storage/indexes/prefix.go
- storage/indexes/scheduled_transactions.go
- storage/indexes/scheduled_transactions_bootstrapper.go
- storage/indexes/scheduled_transactions_bootstrapper_test.go
- storage/indexes/scheduled_transactions_test.go
- storage/locks.go
- storage/mock/scheduled_transactions_index.go
- storage/mock/scheduled_transactions_index_bootstrapper.go
- storage/mock/scheduled_transactions_index_range_reader.go
- storage/mock/scheduled_transactions_index_reader.go
- storage/mock/scheduled_transactions_index_writer.go
- storage/scheduled_transactions_index.go
💤 Files with no reviewable changes (3)
- module/state_synchronization/indexer/indexer_core.go
- module/state_synchronization/indexer/indexer_core_test.go
- module/state_synchronization/indexer/extended/bootstrap.go
```go
func (c *Contract) Build(contract *accessmodel.Contract) {
	c.Identifier = contract.Identifier
	c.Body = contract.Body
}
```
Guard nil input in Contract.Build to prevent panic.
Line 7 and Line 8 dereference contract unconditionally. If a nil slips through, this panics in the REST model build path.
Proposed fix
```diff
 func (c *Contract) Build(contract *accessmodel.Contract) {
+	if contract == nil {
+		c.Identifier = ""
+		c.Body = ""
+		return
+	}
 	c.Identifier = contract.Identifier
 	c.Body = contract.Body
 }
```

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (c *Contract) Build(contract *accessmodel.Contract) {
	if contract == nil {
		c.Identifier = ""
		c.Body = ""
		return
	}
	c.Identifier = contract.Identifier
	c.Body = contract.Body
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/models/contract.go` around lines 6 - 9, The
Contract.Build method dereferences the incoming pointer without checking for
nil; modify Contract.Build(contract *accessmodel.Contract) to guard against a
nil input by checking if contract == nil and returning early (no-op) or setting
sensible defaults, ensuring c.Identifier and c.Body are not accessed when
contract is nil; update callers or add a comment documenting the nil-safe
behavior for the Contract.Build method.
```go
if tx.HandlerContract != nil {
	t.HandlerContract = new(Contract)
	t.HandlerContract.Build(tx.HandlerContract)
} else {
	t.Expandable.HandlerContract = "handler_contract" // TODO: this will be implemented in the next PR
}
```
Avoid emitting a non-resolvable _expandable.handler_contract placeholder.
Line 79 sets "handler_contract" as a literal token, not a usable link. This can break clients that treat _expandable entries as actionable relations.
💡 Proposed fix
```diff
 if tx.HandlerContract != nil {
 	t.HandlerContract = new(Contract)
 	t.HandlerContract.Build(tx.HandlerContract)
-} else {
-	t.Expandable.HandlerContract = "handler_contract" // TODO: this will be implemented in the next PR
 }
```

📝 Committable suggestion
```go
if tx.HandlerContract != nil {
	t.HandlerContract = new(Contract)
	t.HandlerContract.Build(tx.HandlerContract)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/models/scheduled_transaction.go` around lines
75 - 80, The code sets a literal placeholder token by assigning
"handler_contract" to t.Expandable.HandlerContract when tx.HandlerContract is
nil; remove that non-resolvable placeholder and instead leave the expandable
field unset (zero value) or set it to an empty string so clients don't receive
an actionable-but-invalid token; update the branch handling tx.HandlerContract
(references: tx.HandlerContract, t.HandlerContract,
t.Expandable.HandlerContract) to omit the placeholder assignment and keep the
expandable entry absent/empty until a real link can be provided.
```go
func EncodeScheduledTxCursor(cursor *accessmodel.ScheduledTransactionCursor) (string, error) {
	data, err := json.Marshal(scheduledTxCursor{ID: cursor.ID})
```
Guard nil cursor to prevent panic.
Line 33 dereferences cursor unconditionally. A nil input will panic instead of returning a typed validation error.
💡 Proposed fix
```diff
 func EncodeScheduledTxCursor(cursor *accessmodel.ScheduledTransactionCursor) (string, error) {
+	if cursor == nil {
+		return "", fmt.Errorf("invalid cursor: nil")
+	}
 	data, err := json.Marshal(scheduledTxCursor{ID: cursor.ID})
 	if err != nil {
 		return "", fmt.Errorf("failed to marshal cursor: %w", err)
 	}
 	return base64.RawURLEncoding.EncodeToString(data), nil
 }
```

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/request/cursor_scheduled_transactions.go`
around lines 32 - 33, The EncodeScheduledTxCursor function dereferences cursor
without a nil check which can panic; update EncodeScheduledTxCursor to validate
that the input cursor != nil and return a typed validation error (rather than
panicking) when nil, then proceed to json.Marshal(scheduledTxCursor{ID:
cursor.ID}) only after the nil check; reference the EncodeScheduledTxCursor
function and scheduledTxCursor construction so the guard is added at the top of
that function and an appropriate error value is returned.
```go
s.metrics.ScheduledTransactionIndexed(
	len(collected.newTxs)-len(missingIDs),
	len(collected.executedEntries),
	len(collected.failedEntries),
	len(collected.canceledEntries),
	len(missingIDs),
)
```
The `scheduled` metric can go negative in backfill scenarios.
Using len(collected.newTxs)-len(missingIDs) undercounts and may produce negative values when a block has only backfilled IDs.
💡 Proposed fix
```diff
 newTxs := collected.newTxs
+backfilledCount := 0
 if len(missingIDs) > 0 {
@@
 	missingTxs, err := s.requester.Fetch(context.TODO(), missingIDs, data.Header.Height-1, collected)
 	if err != nil {
 		return fmt.Errorf("failed to fetch scheduled transaction data from state: %w", err)
 	}
+	backfilledCount = len(missingTxs)
 	newTxs = append(newTxs, missingTxs...)
 }
@@
 s.metrics.ScheduledTransactionIndexed(
-	len(collected.newTxs)-len(missingIDs),
+	len(collected.newTxs),
 	len(collected.executedEntries),
 	len(collected.failedEntries),
 	len(collected.canceledEntries),
-	len(missingIDs),
+	backfilledCount,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@module/state_synchronization/indexer/extended/scheduled_transactions.go`
around lines 218 - 224, The ScheduledTransactionIndexed call can receive a
negative first argument because it uses len(collected.newTxs)-len(missingIDs);
change this to compute the actual number of scheduled/new transactions (e.g.,
count newTxs that are not in missingIDs or use max(0,
len(collected.newTxs)-len(missingIDs))) before calling
metrics.ScheduledTransactionIndexed so the metric never goes below zero; update
the call site (metrics.ScheduledTransactionIndexed(...)) to pass that
non-negative computed value instead of the raw subtraction, referencing
collected.newTxs and missingIDs.
```go
for _, tx := range data.Transactions[*pendingEventTxIndex:] {
	if !s.isExecutorTransaction(tx) {
```
Validate pendingEventTxIndex before slicing transactions.
Line 349 can panic if event.TransactionIndex is out of bounds. This should return an error, not crash the indexer.
💡 Proposed fix
```diff
-	for _, tx := range data.Transactions[*pendingEventTxIndex:] {
+	start := int(*pendingEventTxIndex)
+	if start >= len(data.Transactions) {
+		return nil, fmt.Errorf(
+			"PendingExecution transaction index %d out of bounds for block %d (tx count %d)",
+			start, data.Header.Height, len(data.Transactions),
+		)
+	}
+	for _, tx := range data.Transactions[start:] {
 		if !s.isExecutorTransaction(tx) {
 			continue
 		}
```

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@module/state_synchronization/indexer/extended/scheduled_transactions.go`
around lines 349 - 350, Check and validate pendingEventTxIndex (derived from
event.TransactionIndex) before using it to slice data.Transactions in the loop;
if pendingEventTxIndex is negative or >= len(data.Transactions) return a
descriptive error instead of slicing to avoid a panic. Locate the loop that
iterates over data.Transactions[*pendingEventTxIndex:] and add a guard that
verifies the index bounds, returning an error (with context about the event and
index) when out of range; keep the existing s.isExecutorTransaction(tx) logic
unchanged after the guard.
```go
if !b.store.CompareAndSwap(nil, store) {
	// this should never happen. if it does, there is a bug. this indicates another goroutine
	// successfully initialized `store` since we checked the value above. since the bootstrap
	// operation is protected by the lock and it performs sanity checks to ensure the table
	// is actually empty, the bootstrap operation should fail if there was concurrent access.
	return fmt.Errorf("scheduled transactions initialized during bootstrap")
}
```
🛠️ Refactor suggestion | 🟠 Major
Classify CAS invariant failure as irrecoverable, not a regular error.
Line [159] explicitly documents an impossible state, but Line [163] returns a normal fmt.Errorf. This should be raised as an exception to avoid treating state-corruption paths as recoverable.
Proposed fix
```diff
 import (
 	"errors"
 	"fmt"
 	"github.com/jordanschalm/lockctx"
 	"go.uber.org/atomic"
 	"github.com/onflow/flow-go/model/access"
 	"github.com/onflow/flow-go/model/flow"
+	"github.com/onflow/flow-go/module/irrecoverable"
 	"github.com/onflow/flow-go/storage"
 )
@@
 	if !b.store.CompareAndSwap(nil, store) {
@@
-		return fmt.Errorf("scheduled transactions initialized during bootstrap")
+		return irrecoverable.NewExceptionf("scheduled transactions initialized during bootstrap")
 	}
```

As per coding guidelines, "Use the irrecoverable package for exception handling instead of fmt.Errorf; always explicitly handle errors and never log and continue on a best-effort basis".
📝 Committable suggestion
```go
if !b.store.CompareAndSwap(nil, store) {
	// this should never happen. if it does, there is a bug. this indicates another goroutine
	// successfully initialized `store` since we checked the value above. since the bootstrap
	// operation is protected by the lock and it performs sanity checks to ensure the table
	// is actually empty, the bootstrap operation should fail if there was concurrent access.
	return irrecoverable.NewExceptionf("scheduled transactions initialized during bootstrap")
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@storage/indexes/scheduled_transactions_bootstrapper.go` around lines 158 -
164, The CAS failure in the bootstrap path (the b.store.CompareAndSwap(nil,
store) branch inside the scheduled transactions bootstrapper) represents an
irrecoverable invariant violation and must not be returned as a normal error;
replace the fmt.Errorf return with an exception raised via the irrecoverable
package (use the project's irrecoverable API to create/raise an unrecoverable
error with a clear message including context that scheduled transactions were
initialized during bootstrap), so that callers cannot treat this
state-corruption path as recoverable.
Actionable comments posted: 7
♻️ Duplicate comments (4)
engine/access/rest/experimental/models/scheduled_transaction.go (1)
82-87: ⚠️ Potential issue | 🟠 Major

Remove the non-resolvable `_expandable.handler_contract` placeholder. Setting `"handler_contract"` as a literal token exposes an actionable-looking relation that cannot be resolved by clients.

💡 Proposed fix

```diff
 if tx.HandlerContract != nil {
 	t.HandlerContract = new(Contract)
 	t.HandlerContract.Build(tx.HandlerContract)
-} else {
-	t.Expandable.HandlerContract = "handler_contract" // TODO: this will be implemented in the next PR
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/access/rest/experimental/models/scheduled_transaction.go` around lines 82-87, the else branch currently assigns a non-resolvable token to t.Expandable.HandlerContract; remove that assignment so no placeholder relation is exposed. Keep the t.HandlerContract = new(Contract) and t.HandlerContract.Build(tx.HandlerContract) path, and delete the else branch (or leave it empty) so t.Expandable.HandlerContract is not set to the literal "handler_contract"; ensure no other code expects that placeholder.

module/state_synchronization/indexer/extended/scheduled_transaction_requester.go (2)
127-140: ⚠️ Potential issue | 🟠 Major

Validate decoded ID consistency and reject duplicate lookup IDs. After decode, the code stores by `decoded.ID` (Line 139) without checking it matches `batch[i]`. A mismatched or duplicate decoded ID can silently overwrite map entries and return incomplete/wrong data.

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented".

💡 Proposed fix

```diff
 for i, result := range array.Values {
 	opt, ok := result.(cadence.Optional)
 	if !ok {
 		return nil, fmt.Errorf("expected Optional at index %d, got %T", i, result)
 	}
 	if opt.Value == nil {
 		return nil, fmt.Errorf("scheduled transaction %d had event, but is not found on-chain", batch[i])
 	}
 	decoded, err := decodeTransactionData(opt.Value)
 	if err != nil {
 		return nil, fmt.Errorf("failed to decode scheduled transaction %d: %w", batch[i], err)
 	}
-	missingTxs[decoded.ID] = decoded
+	expectedID := batch[i]
+	if decoded.ID != expectedID {
+		return nil, fmt.Errorf("scheduled transaction ID mismatch at index %d: expected %d, got %d", i, expectedID, decoded.ID)
+	}
+	if _, exists := missingTxs[expectedID]; exists {
+		return nil, fmt.Errorf("duplicate scheduled transaction ID in lookup results: %d", expectedID)
+	}
+	missingTxs[expectedID] = decoded
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@module/state_synchronization/indexer/extended/scheduled_transaction_requester.go` around lines 127-140, after decoding each optional value in the loop, validate that decoded.ID equals the expected lookup ID batch[i] and reject any discrepancy by returning an error; also detect and reject duplicates by checking if decoded.ID is already present in missingTxs before inserting (return an error indicating a duplicate lookup ID if found). Update the loop that uses array.Values, decodeTransactionData, batch, and missingTxs to perform these two checks and fail fast with clear error messages referencing the index i and the offending ID.
45-50: ⚠️ Potential issue | 🔴 Critical

Guard `data` before dereferencing event slices. Line 57, Line 66, and Line 77 dereference `data` unconditionally. If a caller passes `nil`, this panics the process.

💡 Proposed fix

```diff
 func (r *ScheduledTransactionRequester) Fetch(
 	ctx context.Context,
 	lookupIDs []uint64,
 	lookupHeight uint64,
 	data *scheduledTransactionData,
 ) ([]access.ScheduledTransaction, error) {
+	if data == nil {
+		return nil, fmt.Errorf("scheduled transaction data is required")
+	}
+
 	missingTxs, err := r.fetchMissingTxs(ctx, lookupIDs, lookupHeight)
 	if err != nil {
 		return nil, fmt.Errorf("failed to fetch missing scheduled transactions: %w", err)
 	}
```

Also applies to: 57-85

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@module/state_synchronization/indexer/extended/scheduled_transaction_requester.go` around lines 45-50, the Fetch method on ScheduledTransactionRequester currently dereferences fields on the pointer parameter scheduledTransactionData without checking for nil; add a nil guard at the start of ScheduledTransactionRequester.Fetch to handle a nil data pointer (e.g., if data == nil) and return a descriptive error (or initialize an empty scheduledTransactionData if semantics require) before any access to its event slice fields so those accesses cannot panic; update callers/tests as needed to reflect the chosen behavior.

storage/indexes/scheduled_transactions.go (1)
213-231: ⚠️ Potential issue | 🟠 Major

Guard against duplicate scheduled transaction IDs within the same batch payload. Line 218 only checks persisted state via `rw.GlobalReader()`. Duplicate `tx.ID` values inside the current `scheduledTxs` slice can still slip through, overwrite primary state, and leave stale/inconsistent by-address entries.

Proposed fix

```diff
 func storeAllScheduledTransactions(rw storage.ReaderBatchWriter, scheduledTxs []access.ScheduledTransaction) error {
 	writer := rw.Writer()
+	seenInBatch := make(map[uint64]struct{}, len(scheduledTxs))
 	for _, tx := range scheduledTxs {
+		if _, seen := seenInBatch[tx.ID]; seen {
+			return fmt.Errorf("duplicate scheduled transaction %d in batch: %w", tx.ID, storage.ErrAlreadyExists)
+		}
+		seenInBatch[tx.ID] = struct{}{}
+
 		primaryKey := makeScheduledTxPrimaryKey(tx.ID)
 		exists, err := operation.KeyExists(rw.GlobalReader(), primaryKey)
 		if err != nil {
 			return fmt.Errorf("could not check key for tx %d: %w", tx.ID, err)
 		}
```

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@storage/indexes/scheduled_transactions.go` around lines 213-231, storeAllScheduledTransactions currently only checks persistence via operation.KeyExists(rw.GlobalReader(), primaryKey) and misses duplicate tx.IDs inside the scheduledTxs slice; update storeAllScheduledTransactions to detect duplicates within the incoming batch by tracking seen IDs (e.g., use a local map[uint64]struct{}) before calling operation.UpsertByKey, return a clear error when a duplicate ID is found, and ensure you still perform the existing KeyExists check for persisted duplicates; reference makeScheduledTxPrimaryKey, scheduledTxs, operation.KeyExists, and makeScheduledTxByAddrKey when locating where to add the in-memory duplicate guard.
🧹 Nitpick comments (1)
storage/indexes/scheduled_transactions.go (1)
243-344: Consider consolidating the transition flow to reduce drift risk.

`Executed`, `Cancelled`, and `Failed` repeat the same lock-check/load/status-guard/upsert pattern. Extracting a shared helper (with a mutation callback) would reduce maintenance risk and keep transition semantics consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@storage/indexes/scheduled_transactions.go` around lines 243 - 344, The three methods Executed, Cancelled, and Failed duplicate the same lock check, RetrieveByKey, status guard, and UpsertByKey pattern; refactor by adding a helper on ScheduledTransactionsIndex (e.g., updateScheduledTx) that accepts lctx, rw, scheduledTxID and a mutation callback func(*access.ScheduledTransaction) error, which performs the lock verification (storage.LockIndexScheduledTransactionsIndex), loads the record (operation.RetrieveByKey), enforces the Scheduled->terminal guard, calls the callback to modify fields (set Status, ExecutedTransactionID/CancelledTransactionID, FeesReturned/Deducted, etc.), then UpsertByKey and returns errors; update Executed, Cancelled, and Failed to call this helper with small inline callbacks that only set the specific fields.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@access/backends/extended/backend_scheduled_transactions.go`:
- Around line 258-261: Update the doc comment for the expand function in
backend_scheduled_transactions.go to reflect its real behavior: state that
expand enriches executed scheduled transactions with their transaction result,
also performs handler-contract expansion (i.e., resolves/attaches handler and
contract info), and includes failed scheduled transactions in its processing;
also adjust or remove the sentence "No error returns are expected during normal
operation" to accurately reflect whether expand can return errors. Reference the
expand function name and ensure the comment describes both result enrichment and
handler/contract enrichment and handling of failed transactions.
In `@engine/access/rest/experimental/models/scheduled_transaction.go`:
- Around line 13-18: The Build method for ScheduledTransaction dereferences tx
(tx.ID) without a nil guard; add an explicit nil check at the top of
ScheduledTransaction.Build (check if tx == nil) and return a
classified/meaningful error (e.g., invalid input/BadRequest error) instead of
letting it panic; ensure you perform this guard before using tx.ID and any other
tx fields so all dereferences in Build are safe.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go`:
- Around line 55-56: The comment above the struct is out of sync: update the
type comment to match the declared struct name GetAccountScheduledTransactions
(or rename the struct if intended to be GetScheduledTransactions); specifically
edit the comment that currently reads "GetScheduledTransactions holds parsed
request params for the list endpoints." so it references
GetAccountScheduledTransactions and accurately describes that struct.
- Around line 126-140: After parsing start_time and end_time into
filter.StartTime and filter.EndTime, add a validation step that, when both
filter.StartTime and filter.EndTime are non-nil, compares their uint64 values
and returns a 4xx parse error if *filter.StartTime > *filter.EndTime (e.g.,
return a descriptive error like "invalid time window: start_time > end_time");
update the function that calls r.GetQueryParam / strconv.ParseUint (the block
that sets filter.StartTime and filter.EndTime) to perform this check immediately
after both fields can be present so the request is rejected with a client-side
parse error when the window is invalid.
- Around line 30-36: The parsing block for limit should validate and cap values
to an explicit safe range: after parsing with strconv.ParseUint (the existing
code in the r.GetQueryParam("limit") branch), check that parsed >= 1 and then
cap it at a defined constant (e.g., MaxScheduledTransactionsLimit or MAX_LIMIT)
before assigning to req.Limit; return a clear error if parsed == 0 (invalid page
size) and otherwise do req.Limit = uint32(min(parsed,
MaxScheduledTransactionsLimit)). Add the new MaxScheduledTransactionsLimit
constant nearby and use it in the validation to ensure backend never receives
out-of-range values.
In `@integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go`:
- Around line 201-202: The test uses bare http.Get inside fetchScheduledTxJSON
(called by fetchScheduledTxJSONBody within require.Eventually) which can block
forever; create a package-level http.Client with a non-zero Timeout and replace
the http.Get call in fetchScheduledTxJSON with client.Get (or use client.Do) so
each poll has a bounded timeout; ensure the new client is reused across calls
(module-level variable) and propagate/context any errors unchanged.
- Line 194: The polling loop builds the request URL unsafely and uses a bare
http.Get that can hang; change the URL construction to escape the cursor (use
net/url.QueryEscape on nextCursor when building url from firstURL and
nextCursor) and replace the blocking http.Get call with an HTTP request that
respects timeouts (either create an http.Client with a Timeout or perform the
request with a context with timeout) so the polling cannot hang indefinitely;
update the code paths around the variables firstURL, nextCursor and the call
site of http.Get to use the escaped cursor and the timeout-aware client/context.
---
Duplicate comments:
In `@engine/access/rest/experimental/models/scheduled_transaction.go`:
- Around line 82-87: The else branch currently assigns a non-resolvable token to
t.Expandable.HandlerContract; remove that assignment so no placeholder relation
is exposed — in scheduled_transaction.go, inside the block that handles
tx.HandlerContract, keep the t.HandlerContract = new(Contract) and
t.HandlerContract.Build(tx.HandlerContract) path, and delete the else branch (or
leave it empty) so t.Expandable.HandlerContract is not set to the literal
"handler_contract"; ensure no other code expects that placeholder.
In
`@module/state_synchronization/indexer/extended/scheduled_transaction_requester.go`:
- Around line 127-140: After decoding each optional value in the loop, validate
that decoded.ID equals the expected lookup ID batch[i] and reject any
discrepancy by returning an error; also detect and reject duplicates by checking
if decoded.ID is already present in missingTxs before inserting (return an error
indicating a duplicate lookup ID if found). Update the loop that uses
array.Values, decodeTransactionData, batch, and missingTxs to perform these two
checks and fail fast with clear error messages referencing the index i and the
offending ID.
- Around line 45-50: The Fetch method on ScheduledTransactionRequester currently
dereferences fields on the pointer parameter scheduledTransactionData without
checking for nil; add a nil guard at the start of
ScheduledTransactionRequester.Fetch to handle a nil data pointer (e.g., if data
== nil) and return a descriptive error (or initialize an empty
scheduledTransactionData if semantics require) before any access to its event
slice fields so lines that access data.<event...> cannot panic; update
callers/tests as needed to reflect the chosen behavior.
In `@storage/indexes/scheduled_transactions.go`:
- Around line 213-231: storeAllScheduledTransactions currently only checks
persistence via operation.KeyExists(rw.GlobalReader(), primaryKey) and misses
duplicate tx.IDs inside the scheduledTxs slice; update
storeAllScheduledTransactions to detect duplicates within the incoming batch by
tracking seen IDs (e.g., use a local map[int64]bool or
map[access.ScheduledTransactionID]struct{}) before calling
operation.UpsertByKey, return a clear error when a duplicate ID is found, and
ensure you still perform the existing KeyExists check for persisted duplicates;
reference makeScheduledTxPrimaryKey, scheduledTxs, operation.KeyExists, and
makeScheduledTxByAddrKey when locating where to add the in-memory duplicate
guard.
---
Nitpick comments:
In `@storage/indexes/scheduled_transactions.go`:
- Around line 243-344: The three methods Executed, Cancelled, and Failed
duplicate the same lock check, RetrieveByKey, status guard, and UpsertByKey
pattern; refactor by adding a helper on ScheduledTransactionsIndex (e.g.,
updateScheduledTx) that accepts lctx, rw, scheduledTxID and a mutation callback
func(*access.ScheduledTransaction) error, which performs the lock verification
(storage.LockIndexScheduledTransactionsIndex), loads the record
(operation.RetrieveByKey), enforces the Scheduled->terminal guard, calls the
callback to modify fields (set Status,
ExecutedTransactionID/CancelledTransactionID, FeesReturned/Deducted, etc.), then
UpsertByKey and returns errors; update Executed, Cancelled, and Failed to call
this helper with small inline callbacks that only set the specific fields.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- access/backends/extended/backend_scheduled_transactions.go
- engine/access/rest/experimental/models/scheduled_transaction.go
- engine/access/rest/experimental/request/get_scheduled_transactions.go
- integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go
- module/execution/scripts.go
- module/state_synchronization/indexer/extended/scheduled_transaction_requester.go
- module/state_synchronization/indexer/extended/scheduled_transactions.go
- storage/indexes/scheduled_transactions.go
🚧 Files skipped from review as they are similar to previous changes (1)
- module/state_synchronization/indexer/extended/scheduled_transactions.go
```go
func (t *ScheduledTransaction) Build(
	tx *accessmodel.ScheduledTransaction,
	link commonmodels.LinkGenerator,
	expand map[string]bool,
) error {
	t.Id = strconv.FormatUint(tx.ID, 10)
```
Guard tx before dereferencing to prevent panic on malformed input.
tx is dereferenced at Line 18 without a nil check. A nil input will panic the handler path instead of returning a classified error.
💡 Proposed fix
```diff
 func (t *ScheduledTransaction) Build(
 	tx *accessmodel.ScheduledTransaction,
 	link commonmodels.LinkGenerator,
 	expand map[string]bool,
 ) error {
+	if tx == nil {
+		return fmt.Errorf("scheduled transaction is nil")
+	}
+
 	t.Id = strconv.FormatUint(tx.ID, 10)
```

As per coding guidelines, "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/models/scheduled_transaction.go` around lines
13 - 18, The Build method for ScheduledTransaction dereferences tx (tx.ID)
without a nil guard; add an explicit nil check at the top of
ScheduledTransaction.Build (check if tx == nil) and return a
classified/meaningful error (e.g., invalid input/BadRequest error) instead of
letting it panic; ensure you perform this guard before using tx.ID and any other
tx fields so all dereferences in Build are safe.
```go
if raw := r.GetQueryParam("limit"); raw != "" {
	parsed, err := strconv.ParseUint(raw, 10, 32)
	if err != nil {
		return req, fmt.Errorf("invalid limit: %w", err)
	}
	req.Limit = uint32(parsed)
}
```
Validate and cap limit at parse time.
Lines 30-35 accept any uint32, which allows pathological page sizes. Enforce a bounded range (1..max) before passing the value to backend logic.
Suggested fix
func NewGetScheduledTransactions(r *common.Request) (GetScheduledTransactions, error) {
var req GetScheduledTransactions
+ const maxLimit uint32 = 1000 // align with shared API limit constant if one exists
if raw := r.GetQueryParam("limit"); raw != "" {
parsed, err := strconv.ParseUint(raw, 10, 32)
if err != nil {
return req, fmt.Errorf("invalid limit: %w", err)
}
req.Limit = uint32(parsed)
+ if req.Limit == 0 || req.Limit > maxLimit {
+ return req, fmt.Errorf("invalid limit: must be between 1 and %d", maxLimit)
+ }
	}

As per coding guidelines: "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go` around
lines 30 - 36, The parsing block for limit should validate and cap values to an
explicit safe range: after parsing with strconv.ParseUint (the existing code in
the r.GetQueryParam("limit") branch), check that parsed >= 1 and then cap it at
a defined constant (e.g., MaxScheduledTransactionsLimit or MAX_LIMIT) before
assigning to req.Limit; return a clear error if parsed == 0 (invalid page size)
and otherwise do req.Limit = uint32(min(parsed, MaxScheduledTransactionsLimit)).
Add the new MaxScheduledTransactionsLimit constant nearby and use it in the
validation to ensure backend never receives out-of-range values.
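The clamping suggested above can be sketched as a standalone helper. This is a hedged illustration, not flow-go code: `parseLimit` and `maxScheduledTxLimit` are hypothetical names, and the bound should be aligned with whatever shared API limit constant the codebase actually defines.

```go
package main

import (
	"fmt"
	"strconv"
)

// maxScheduledTxLimit is a hypothetical bound; align it with the
// shared API limit constant if one exists in the codebase.
const maxScheduledTxLimit uint32 = 1000

// parseLimit parses a raw query value into a bounded page size.
// An empty value returns 0, meaning "use the server default".
func parseLimit(raw string) (uint32, error) {
	if raw == "" {
		return 0, nil
	}
	parsed, err := strconv.ParseUint(raw, 10, 32)
	if err != nil {
		return 0, fmt.Errorf("invalid limit: %w", err)
	}
	limit := uint32(parsed)
	if limit == 0 || limit > maxScheduledTxLimit {
		return 0, fmt.Errorf("invalid limit: must be between 1 and %d", maxScheduledTxLimit)
	}
	return limit, nil
}

func main() {
	for _, raw := range []string{"", "50", "0", "5000", "abc"} {
		limit, err := parseLimit(raw)
		fmt.Printf("%q -> %d, err=%v\n", raw, limit, err != nil)
	}
}
```

Validating at parse time keeps the backend free of range checks and rejects pathological page sizes with a clear 4xx before any storage work is done.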
// GetScheduledTransactions holds parsed request params for the list endpoints.
type GetAccountScheduledTransactions struct {
Fix the type comment to match the declared struct.
Line 55 says GetScheduledTransactions, but the type declared on Line 56 is GetAccountScheduledTransactions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go` around
lines 55 - 56, The comment above the struct is out of sync: update the type
comment to match the declared struct name GetAccountScheduledTransactions (or
rename the struct if intended to be GetScheduledTransactions); specifically edit
the comment that currently reads "GetScheduledTransactions holds parsed request
params for the list endpoints." so it references GetAccountScheduledTransactions
and accurately describes that struct.
if raw := r.GetQueryParam("start_time"); raw != "" {
	v, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return fmt.Errorf("invalid start_time: %w", err)
	}
	filter.StartTime = &v
}

if raw := r.GetQueryParam("end_time"); raw != "" {
	v, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return fmt.Errorf("invalid end_time: %w", err)
	}
	filter.EndTime = &v
}
Reject invalid time windows where start_time > end_time.
Both fields are validated individually, but the combined constraint is not checked. This should return a 4xx parse error.
Suggested fix
if raw := r.GetQueryParam("end_time"); raw != "" {
v, err := strconv.ParseUint(raw, 10, 64)
if err != nil {
return fmt.Errorf("invalid end_time: %w", err)
}
filter.EndTime = &v
}
+
+ if filter.StartTime != nil && filter.EndTime != nil && *filter.StartTime > *filter.EndTime {
+ return fmt.Errorf("invalid time range: start_time must be <= end_time")
+ }

As per coding guidelines: "Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/request/get_scheduled_transactions.go` around
lines 126 - 140, After parsing start_time and end_time into filter.StartTime and
filter.EndTime, add a validation step that, when both filter.StartTime and
filter.EndTime are non-nil, compares their uint64 values and returns a 4xx parse
error if *filter.StartTime > *filter.EndTime (e.g., return a descriptive error
like "invalid time window: start_time > end_time"); update the function that
calls r.GetQueryParam / strconv.ParseUint (the block that sets filter.StartTime
and filter.EndTime) to perform this check immediately after both fields can be
present so the request is rejected with a client-side parse error when the
window is invalid.
if nextCursor == "" {
	break
}
url = fmt.Sprintf("%s&cursor=%s", firstURL, nextCursor)
🧩 Analysis chain
🏁 Script executed:
cat -n integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go | sed -n '185,210p'

Repository: onflow/flow-go
Length of output: 886
🏁 Script executed:
fd cursor_scheduled_transactions.go --type f
🏁 Script executed:
rg -n "EncodeScheduledTxCursor|base64|URLEncoding|RawURLEncoding|StdEncoding" --type go
🏁 Script executed:
rg -n "http.Get|http.Do|Client.*Timeout" integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go
🏁 Script executed:
rg -n "http.Client|http.Get|timeout|TimeoutType" integration/tests --type go -B2 -A2 | head -100
🏁 Script executed:
rg -n "Client.*Timeout|Request.*Timeout|Timeout.*time\." integration/tests/access --type go | head -20
🏁 Script executed:
cat -n integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go | sed -n '55,120p'
🏁 Script executed:
cat -n integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go | sed -n '195,215p'
Fix HTTP timeout and use query encoding for cursor parameter.
Line 201's bare http.Get() can hang indefinitely in the polling loop. Create an http.Client with a timeout or pass a context with timeout to prevent flaky tests. Additionally, while base64.RawURLEncoding only produces query-safe characters, applying net/url.QueryEscape() to the cursor at line 194 follows defensive best practices for URL construction.
Suggested fixes
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
+ neturl "net/url"
"time"
@@
+ var client = &http.Client{Timeout: 10 * time.Second}
+
func (s *ExtendedIndexingSuite) fetchScheduledTxJSON(url string) map[string]any {
- resp, err := http.Get(url) //nolint:gosec
+ resp, err := client.Get(url)
@@
- url = fmt.Sprintf("%s&cursor=%s", firstURL, nextCursor)
+ url = fmt.Sprintf("%s&cursor=%s", firstURL, neturl.QueryEscape(nextCursor))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go` at
line 194, The polling loop builds the request URL unsafely and uses a bare
http.Get that can hang; change the URL construction to escape the cursor (use
net/url.QueryEscape on nextCursor when building url from firstURL and
nextCursor) and replace the blocking http.Get call with an HTTP request that
respects timeouts (either create an http.Client with a Timeout or perform the
request with a context with timeout) so the polling cannot hang indefinitely;
update the code paths around the variables firstURL, nextCursor and the call
site of http.Get to use the escaped cursor and the timeout-aware client/context.
resp, err := http.Get(url) //nolint:gosec
if err != nil {
🧩 Analysis chain
🏁 Script executed:
# First, let's verify the file exists and read the context around lines 201-202
if [ -f "integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go" ]; then
wc -l "integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go"
echo "---"
sed -n '190,215p' "integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go"
else
echo "File not found"
fi
🏁 Script executed:
# Let's also check how this fetchScheduledTxJSON function is used
rg "fetchScheduledTxJSON" integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go -B 2 -A 2
🏁 Script executed:
# Check if there are similar patterns with http.Get in the codebase
rg "http\.Get\(" --type go | head -20
🏁 Script executed:
# Check if there are any existing patterns with http.Client and timeout in test files
rg "http\.Client.*Timeout" --type go integration/
🏁 Script executed:
# Read more context around the function to understand the complete pattern
sed -n '195,225p' integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go
🏁 Script executed:
# Check how fetchScheduledTxJSONBody is called with require.Eventually
sed -n '1,50p' integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go
🏁 Script executed:
# Look for the import statements to see what's already imported
grep -n "^import\|^import (" integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go -A 30 | head -40
Bare http.Get in polling context risks indefinite test hangs; add timeout.
fetchScheduledTxJSON is called within fetchScheduledTxJSONBody via require.Eventually, a polling loop. Without a timeout, transport stalls (e.g., TCP hangs) cause the HTTP request to block indefinitely, hanging the entire test. Add a module-level client with a timeout as shown:
Suggested fix
import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

+var scheduledTxHTTPClient = &http.Client{Timeout: 5 * time.Second}

func (s *ExtendedIndexingSuite) fetchScheduledTxJSON(url string) map[string]any {
-	resp, err := http.Get(url) //nolint:gosec
+	resp, err := scheduledTxHTTPClient.Get(url) //nolint:gosec
	if err != nil {
		return nil
	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@integration/tests/access/cohort3/extended_indexing_scheduled_txs_test.go`
around lines 201 - 202, The test uses bare http.Get inside fetchScheduledTxJSON
(called by fetchScheduledTxJSONBody within require.Eventually) which can block
forever; create a package-level http.Client with a non-zero Timeout and replace
the http.Get call in fetchScheduledTxJSON with client.Get (or use client.Do) so
each poll has a bounded timeout; ensure the new client is reused across calls
(module-level variable) and propagate/context any errors unchanged.
return &tx, nil
}

// makeScheduledTxPrimaryKey creates a primary key [code][~id].
- // makeScheduledTxPrimaryKey creates a primary key [code][~id].
+ // makeScheduledTxPrimaryKey creates a primary key [code][~txID].
let's be specific
CreatedTransactionID   flow.Identifier
ExecutedTransactionID  flow.Identifier
CancelledTransactionID flow.Identifier
Could we add some comments mentioning the ScheduledTransactionStatus value at which each of these is set? For example:

// ExecutedTransactionID is the tx ID that emits the Executed event; it's updated when Status becomes ScheduledTxStatusExecuted
// transaction where the scheduled transaction was first created.
// When true, the `CreatedTransactionID`, `TransactionHandlerUUID`, and `TransactionHandlerPublicPath`
// fields are undefined.
IsPlaceholder bool
You mean, for instance, the scheduled tx was created at height 100, but the AN bootstrapped at height 200, and the scheduled tx executed at height 300?
What about just keeping CreatedTransactionID as ZeroID and dropping this field?
}

if expandOptions.Transaction {
	allScheduledTxs, err := b.transactionsProvider.ScheduledTransactionsByBlockID(ctx, header)
if we store the blockID and block height inside the scheduled tx, then we could save 3 queries by simply calling headers.BlockIDByHeight.
Note: ScheduledTransactionsByBlockID only needs the blockID and height; it doesn't need the whole header or the contract.
}

for i := range page.Transactions {
	if err := b.expand(ctx, &page.Transactions[i], expandOptions, encodingVersion); err != nil {
This is going to be very heavy. Imagine querying 50 expanded txs: each expand call does 5-7 more queries. This is going to be expensive and slow.
Can we group some calls? For instance, multiple scheduled transactions could request the same block, etc.
We could pass the page.Transactions to a expandBatch(page.Transactions) function which does the grouping.
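The grouping idea above can be sketched with a toy model. This is a hedged illustration only: `scheduledTx` and `fetchBlock` are hypothetical stand-ins for the real types and header/transaction lookups. The point is that N transactions referencing M distinct blocks cost M fetches instead of N.

```go
package main

import "fmt"

// scheduledTx is a toy stand-in for the real type; names are illustrative.
type scheduledTx struct {
	ID      uint64
	BlockID string
}

// expandBatch groups transactions by block so each block's data is
// fetched once, instead of one lookup per transaction.
func expandBatch(txs []scheduledTx, fetchBlock func(string) string) map[uint64]string {
	// Group indices of transactions that share a block.
	byBlock := make(map[string][]int)
	for i, tx := range txs {
		byBlock[tx.BlockID] = append(byBlock[tx.BlockID], i)
	}

	out := make(map[uint64]string, len(txs))
	for blockID, idxs := range byBlock {
		data := fetchBlock(blockID) // one call per distinct block
		for _, i := range idxs {
			out[txs[i].ID] = data
		}
	}
	return out
}

func main() {
	calls := 0
	fetch := func(blockID string) string {
		calls++
		return "data:" + blockID
	}
	txs := []scheduledTx{{1, "A"}, {2, "A"}, {3, "B"}, {4, "A"}}
	out := expandBatch(txs, fetch)
	fmt.Println(len(out), calls) // 4 results from only 2 fetches
}
```

The same grouping applies to any per-transaction lookup in `expand` (headers, results, handler contracts): dedupe the keys first, fetch once per key, then fan the result back out to the page entries.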
Actionable comments posted: 5
♻️ Duplicate comments (4)
engine/access/rest/experimental/models/scheduled_transaction.go (2)
90-95: ⚠️ Potential issue | 🟠 Major

Do not emit a non-resolvable `_expandable.handler_contract` placeholder.

Returning `"handler_contract"` as a literal token is not actionable and can break clients that treat `_expandable` entries as real relations.

Suggested fix
 if tx.HandlerContract != nil {
 	t.HandlerContract = new(Contract)
 	t.HandlerContract.Build(tx.HandlerContract)
-} else {
-	t.Expandable.HandlerContract = "handler_contract" // TODO: this will be implemented in the next PR
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/access/rest/experimental/models/scheduled_transaction.go` around lines 90 - 95, The code currently emits a non-resolvable placeholder by setting t.Expandable.HandlerContract = "handler_contract"; instead, stop returning a fake token: when tx.HandlerContract is nil leave t.Expandable.HandlerContract empty (or unset) and ensure t.HandlerContract remains nil; update the block around tx.HandlerContract / t.HandlerContract / t.Expandable.HandlerContract to remove the literal "handler_contract" assignment so clients won't see an actionable but invalid relation.
14-19: ⚠️ Potential issue | 🟠 Major

Add a nil guard for `tx` in `Build`.

`tx` is dereferenced before validation; malformed upstream input can panic this path.

Suggested fix
 func (t *ScheduledTransaction) Build(
 	tx *accessmodel.ScheduledTransaction,
 	link commonmodels.LinkGenerator,
 	expand map[string]bool,
 ) error {
+	if tx == nil {
+		return fmt.Errorf("scheduled transaction is nil")
+	}
+
 	t.Id = strconv.FormatUint(tx.ID, 10)

As per coding guidelines, “Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/access/rest/experimental/models/scheduled_transaction.go` around lines 14 - 19, In ScheduledTransaction.Build, add a nil-check for the incoming tx parameter before any dereference (e.g., at the top of Build) and return a descriptive error if tx == nil; locate the method ScheduledTransaction.Build and ensure you validate tx, returning an error (not panicking) so subsequent uses like tx.ID are safe.

access/backends/extended/backend_scheduled_transactions.go (1)
370-373: ⚠️ Potential issue | 🟡 Minor

Update the `expand` doc comment to match current behavior.

The comment still describes executed-only/result-only behavior and “no error returns,” but the function also expands handler contracts, handles failed transactions, and returns errors in multiple branches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@access/backends/extended/backend_scheduled_transactions.go` around lines 370 - 373, Update the doc comment for the function expand to accurately describe current behavior: state that it enriches scheduled transactions with transaction results when available, expands handler contract information, properly handles and annotates failed transactions, and can return errors from multiple branches (rather than "no error returns expected"); locate the comment immediately above the expand function and replace the outdated lines with a concise summary listing these behaviors and the fact that errors may be returned.

model/access/scheduled_transaction.go (1)
27-33: ⚠️ Potential issue | 🟠 Major

Avoid panicking in enum `String()` fallbacks.

Both `String()` methods panic on unknown values. Return a safe fallback string instead to avoid process crashes on unexpected enum states.

Suggested fix

 func (s ScheduledTransactionStatus) String() string {
 	if str, ok := scheduledTransactionStatusStrings[s]; ok {
 		return str
 	}
-	panic(fmt.Sprintf("unknown scheduled transaction status: %d", s))
+	return fmt.Sprintf("unknown(%d)", s)
 }
 ...
 func (p ScheduledTransactionPriority) String() string {
 	if str, ok := scheduledTransactionPriorityStrings[p]; ok {
 		return str
 	}
-	panic(fmt.Sprintf("unknown scheduled transaction priority: %d", p))
+	return fmt.Sprintf("unknown(%d)", p)
 }

As per coding guidelines, “Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented”.
Also applies to: 68-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model/access/scheduled_transaction.go` around lines 27 - 33, The ScheduledTransactionStatus.String() method currently panics on unknown values; replace the panic with a safe fallback return (e.g. return fmt.Sprintf("ScheduledTransactionStatus(%d)", s)) and ensure fmt is imported/used, and apply the same change to the other enum String() method(s) in this file (the ones that currently use a map lookup like scheduledTransactionStatusStrings and panic on miss) so unknown enum values produce a descriptive string instead of crashing.
🧹 Nitpick comments (1)
utils/visited/visited.go (1)
11-15: Consider returning a pointer from constructor for consistency.

The constructor returns a value type while all methods use pointer receivers. Although this works because the map is a reference type, returning `*Visited[T]` would be more idiomatic and consistent with the pointer receiver pattern.

♻️ Optional: Return pointer from constructor

-func New[T comparable]() Visited[T] {
-	return Visited[T]{
+func New[T comparable]() *Visited[T] {
+	return &Visited[T]{
 		m: make(map[T]struct{}),
 	}
 }
Verify each finding against the current code and only fix it if needed. In `@utils/visited/visited.go` around lines 11 - 15, Change the New constructor to return a pointer to Visited[T] instead of a value: update the function signature New[T comparable]() to return *Visited[T] and return &Visited[T]{ m: make(map[T]struct{}) }; this aligns with the existing pointer-receiver methods on type Visited and ensures consistent usage of New and methods like (v *Visited) Add / Has.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@access/backends/extended/backend_scheduled_transactions.go`:
- Around line 287-288: The error string wrongly mentions "creation block
timestamp" while this branch is looking up the cancellation completion
timestamp; update the fmt.Errorf message in the cancellation-path where the
lookup occurs to say something like "failed to get cancellation completion
timestamp for scheduled tx %d: %w" (reference the return statement that uses
fmt.Errorf and the tx variable/tx.ID in backend_scheduled_transactions.go).
- Around line 440-452: Guard the returned account from
b.scriptExecutor.GetAccountAtBlockHeight before accessing account.Contracts:
after calling GetAccountAtBlockHeight (with ctx, tx.TransactionHandlerOwner,
latest.Height) check if account == nil and return a classified error
(wrap/contextualize the error indicating "nil account returned for tx handler
<tx.TransactionHandlerOwner>" or a specific sentinel error) instead of
dereferencing; keep the existing error wrapping for the GetAccountAtBlockHeight
and transactionHandlerContract calls and ensure you still look up contractID via
transactionHandlerContract(tx.TransactionHandlerTypeIdentifier) only after the
nil-account guard so accessing account.Contracts[contractID] cannot panic.
In `@engine/access/rest/experimental/models/model_scheduled_transaction.go`:
- Around line 42-43: Update the comment for the IsPlaceholder field in
model_scheduled_transaction.go to remove the stale references to specific fields
(block_height, transaction_id, tx_index, event_index) and instead use a generic,
accurate description: state that IsPlaceholder is true when the scheduled
transaction was created during bootstrapping (not from a protocol event) and
that transaction location/details are omitted in that case; reference the
IsPlaceholder field name so the maintainer can find and replace the outdated
comment.
In `@storage/indexes/scheduled_transactions.go`:
- Around line 261-266: The status checks currently call operation.RetrieveByKey
with rw.GlobalReader(), which can ignore uncommitted writes in the same batch
and allow duplicate transitions; change those calls to use the batch-aware
reader (e.g., rw.Reader() or the batch-aware reader API your RW type exposes) so
the read sees prior uncommitted writes in the same batch. Update every
occurrence where operation.RetrieveByKey is invoked with rw.GlobalReader() in
this file (including the blocks around the current snippet, and the other
instances you noted at the regions corresponding to lines ~298-303 and ~336-341)
to pass the batch-aware reader instead, ensuring the status check uses the
per-batch view before allowing a transition.
In `@utils/visited/visited.go`:
- Around line 21-27: The Visit method on Visited[T] can panic if the struct is
zero-valued because s.m may be nil; modify Visit (method Visit on type
Visited[T]) to lazily initialize s.m when nil (e.g., if s.m == nil { s.m =
make(map[T]struct{}) }) before writing, so callers can use zero-value Visited[T]
safely (alternatively, add a clear comment on New() requirement if you prefer
documenting instead of lazy init).
---
Duplicate comments:
In `@access/backends/extended/backend_scheduled_transactions.go`:
- Around line 370-373: Update the doc comment for the function expand to
accurately describe current behavior: state that it enriches scheduled
transactions with transaction results when available, expands handler contract
information, properly handles and annotates failed transactions, and can return
errors from multiple branches (rather than "no error returns expected"); locate
the comment immediately above the expand function and replace the outdated lines
with a concise summary listing these behaviors and the fact that errors may be
returned.
In `@engine/access/rest/experimental/models/scheduled_transaction.go`:
- Around line 90-95: The code currently emits a non-resolvable placeholder by
setting t.Expandable.HandlerContract = "handler_contract"; instead, stop
returning a fake token: when tx.HandlerContract is nil leave
t.Expandable.HandlerContract empty (or unset) and ensure t.HandlerContract
remains nil; update the block around tx.HandlerContract / t.HandlerContract /
t.Expandable.HandlerContract to remove the literal "handler_contract" assignment
so clients won't see an actionable but invalid relation.
- Around line 14-19: In ScheduledTransaction.Build, add a nil-check for the
incoming tx parameter before any dereference (e.g., at the top of Build) and
return a descriptive error if tx == nil; locate the method
ScheduledTransaction.Build and ensure you validate tx, returning an error (not
panicking) so subsequent uses like tx.ID are safe.
In `@model/access/scheduled_transaction.go`:
- Around line 27-33: The ScheduledTransactionStatus.String() method currently
panics on unknown values; replace the panic with a safe fallback return (e.g.
return fmt.Sprintf("ScheduledTransactionStatus(%d)", s)) and ensure fmt is
imported/used, and apply the same change to the other enum String() method(s) in
this file (the ones that currently use a map lookup like
scheduledTransactionStatusStrings and panic on miss) so unknown enum values
produce a descriptive string instead of crashing.
---
Nitpick comments:
In `@utils/visited/visited.go`:
- Around line 11-15: Change the New constructor to return a pointer to
Visited[T] instead of a value: update the function signature New[T comparable]()
to return *Visited[T] and return &Visited[T]{ m: make(map[T]struct{}) }; this
aligns with the existing pointer-receiver methods on type Visited and ensures
consistent usage of New and methods like (v *Visited) Add / Has.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- access/backends/extended/backend.go
- access/backends/extended/backend_base.go
- access/backends/extended/backend_scheduled_transactions.go
- access/backends/extended/backend_scheduled_transactions_test.go
- engine/access/rest/experimental/models/model_scheduled_transaction.go
- engine/access/rest/experimental/models/scheduled_transaction.go
- engine/access/rest/experimental/routes/scheduled_transactions_test.go
- model/access/scheduled_transaction.go
- storage/indexes/scheduled_transactions.go
- utils/visited/visited.go
- utils/visited/visited_test.go
✅ Files skipped from review due to trivial changes (1)
- access/backends/extended/backend_base.go
return nil, fmt.Errorf("failed to get creation block timestamp for scheduled tx %d: %w", tx.ID, err)
}
Fix the cancelled-path error message wording.
This branch reports “creation block timestamp” while handling cancellation completion timestamp lookup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@access/backends/extended/backend_scheduled_transactions.go` around lines 287
- 288, The error string wrongly mentions "creation block timestamp" while this
branch is looking up the cancellation completion timestamp; update the
fmt.Errorf message in the cancellation-path where the lookup occurs to say
something like "failed to get cancellation completion timestamp for scheduled tx
%d: %w" (reference the return statement that uses fmt.Errorf and the tx
variable/tx.ID in backend_scheduled_transactions.go).
account, err := b.scriptExecutor.GetAccountAtBlockHeight(ctx, tx.TransactionHandlerOwner, latest.Height)
if err != nil {
	return fmt.Errorf("failed to get account for tx handler %s: %w", tx.TransactionHandlerOwner, err)
}

contractID, err := transactionHandlerContract(tx.TransactionHandlerTypeIdentifier)
if err != nil {
	return fmt.Errorf("failed to get contract ID for tx handler %s: %w", tx.TransactionHandlerTypeIdentifier, err)
}
contract, ok := account.Contracts[contractID]
if !ok {
	return fmt.Errorf("contract %q not found in account %s", contractID, tx.TransactionHandlerOwner)
}
Guard account before accessing account.Contracts.
GetAccountAtBlockHeight can theoretically return (nil, nil); account.Contracts would panic in that case. Add a nil-account guard and return a classified error instead.
Suggested fix
account, err := b.scriptExecutor.GetAccountAtBlockHeight(ctx, tx.TransactionHandlerOwner, latest.Height)
if err != nil {
return fmt.Errorf("failed to get account for tx handler %s: %w", tx.TransactionHandlerOwner, err)
}
+if account == nil {
+ return fmt.Errorf("account %s not found at height %d: %w", tx.TransactionHandlerOwner, latest.Height, storage.ErrNotFound)
+}

As per coding guidelines, “Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@access/backends/extended/backend_scheduled_transactions.go` around lines 440
- 452, Guard the returned account from b.scriptExecutor.GetAccountAtBlockHeight
before accessing account.Contracts: after calling GetAccountAtBlockHeight (with
ctx, tx.TransactionHandlerOwner, latest.Height) check if account == nil and
return a classified error (wrap/contextualize the error indicating "nil account
returned for tx handler <tx.TransactionHandlerOwner>" or a specific sentinel
error) instead of dereferencing; keep the existing error wrapping for the
GetAccountAtBlockHeight and transactionHandlerContract calls and ensure you
still look up contractID via
transactionHandlerContract(tx.TransactionHandlerTypeIdentifier) only after the
nil-account guard so accessing account.Contracts[contractID] cannot panic.
// True if the scheduled transaction was created during bootstrapping based on the current chain state, not based on a protocol event. When true, block_height, transaction_id, tx_index, and event_index are absent.
IsPlaceholder bool `json:"is_placeholder,omitempty"`
Fix the stale field references in the IsPlaceholder comment.
The comment mentions fields (block_height, transaction_id, tx_index, event_index) that are not part of this response model, which makes the contract confusing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@engine/access/rest/experimental/models/model_scheduled_transaction.go` around
lines 42 - 43, Update the comment for the IsPlaceholder field in
model_scheduled_transaction.go to remove the stale references to specific fields
(block_height, transaction_id, tx_index, event_index) and instead use a generic,
accurate description: state that IsPlaceholder is true when the scheduled
transaction was created during bootstrapping (not from a protocol event) and
that transaction location/details are omitted in that case; reference the
IsPlaceholder field name so the maintainer can find and replace the outdated
comment.
if err := operation.RetrieveByKey(rw.GlobalReader(), key, &tx); err != nil {
	return fmt.Errorf("could not retrieve scheduled transaction %d: %w", scheduledTxID, err)
}
if tx.Status != access.ScheduledTxStatusScheduled {
	return fmt.Errorf("tx %d already in terminal state %s: %w", scheduledTxID, tx.Status, storage.ErrInvalidStatusTransition)
}
Use a batch-visible read source for status transitions.
These checks read from rw.GlobalReader(), which can ignore uncommitted writes already queued in the same batch. If the same scheduled tx is transitioned twice before commit, the second transition can pass validation and overwrite the first terminal state.
Please read the current value from a batch-aware reader (or enforce per-batch transition dedupe before invoking these methods).
As per coding guidelines, “Treat all inputs as potentially byzantine and classify errors in a context-dependent manner; no code path is safe unless explicitly proven and documented”.
Also applies to: 298-303, 336-341
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@storage/indexes/scheduled_transactions.go` around lines 261 - 266, The status
checks currently call operation.RetrieveByKey with rw.GlobalReader(), which can
ignore uncommitted writes in the same batch and allow duplicate transitions;
change those calls to use the batch-aware reader (e.g., rw.Reader() or the
batch-aware reader API your RW type exposes) so the read sees prior uncommitted
writes in the same batch. Update every occurrence where operation.RetrieveByKey
is invoked with rw.GlobalReader() in this file (including the blocks around the
current snippet, and the other instances you noted at the regions corresponding
to lines ~298-303 and ~336-341) to pass the batch-aware reader instead, ensuring
the status check uses the per-batch view before allowing a transition.
func (s *Visited[T]) Visit(key T) bool {
	if _, ok := s.m[key]; ok {
		return true
	}
	s.m[key] = struct{}{}
	return false
}
Potential panic if zero-value Visited is used without calling New().
If a caller instantiates Visited[T]{} directly instead of using New(), the Visit() method will panic at line 25 when writing to a nil map. Consider either:
- Adding lazy initialization for defensive coding, or
- Documenting in the type's comment that `New()` must be used.
🛡️ Option 1: Add lazy initialization for defensive coding
func (s *Visited[T]) Visit(key T) bool {
+ if s.m == nil {
+ s.m = make(map[T]struct{})
+ }
if _, ok := s.m[key]; ok {
return true
}
s.m[key] = struct{}{}
return false
}📝 Option 2: Document initialization requirement
// Visited is a simple object that tracks whether or not a particular value has been visited.
// Use this when iterating over a collection and you need to know if a value has been encountered.
//
+// Use New() to create instances. The zero value is not safe to use.
+//
// CAUTION: Not concurrency safe.
type Visited[T comparable] struct {
m map[T]struct{}
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@utils/visited/visited.go` around lines 21 - 27, The Visit method on
Visited[T] can panic if the struct is zero-valued because s.m may be nil; modify
Visit (method Visit on type Visited[T]) to lazily initialize s.m when nil (e.g.,
if s.m == nil { s.m = make(map[T]struct{}) }) before writing, so callers can use
zero-value Visited[T] safely (alternatively, add a clear comment on New()
requirement if you prefer documenting instead of lazy init).
fxamacker
left a comment
Nice! I mostly focused on non-test code and left some suggestions.
// scheduledTxCursor is the JSON shape of a pagination cursor (opaque to clients).
type scheduledTxCursor struct {
	ID uint64 `json:"i"`
-	ID uint64 `json:"i"`
+	ID uint64 `json:"id"`
type GetScheduledTransactions struct {
	Address *flow.Address
	Limit uint32
	Cursor *accessmodel.ScheduledTransactionCursor
	Filter extended.ScheduledTransactionFilter
	ExpandOptions extended.ScheduledTransactionExpandOptions
}
Address isn't used, so it can be removed.
 type GetScheduledTransactions struct {
-	Address *flow.Address
 	Limit uint32
 	Cursor *accessmodel.ScheduledTransactionCursor
 	Filter extended.ScheduledTransactionFilter
 	ExpandOptions extended.ScheduledTransactionExpandOptions
 }
address, err := parser.ParseAddress(r.GetVar("address"), r.Chain)
if err != nil {
	return GetAccountScheduledTransactions{}, err
}
Should we check if the provided req.Filter.TransactionHandlerOwner is the same as the provided address?
Maybe we can return bad request if they are different.
type TransactionSchedulerScheduledEvent struct {
	ID uint64
	Priority uint8
	Timestamp cadence.UFix64
	ExecutionEffort uint64
	Fees cadence.UFix64
	TransactionHandlerOwner flow.Address
	TransactionHandlerTypeIdentifier string
	TransactionHandlerUUID uint64
	TransactionHandlerPublicPath string
}
Maybe use regular Go types so the caller doesn't need to convert them.
 type TransactionSchedulerScheduledEvent struct {
 	ID uint64
 	Priority uint8
-	Timestamp cadence.UFix64
+	Timestamp uint64
 	ExecutionEffort uint64
-	Fees cadence.UFix64
+	Fees uint64
 	TransactionHandlerOwner flow.Address
 	TransactionHandlerTypeIdentifier string
 	TransactionHandlerUUID uint64
 	TransactionHandlerPublicPath string
 }
// scheduledTxPrimaryKeyLen is [code(1)][~id(8)] = 9 bytes
scheduledTxPrimaryKeyLen = 1 + heightLen
// scheduledTxByAddrKeyLen is [code(1)][address(8)][~id(8)] = 17 bytes
scheduledTxByAddrKeyLen = 1 + flow.AddressLength + heightLen
I think we intend to use scheduled transaction ID in the key, not height.
+	idLen = 8
 	// scheduledTxPrimaryKeyLen is [code(1)][~id(8)] = 9 bytes
-	scheduledTxPrimaryKeyLen = 1 + heightLen
+	scheduledTxPrimaryKeyLen = 1 + idLen
 	// scheduledTxByAddrKeyLen is [code(1)][address(8)][~id(8)] = 17 bytes
-	scheduledTxByAddrKeyLen = 1 + flow.AddressLength + heightLen
+	scheduledTxByAddrKeyLen = 1 + flow.AddressLength + idLen
Add new storage, indexer and APIs for Scheduled Transactions to the extended API.