diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md
index 95075fae8df..3e1913b3135 100644
--- a/docs/CHANGELOG.md
+++ b/docs/CHANGELOG.md
@@ -29,9 +29,9 @@ Users can select any of the artifacts depending on their testing needs for their
#### `fill`
-- ✨ Add the `ported_from` test marker to track Python test cases that were converted from static fillers in [ethereum/tests](https://github.com/ethereum/tests) repository [#1590](https://github.com/ethereum/execution-spec-tests/pull/1590).
-- ✨ Add a new pytest plugin, `ported_tests`, that lists the static fillers and PRs from `ported_from` markers for use in the coverage Github Workflow [#1634](https://github.com/ethereum/execution-spec-tests/pull/1634).
-- ✨ Enable two-phase filling of fixtures with shared pre-allocation groups and add a `BlockchainEngineReorgFixture` format [#1606](https://github.com/ethereum/execution-spec-tests/pull/1706).
+- ✨ Add the `ported_from` test marker to track Python test cases that were converted from static fillers in the [ethereum/tests](https://github.com/ethereum/tests) repository ([#1590](https://github.com/ethereum/execution-spec-tests/pull/1590)).
+- ✨ Add a new pytest plugin, `ported_tests`, that lists the static fillers and PRs from `ported_from` markers for use in the coverage GitHub workflow ([#1634](https://github.com/ethereum/execution-spec-tests/pull/1634)).
+- ✨ Enable two-phase filling of fixtures with pre-allocation groups and add a `BlockchainEngineXFixture` format ([#1706](https://github.com/ethereum/execution-spec-tests/pull/1706), [#1760](https://github.com/ethereum/execution-spec-tests/pull/1760)).
- 🔀 Refactor: Encapsulate `fill`'s fixture output options (`--output`, `--flat-output`, `--single-fixture-per-file`) into a `FixtureOutput` class ([#1471](https://github.com/ethereum/execution-spec-tests/pull/1471),[#1612](https://github.com/ethereum/execution-spec-tests/pull/1612)).
- ✨ Don't warn about a "high Transaction gas_limit" for `zkevm` tests ([#1598](https://github.com/ethereum/execution-spec-tests/pull/1598)).
- 🐞 `fill` no longer writes generated fixtures into an existing, non-empty output directory; it must now be empty or `--clean` must be used to delete it first ([#1608](https://github.com/ethereum/execution-spec-tests/pull/1608)).
diff --git a/docs/library/cli/extract_config.md b/docs/library/cli/extract_config.md
index d4a49d536e6..3a4afae6599 100644
--- a/docs/library/cli/extract_config.md
+++ b/docs/library/cli/extract_config.md
@@ -120,7 +120,7 @@ Since Hive doesn't directly expose container IDs, the tool uses a detection mech
The tool supports:
- Individual fixture JSON files (BlockchainFixture format)
-- SharedPreStateGroup JSON files
+- PreAllocGroup JSON files
- Directories containing multiple fixture files
## Troubleshooting
diff --git a/docs/navigation.md b/docs/navigation.md
index fedffde85b3..ecb5b334254 100644
--- a/docs/navigation.md
+++ b/docs/navigation.md
@@ -42,7 +42,7 @@
* [State Tests](running_tests/test_formats/state_test.md)
* [Blockchain Tests](running_tests/test_formats/blockchain_test.md)
* [Blockchain Engine Tests](running_tests/test_formats/blockchain_test_engine.md)
- * [Blockchain Engine Reorg Tests](running_tests/test_formats/blockchain_test_engine_reorg.md)
+ * [Blockchain Engine X Tests](running_tests/test_formats/blockchain_test_engine_x.md)
* [EOF Tests](running_tests/test_formats/eof_test.md)
* [Transaction Tests](running_tests/test_formats/transaction_test.md)
* [Common Types](running_tests/test_formats/common_types.md)
diff --git a/docs/running_tests/test_formats/blockchain_test_engine_reorg.md b/docs/running_tests/test_formats/blockchain_test_engine_x.md
similarity index 56%
rename from docs/running_tests/test_formats/blockchain_test_engine_reorg.md
rename to docs/running_tests/test_formats/blockchain_test_engine_x.md
index 863b073fd87..daa9004662a 100644
--- a/docs/running_tests/test_formats/blockchain_test_engine_reorg.md
+++ b/docs/running_tests/test_formats/blockchain_test_engine_x.md
@@ -1,30 +1,30 @@
-# Blockchain Engine Reorg Tests
+# Blockchain Engine X Tests
-The Blockchain Engine Reorg Test fixture format tests are included in the fixtures subdirectory `blockchain_tests_engine_reorg`, and use Engine API directives with optimized shared pre-allocation for improved execution performance.
+The Blockchain Engine X Test fixture format tests are included in the fixtures subdirectory `blockchain_tests_engine_x`, and use Engine API directives with optimized pre-allocation groups for improved execution performance.
-These are produced by the `StateTest` and `BlockchainTest` test specs when using the `--generate-shared-pre` and `--use-shared-pre` flags.
+These are produced by the `StateTest` and `BlockchainTest` test specs when using the `--generate-pre-alloc-groups` and `--use-pre-alloc-groups` flags.
## Description
-The Blockchain Engine Reorg Test fixture format is an optimized variant of the [Blockchain Engine Test](./blockchain_test_engine.md) format designed for large-scale test execution with performance optimizations.
+The Blockchain Engine X Test fixture format is an optimized variant of the [Blockchain Engine Test](./blockchain_test_engine.md) format designed for large-scale test execution with performance optimizations.
-It uses the Engine API to test block validation and consensus rules while leveraging **shared pre-allocation state** to significantly reduce test execution time and resource usage. Tests are grouped by their initial state (fork + environment + pre-allocation) and share common genesis states through blockchain reorganization.
+It uses the Engine API to test block validation and consensus rules while leveraging **pre-allocation groups** to significantly reduce test execution time and resource usage. Tests are grouped by their initial state (fork + environment + pre-allocation). Each group is executed against the same client instance using a common genesis state.
The key optimization is that **clients need only be started once per group** instead of once per test (as in the original engine fixture format), dramatically improving execution performance for large test suites.
-Instead of including large pre-allocation state in each test fixture, this format references a shared pre-allocation folder (`pre_alloc`) which includes all different pre-allocation combinations used for any test fixture group.
+Instead of including large pre-allocation state in each test fixture, this format references a pre-allocation groups folder (`pre_alloc`) which contains all different pre-allocation combinations organized by group.
-A single JSON fixture file is composed of a JSON object where each key-value pair is a different [`ReorgFixture`](#reorgfixture) test object, with the key string representing the test name.
+A single JSON fixture file is composed of a JSON object where each key-value pair is a different [`BlockchainTestEngineXFixture`](#blockchaintestenginexfixture) test object, with the key string representing the test name.
The JSON file path plus the test name are used as the unique test identifier.
-## Shared Pre-Allocation File
+## Pre-Allocation Groups Folder
-The `blockchain_tests_engine_reorg` directory contains a special directory `pre_alloc` that stores shared pre-allocation state file used by all tests in this format, one per pre-allocation group with the name of the pre-alloc hash. This folder is essential for test execution and must be present alongside the test fixtures.
+The `blockchain_tests_engine_x` directory contains a special directory `pre_alloc` that stores the pre-allocation group files used by all tests in this format, one file per pre-allocation group, named after the group's pre-alloc hash. This folder is essential for test execution and must be present alongside the test fixtures.
-### Pre-Allocation File Structure
+### Pre-Allocation Group File Structure
-Each file in the `pre_alloc` folder corresponds to a pre-allocation hash to shared state groups:
+Each file in the `pre_alloc` folder corresponds to a pre-allocation group identified by a hash:
```json
{
@@ -37,28 +37,28 @@ Each file in the `pre_alloc` folder corresponds to a pre-allocation hash to shar
}
```
-#### SharedPreStateGroup Fields
+#### Pre-Allocation Group Fields
-- **`test_count`**: Number of tests sharing this pre-allocation group
-- **`pre_account_count`**: Number of accounts in the shared pre-allocation state
-- **`testIds`**: Array of test identifiers that use this shared state
+- **`test_count`**: Number of tests in this pre-allocation group
+- **`pre_account_count`**: Number of accounts in the pre-allocation group
+- **`testIds`**: Array of test identifiers that belong to this group
- **`network`**: Fork name (e.g., "Prague", "Cancun")
- **`environment`**: Complete [`Environment`](./common_types.md#environment) object with execution context
-- **`pre`**: Shared [`Alloc`](./common_types.md#alloc-mappingaddressaccount) object containing initial account states
+- **`pre`**: Pre-allocation group [`Alloc`](./common_types.md#alloc-mappingaddressaccount) object containing initial account states
## Consumption
-For each [`ReorgFixture`](#reorgfixture) test object in the JSON fixture file, perform the following steps:
+For each [`BlockchainTestEngineXFixture`](#blockchaintestenginexfixture) test object in the JSON fixture file, perform the following steps:
-1. **Load Shared Pre-Allocation**:
+1. **Load Pre-Allocation Group**:
- Read the appropriate file from the `pre_alloc` folder in the same directory
- - Locate the shared state group using [`preHash`](#-prehash-string)
- - Extract the `pre` allocation and `environment` from the shared group
+ - Locate the pre-allocation group using [`preHash`](#-prehash-string)
+ - Extract the `pre` allocation and `environment` from the group
2. **Initialize Client**:
- Use [`network`](#-network-fork) to configure the execution fork schedule
- - Use the shared `pre` allocation as the starting state
- - Use the shared `environment` as the execution context
+ - Use the pre-allocation group's `pre` allocation as the starting state
+ - Use the pre-allocation group's `environment` as the execution context
- Use [`genesisBlockHeader`](#-genesisblockheader-fixtureheader) as the genesis block header
3. **Execute Engine API Sequence**:
@@ -70,13 +70,13 @@ For each [`ReorgFixture`](#reorgfixture) test object in the JSON fixture file, p
4. **Verify Final State**:
- Compare the final chain head against [`lastblockhash`](#-lastblockhash-hash)
- If [`postStateDiff`](#-poststatediff-optionalalloc) is present:
- - Apply the state differences to the shared pre-allocation
+ - Apply the state differences to the pre-allocation group
- Verify the resulting state matches the client's final state
- If `post` field were present (not typical), verify it directly
## Structures
-### `ReorgFixture`
+### `BlockchainTestEngineXFixture`
#### - `network`: [`Fork`](./common_types.md#fork)
@@ -88,11 +88,11 @@ This field is going to be replaced by the value contained in `config.network`.
#### - `preHash`: `string`
-Hash identifier referencing a shared pre-allocation group in the `pre_alloc` folder. This hash uniquely identifies the combination of fork, environment, and pre-allocation state shared by multiple tests.
+Hash identifier referencing a pre-allocation group in the `pre_alloc` folder. This hash uniquely identifies the combination of fork, environment, and pre-allocation state that defines the group.
#### - `genesisBlockHeader`: [`FixtureHeader`](./blockchain_test.md#fixtureheader)
-Genesis block header. The state root in this header must match the state root calculated from the shared pre-allocation referenced by [`preHash`](#-prehash-string).
+Genesis block header. The state root in this header must match the state root calculated from the pre-allocation group referenced by [`preHash`](#-prehash-string).
#### - `engineNewPayloads`: [`List`](./common_types.md#list)`[`[`FixtureEngineNewPayload`](#fixtureenginenewpayload)`]`
@@ -100,7 +100,7 @@ List of `engine_newPayloadVX` directives to be processed after the genesis block
#### - `syncPayload`: [`Optional`](./common_types.md#optional)`[`[`FixtureEngineNewPayload`](#fixtureenginenewpayload)`]`
-Optional synchronization payload used for blockchain reorganization scenarios. When present, this payload is typically used to sync the chain to a specific state before or after the main payload sequence.
+Optional synchronization payload. When present, this payload is typically used to sync the chain to a specific state before or after the main payload sequence.
#### - `lastblockhash`: [`Hash`](./common_types.md#hash)
@@ -108,11 +108,11 @@ Hash of the last valid block after all payloads have been processed, or the gene
#### - `postStateDiff`: [`Optional`](./common_types.md#optional)`[`[`Alloc`](./common_types.md#alloc-mappingaddressaccount)`]`
-State differences from the shared pre-allocation state after test execution. This optimization stores only the accounts that changed, were created, or were deleted during test execution, rather than the complete final state.
+State differences from the pre-allocation group after test execution. This optimization stores only the accounts that changed, were created, or were deleted during test execution, rather than the complete final state.
To reconstruct the final state:
-1. Start with the shared pre-allocation from the `pre_alloc` folder
+1. Start with the pre-allocation group from the `pre_alloc` folder
2. Apply the changes in `postStateDiff`:
- **Modified accounts**: Replace existing accounts with new values
- **New accounts**: Add accounts not present in pre-allocation
@@ -120,7 +120,7 @@ To reconstruct the final state:
#### - `config`: [`FixtureConfig`](#fixtureconfig)
-Chain configuration object to be applied to the client running the blockchain engine reorg test.
+Chain configuration object to be applied to the client running the Blockchain Engine X test.
### `FixtureConfig`
@@ -138,8 +138,7 @@ Engine API payload structure identical to the one defined in [Blockchain Engine
## Usage Notes
-- This format is only generated when using `--generate-shared-pre` and `--use-shared-pre` flags
+- This format is only generated when using `--generate-pre-alloc-groups` and `--use-pre-alloc-groups` flags
- The `pre_alloc` folder is essential and must be distributed with the test fixtures
- Tests are grouped by identical (fork + environment + pre-allocation) combinations
- The format is optimized for Engine API testing (post-Paris forks)
-- Reorganization scenarios are supported through the `forkChoiceUpdate` mechanism
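The state-reconstruction steps documented above (start from the group's `pre`, then apply `postStateDiff`) can be sketched as a small helper. This is an illustrative sketch, not code from the repository: the `reconstruct_final_state` name and the `<hash>.json` file layout are assumptions based on the renamed doc, and it assumes accounts set to `null` in the diff mark deletions.

```python
import json
from pathlib import Path
from typing import Any, Dict, Optional


def reconstruct_final_state(
    pre_alloc_dir: Path,
    pre_hash: str,
    post_state_diff: Optional[Dict[str, Any]],
) -> Dict[str, Any]:
    """Rebuild a fixture's final state from its pre-allocation group.

    Hypothetical helper: loads the group file named after the pre-alloc
    hash, then applies the diff. Accounts mapped to ``None`` are treated
    as deleted; all other entries replace or add accounts.
    """
    group = json.loads((pre_alloc_dir / f"{pre_hash}.json").read_text())
    # Start from the group's shared ``pre`` allocation.
    state: Dict[str, Any] = dict(group["pre"])
    for address, account in (post_state_diff or {}).items():
        if account is None:
            state.pop(address, None)  # deleted account
        else:
            state[address] = account  # modified or newly created account
    return state
```

The resulting mapping can then be compared against the client's final state, as step 4 of the consumption procedure describes.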
diff --git a/pyproject.toml b/pyproject.toml
index 99b703c18d2..3839bbc51dd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -20,7 +20,7 @@ classifiers = [
dependencies = [
"click>=8.1.0,<9",
"ethereum-execution==1.17.0rc6.dev1",
- "hive.py @ git+https://github.com/marioevz/hive.py",
+ "hive-py",
"ethereum-spec-evm-resolver",
"gitpython>=3.1.31,<4",
"PyJWT>=2.3.0,<3",
@@ -148,3 +148,4 @@ ignore-words-list = "ingenuous"
[tool.uv.sources]
ethereum-spec-evm-resolver = { git = "https://github.com/petertdavies/ethereum-spec-evm-resolver", rev = "0e5609737ce4f86dc98cca1a5cf0eb64b8cddef2" }
+hive-py = { git = "https://github.com/marioevz/hive.py", rev = "582703e2f94b4d5e61ae495d90d684852c87a580" }
diff --git a/pytest-framework.ini b/pytest-framework.ini
index 03e001e1beb..2842bb7838b 100644
--- a/pytest-framework.ini
+++ b/pytest-framework.ini
@@ -13,7 +13,7 @@ addopts =
--ignore=src/pytest_plugins/consume/test_cache.py
--ignore=src/pytest_plugins/consume/direct/
--ignore=src/pytest_plugins/consume/direct/test_via_direct.py
- --ignore=src/pytest_plugins/consume/hive_simulators/
- --ignore=src/pytest_plugins/consume/hive_simulators/engine/test_via_engine.py
- --ignore=src/pytest_plugins/consume/hive_simulators/rlp/test_via_rlp.py
+ --ignore=src/pytest_plugins/consume/simulators/
+ --ignore=src/pytest_plugins/consume/simulators/engine/test_via_engine.py
+ --ignore=src/pytest_plugins/consume/simulators/rlp/test_via_rlp.py
--ignore=src/pytest_plugins/execute/test_recover.py
diff --git a/src/cli/extract_config.py b/src/cli/extract_config.py
index 7111af853b3..503c352eb1c 100755
--- a/src/cli/extract_config.py
+++ b/src/cli/extract_config.py
@@ -21,9 +21,9 @@
from ethereum_test_fixtures import BlockchainFixtureCommon
from ethereum_test_fixtures.blockchain import FixtureHeader
from ethereum_test_fixtures.file import Fixtures
-from ethereum_test_fixtures.shared_alloc import SharedPreStateGroup
+from ethereum_test_fixtures.pre_alloc_groups import PreAllocGroup
from ethereum_test_forks import Fork
-from pytest_plugins.consume.hive_simulators.ruleset import ruleset
+from pytest_plugins.consume.simulators.helpers.ruleset import ruleset
def get_docker_containers() -> set[str]:
@@ -111,9 +111,9 @@ def create_genesis_from_fixture(fixture_path: Path) -> Tuple[FixtureHeader, Allo
alloc = fixture.pre
chain_id = int(fixture.config.chain_id)
else:
- shared_alloc = SharedPreStateGroup.model_validate(fixture_json)
- genesis = shared_alloc.genesis # type: ignore
- alloc = shared_alloc.pre
+ pre_alloc_group = PreAllocGroup.model_validate(fixture_json)
+ genesis = pre_alloc_group.genesis # type: ignore
+ alloc = pre_alloc_group.pre
return genesis, alloc, chain_id
diff --git a/src/cli/pytest_commands/consume.py b/src/cli/pytest_commands/consume.py
index 63115ec0d60..68979607fd5 100644
--- a/src/cli/pytest_commands/consume.py
+++ b/src/cli/pytest_commands/consume.py
@@ -13,14 +13,14 @@
class ConsumeCommand(PytestCommand):
"""Pytest command for consume operations."""
- def __init__(self, command_paths: List[Path], is_hive: bool = False):
+ def __init__(self, command_paths: List[Path], is_hive: bool = False, command_name: str = ""):
"""Initialize consume command with paths and processors."""
processors: List[ArgumentProcessor] = [HelpFlagsProcessor("consume")]
if is_hive:
processors.extend(
[
- HiveEnvironmentProcessor(),
+ HiveEnvironmentProcessor(command_name=command_name),
ConsumeCommandProcessor(is_hive=True),
]
)
@@ -54,13 +54,15 @@ def get_command_paths(command_name: str, is_hive: bool) -> List[Path]:
base_path = Path("src/pytest_plugins/consume")
if command_name == "hive":
commands = ["rlp", "engine"]
+ command_paths = [base_path / "simulators" / cmd / f"test_via_{cmd}.py" for cmd in commands]
+ elif command_name in ["engine", "enginex"]:
+ command_paths = [base_path / "simulators" / "hive_tests" / "test_via_engine.py"]
+ elif command_name == "rlp":
+ command_paths = [base_path / "simulators" / "hive_tests" / "test_via_rlp.py"]
+ elif command_name == "direct":
+ command_paths = [base_path / "direct" / "test_via_direct.py"]
else:
- commands = [command_name]
-
- command_paths = [
- base_path / ("hive_simulators" if is_hive else "") / cmd / f"test_via_{cmd}.py"
- for cmd in commands
- ]
+ raise ValueError(f"Unexpected command: {command_name}.")
return command_paths
@@ -86,7 +88,7 @@ def decorator(func: Callable[..., Any]) -> click.Command:
@common_pytest_options
@functools.wraps(func)
def command(pytest_args: List[str], **kwargs) -> None:
- consume_cmd = ConsumeCommand(command_paths, is_hive)
+ consume_cmd = ConsumeCommand(command_paths, is_hive, command_name)
consume_cmd.execute(list(pytest_args))
return command
@@ -108,13 +110,45 @@ def rlp() -> None:
@consume_command(is_hive=True)
def engine() -> None:
- """Client consumes via the Engine API."""
+ """Client consumes Engine Fixtures via the Engine API."""
pass
+@consume.command(
+ name="enginex",
+ help="Client consumes Engine X Fixtures via the Engine API.",
+ context_settings={"ignore_unknown_options": True},
+)
+@click.option(
+ "--enginex-fcu-frequency",
+ type=int,
+ default=1,
+ help=(
+ "Control forkchoice update frequency for enginex simulator. "
+ "0=disable FCUs, 1=FCU every test (default), N=FCU every Nth test per "
+ "pre-allocation group."
+ ),
+)
+@common_pytest_options
+def enginex(enginex_fcu_frequency: int, pytest_args: List[str], **_kwargs) -> None:
+ """Client consumes Engine X Fixtures via the Engine API."""
+ command_name = "enginex"
+ command_paths = get_command_paths(command_name, is_hive=True)
+
+ # Validate the frequency parameter
+ if enginex_fcu_frequency < 0:
+ raise click.BadParameter("FCU frequency must be non-negative")
+
+ # Add the FCU frequency to pytest args as a custom config option
+ pytest_args_with_fcu = [f"--enginex-fcu-frequency={enginex_fcu_frequency}"] + list(pytest_args)
+
+ consume_cmd = ConsumeCommand(command_paths, is_hive=True, command_name=command_name)
+ consume_cmd.execute(pytest_args_with_fcu)
+
+
@consume_command(is_hive=True)
def hive() -> None:
- """Client consumes via all available hive methods (rlp, engine)."""
+ """Client consumes via rlp & engine hive methods."""
pass
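The `enginex` command above validates `--enginex-fcu-frequency` and forwards it to pytest as a config option. The forwarding pattern can be condensed into a standalone sketch (the `build_pytest_args` helper is hypothetical; the real command does this inline before constructing `ConsumeCommand`):

```python
from typing import List


def build_pytest_args(fcu_frequency: int, pytest_args: List[str]) -> List[str]:
    """Prepend the FCU-frequency config option to user-supplied pytest args.

    Semantics mirror the ``enginex`` help text: 0 disables forkchoice
    updates, 1 sends one per test (default), N sends one every Nth test
    per pre-allocation group.
    """
    if fcu_frequency < 0:
        raise ValueError("FCU frequency must be non-negative")
    return [f"--enginex-fcu-frequency={fcu_frequency}"] + list(pytest_args)
```

Passing the value through the pytest argument list keeps the click layer thin and lets the simulator plugin read the option via pytest's own config machinery.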
diff --git a/src/cli/pytest_commands/fill.py b/src/cli/pytest_commands/fill.py
index 997971224ba..d9ea069ccfb 100644
--- a/src/cli/pytest_commands/fill.py
+++ b/src/cli/pytest_commands/fill.py
@@ -23,19 +23,19 @@ def __init__(self):
def create_executions(self, pytest_args: List[str]) -> List[PytestExecution]:
"""
- Create execution plan that supports two-phase shared pre-state generation.
+ Create execution plan that supports two-phase pre-allocation group generation.
Returns single execution for normal filling, or two-phase execution
- when --gen-shared-pre is specified.
+ when --generate-pre-alloc-groups is specified.
"""
processed_args = self.process_arguments(pytest_args)
# Check if we need two-phase execution
- if "--generate-shared-pre" in processed_args:
+ if "--generate-pre-alloc-groups" in processed_args:
return self._create_two_phase_executions(processed_args)
- elif "--use-shared-pre" in processed_args:
- # Only phase 2: using existing shared pre-allocation state
- return self._create_single_phase_with_shared_alloc(processed_args)
+ elif "--use-pre-alloc-groups" in processed_args:
+ # Only phase 2: using existing pre-allocation groups
+ return self._create_single_phase_with_pre_alloc_groups(processed_args)
else:
# Normal single-phase execution
return [
@@ -46,8 +46,8 @@ def create_executions(self, pytest_args: List[str]) -> List[PytestExecution]:
]
def _create_two_phase_executions(self, args: List[str]) -> List[PytestExecution]:
- """Create two-phase execution: shared allocation generation + fixture filling."""
- # Phase 1: Shared allocation generation (clean and minimal output)
+ """Create two-phase execution: pre-allocation group generation + fixture filling."""
+ # Phase 1: Pre-allocation group generation (clean and minimal output)
phase1_args = self._create_phase1_args(args)
# Phase 2: Main fixture generation (full user options)
@@ -57,7 +57,7 @@ def _create_two_phase_executions(self, args: List[str]) -> List[PytestExecution]
PytestExecution(
config_file=self.config_file,
args=phase1_args,
- description="generating shared pre-allocation state",
+ description="generating pre-allocation groups",
),
PytestExecution(
config_file=self.config_file,
@@ -66,8 +66,8 @@ def _create_two_phase_executions(self, args: List[str]) -> List[PytestExecution]
),
]
- def _create_single_phase_with_shared_alloc(self, args: List[str]) -> List[PytestExecution]:
- """Create single execution using existing shared pre-allocation state."""
+ def _create_single_phase_with_pre_alloc_groups(self, args: List[str]) -> List[PytestExecution]:
+ """Create single execution using existing pre-allocation groups."""
return [
PytestExecution(
config_file=self.config_file,
@@ -76,13 +76,13 @@ def _create_single_phase_with_shared_alloc(self, args: List[str]) -> List[Pytest
]
def _create_phase1_args(self, args: List[str]) -> List[str]:
- """Create arguments for phase 1 (shared allocation generation)."""
+ """Create arguments for phase 1 (pre-allocation group generation)."""
# Start with all args, then remove what we don't want for phase 1
filtered_args = self._remove_unwanted_phase1_args(args)
# Add required phase 1 flags (with quiet output by default)
phase1_args = [
- "--generate-shared-pre",
+ "--generate-pre-alloc-groups",
"-qq", # Quiet pytest output by default (user -v/-vv/-vvv can override)
] + filtered_args
@@ -90,10 +90,10 @@ def _create_phase1_args(self, args: List[str]) -> List[str]:
def _create_phase2_args(self, args: List[str]) -> List[str]:
"""Create arguments for phase 2 (fixture filling)."""
- # Remove --generate-shared-pre and --clean, then add --use-shared-pre
- phase2_args = self._remove_generate_shared_pre_flag(args)
+ # Remove --generate-pre-alloc-groups and --clean, then add --use-pre-alloc-groups
+ phase2_args = self._remove_generate_pre_alloc_groups_flag(args)
phase2_args = self._remove_clean_flag(phase2_args)
- phase2_args = self._add_use_shared_pre_flag(phase2_args)
+ phase2_args = self._add_use_pre_alloc_groups_flag(phase2_args)
return phase2_args
def _remove_unwanted_phase1_args(self, args: List[str]) -> List[str]:
@@ -106,9 +106,9 @@ def _remove_unwanted_phase1_args(self, args: List[str]) -> List[str]:
"--quiet",
"-qq",
"--tb",
- # Shared allocation flags (we'll add our own)
- "--generate-shared-pre",
- "--use-shared-pre",
+ # Pre-allocation group flags (we'll add our own)
+ "--generate-pre-alloc-groups",
+ "--use-pre-alloc-groups",
}
filtered_args = []
@@ -132,17 +132,17 @@ def _remove_unwanted_phase1_args(self, args: List[str]) -> List[str]:
return filtered_args
- def _remove_generate_shared_pre_flag(self, args: List[str]) -> List[str]:
- """Remove --generate-shared-pre flag from argument list."""
- return [arg for arg in args if arg != "--generate-shared-pre"]
+ def _remove_generate_pre_alloc_groups_flag(self, args: List[str]) -> List[str]:
+ """Remove --generate-pre-alloc-groups flag from argument list."""
+ return [arg for arg in args if arg != "--generate-pre-alloc-groups"]
def _remove_clean_flag(self, args: List[str]) -> List[str]:
"""Remove --clean flag from argument list."""
return [arg for arg in args if arg != "--clean"]
- def _add_use_shared_pre_flag(self, args: List[str]) -> List[str]:
- """Add --use-shared-pre flag to argument list."""
- return args + ["--use-shared-pre"]
+ def _add_use_pre_alloc_groups_flag(self, args: List[str]) -> List[str]:
+ """Add --use-pre-alloc-groups flag to argument list."""
+ return args + ["--use-pre-alloc-groups"]
class PhilCommand(FillCommand):
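The phase-2 flag rewriting in `FillCommand` above boils down to a simple list transformation: drop `--generate-pre-alloc-groups` and `--clean`, append `--use-pre-alloc-groups`. A condensed sketch (simplified relative to the class, which splits this across three helper methods and also prunes output flags for phase 1):

```python
from typing import List

# Flags that must not carry over from phase 1 to phase 2.
_PHASE2_DROPPED = {"--generate-pre-alloc-groups", "--clean"}


def create_phase2_args(args: List[str]) -> List[str]:
    """Derive phase-2 (fixture-filling) args from the user's invocation."""
    kept = [a for a in args if a not in _PHASE2_DROPPED]
    return kept + ["--use-pre-alloc-groups"]
```

Keeping each transformation a pure function over the argument list, as the diff does, makes the two-phase plan easy to test without invoking pytest.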
diff --git a/src/cli/pytest_commands/processors.py b/src/cli/pytest_commands/processors.py
index 82089cc5135..7e3cf1bf99a 100644
--- a/src/cli/pytest_commands/processors.py
+++ b/src/cli/pytest_commands/processors.py
@@ -74,6 +74,16 @@ def _is_writing_to_stdout(self, args: List[str]) -> bool:
class HiveEnvironmentProcessor(ArgumentProcessor):
"""Processes Hive environment variables for consume commands."""
+ def __init__(self, command_name: str):
+ """
+ Initialize the processor with command name to determine plugin.
+
+ Args:
+ command_name: The command name to determine which plugin to load.
+
+ """
+ self.command_name = command_name
+
def process_args(self, args: List[str]) -> List[str]:
"""Convert hive environment variables into pytest flags."""
modified_args = args[:]
@@ -94,6 +104,14 @@ def process_args(self, args: List[str]) -> List[str]:
modified_args.extend(["-p", "pytest_plugins.pytest_hive.pytest_hive"])
+ if self.command_name == "engine":
+ modified_args.extend(["-p", "pytest_plugins.consume.simulators.engine.conftest"])
+ elif self.command_name == "enginex":
+ modified_args.extend(["-p", "pytest_plugins.consume.simulators.enginex.conftest"])
+ elif self.command_name == "rlp":
+ modified_args.extend(["-p", "pytest_plugins.consume.simulators.rlp.conftest"])
+ else:
+ raise ValueError(f"Unknown command name: {self.command_name}")
return modified_args
def _has_regex_or_sim_limit(self, args: List[str]) -> bool:
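The plugin selection added to `HiveEnvironmentProcessor.process_args` is an if/elif chain over the command name; the same dispatch could be expressed as a table lookup. A sketch of that alternative (not part of the patch; module paths copied from the diff):

```python
from typing import Dict, List

# Conftest plugin loaded for each hive consume sub-command.
PLUGIN_BY_COMMAND: Dict[str, str] = {
    "engine": "pytest_plugins.consume.simulators.engine.conftest",
    "enginex": "pytest_plugins.consume.simulators.enginex.conftest",
    "rlp": "pytest_plugins.consume.simulators.rlp.conftest",
}


def plugin_args(command_name: str) -> List[str]:
    """Return the ``-p`` pytest flags selecting the simulator plugin."""
    if command_name not in PLUGIN_BY_COMMAND:
        raise ValueError(f"Unknown command name: {command_name}")
    return ["-p", PLUGIN_BY_COMMAND[command_name]]
```

A table keeps adding a new simulator to a one-line change, though the explicit chain in the patch makes the unknown-command error path equally visible.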
diff --git a/src/cli/show_pre_alloc_group_stats.py b/src/cli/show_pre_alloc_group_stats.py
index 2533aad3e63..1f00d0d5a4e 100644
--- a/src/cli/show_pre_alloc_group_stats.py
+++ b/src/cli/show_pre_alloc_group_stats.py
@@ -1,4 +1,4 @@
-"""Script to display statistics about shared pre-allocation groups."""
+"""Script to display statistics about pre-allocation groups."""
from collections import defaultdict
from pathlib import Path
@@ -10,7 +10,7 @@
from rich.table import Table
from ethereum_test_base_types import CamelModel
-from ethereum_test_fixtures import SharedPreState
+from ethereum_test_fixtures import PreAllocGroups
def extract_test_module(test_id: str) -> str:
@@ -102,22 +102,22 @@ def calculate_size_distribution(
def analyze_pre_alloc_folder(folder: Path, verbose: int = 0) -> Dict:
"""Analyze pre-allocation folder and return statistics."""
- pre_state = SharedPreState.from_folder(folder)
+ pre_alloc_groups = PreAllocGroups.from_folder(folder)
# Basic stats
- total_groups = len(pre_state)
- total_tests = sum(group.test_count for group in pre_state.values())
- total_accounts = sum(group.pre_account_count for group in pre_state.values())
+ total_groups = len(pre_alloc_groups)
+ total_tests = sum(group.test_count for group in pre_alloc_groups.values())
+ total_accounts = sum(group.pre_account_count for group in pre_alloc_groups.values())
# Group by fork
fork_stats: Dict[str, Dict] = defaultdict(lambda: {"groups": 0, "tests": 0})
- for group in pre_state.values():
+ for group in pre_alloc_groups.values():
fork_stats[group.fork.name()]["groups"] += 1
fork_stats[group.fork.name()]["tests"] += group.test_count
# Group by test module
module_stats: Dict[str, Dict] = defaultdict(lambda: {"groups": set(), "tests": 0})
- for hash_key, group in pre_state.items():
+ for hash_key, group in pre_alloc_groups.items():
# Count tests per module in this group
module_test_count: defaultdict = defaultdict(int)
for test_id in group.test_ids:
@@ -135,7 +135,7 @@ def analyze_pre_alloc_folder(folder: Path, verbose: int = 0) -> Dict:
# Per-group details
group_details = []
- for hash_key, group in pre_state.items():
+ for hash_key, group in pre_alloc_groups.items():
group_details.append(
{
"hash": hash_key[:8] + "...", # Shortened hash for display
@@ -158,7 +158,7 @@ class SplitTestFunction(CamelModel):
split_test_functions: Dict[str, SplitTestFunction] = defaultdict(lambda: SplitTestFunction())
# Process all size-1 groups directly from pre_state
- for _hash_key, group_data in pre_state.items():
+ for _hash_key, group_data in pre_alloc_groups.items():
if group_data.test_count == 1: # Size-1 group
test_id = group_data.test_ids[0]
test_function = extract_test_function(test_id)
@@ -393,7 +393,7 @@ def display_stats(stats: Dict, console: Console, verbose: int = 0):
console.print("\n[bold yellow]Test Functions Split Across Multiple Groups[/bold yellow]")
console.print(
"[dim]These test functions create multiple size-1 groups (due to different "
- "forks/parameters), preventing shared pre-allocation optimization:[/dim]",
+ "forks/parameters), preventing pre-allocation group optimization:[/dim]",
highlight=False,
)
@@ -452,7 +452,7 @@ def display_stats(stats: Dict, console: Console, verbose: int = 0):
@click.argument(
"pre_alloc_folder",
type=click.Path(exists=True, path_type=Path),
- default="fixtures/blockchain_tests_engine_reorg/pre_alloc",
+ default="fixtures/blockchain_tests_engine_x/pre_alloc",
)
@click.option(
"--verbose",
@@ -462,10 +462,10 @@ def display_stats(stats: Dict, console: Console, verbose: int = 0):
)
def main(pre_alloc_folder: Path, verbose: int):
"""
- Display statistics about shared pre-allocation groups.
+ Display statistics about pre-allocation groups.
This script analyzes a pre_alloc folder generated by the test framework's
- shared pre-allocation optimization feature and displays:
+ pre-allocation group optimization feature and displays:
- Total number of groups, tests, and accounts
- Number of tests and accounts per group (tabulated)
@@ -473,8 +473,8 @@ def main(pre_alloc_folder: Path, verbose: int):
- Number of groups and tests per test module (tabulated)
The pre_alloc file is generated when running tests with the
- --generate-shared-pre and --use-shared-pre flags to optimize
- test execution by sharing pre-allocation state across tests.
+ --generate-pre-alloc-groups and --use-pre-alloc-groups flags to optimize
+ test execution by grouping tests with identical pre-allocation state.
"""
console = Console()
diff --git a/src/ethereum_test_fixtures/__init__.py b/src/ethereum_test_fixtures/__init__.py
index 76d049feeb3..83ef406f254 100644
--- a/src/ethereum_test_fixtures/__init__.py
+++ b/src/ethereum_test_fixtures/__init__.py
@@ -4,14 +4,14 @@
from .blockchain import (
BlockchainEngineFixture,
BlockchainEngineFixtureCommon,
- BlockchainEngineReorgFixture,
+ BlockchainEngineXFixture,
BlockchainFixture,
BlockchainFixtureCommon,
)
from .collector import FixtureCollector, TestInfo
from .consume import FixtureConsumer
from .eof import EOFFixture
-from .shared_alloc import SharedPreState, SharedPreStateGroup
+from .pre_alloc_groups import PreAllocGroup, PreAllocGroups
from .state import StateFixture
from .transaction import TransactionFixture
@@ -19,7 +19,7 @@
"BaseFixture",
"BlockchainEngineFixture",
"BlockchainEngineFixtureCommon",
- "BlockchainEngineReorgFixture",
+ "BlockchainEngineXFixture",
"BlockchainFixture",
"BlockchainFixtureCommon",
"EOFFixture",
@@ -27,8 +27,8 @@
"FixtureConsumer",
"FixtureFormat",
"LabeledFixtureFormat",
- "SharedPreState",
- "SharedPreStateGroup",
+ "PreAllocGroups",
+ "PreAllocGroup",
"StateFixture",
"TestInfo",
"TransactionFixture",
diff --git a/src/ethereum_test_fixtures/blockchain.py b/src/ethereum_test_fixtures/blockchain.py
index 14fc63082aa..2ed8f4faded 100644
--- a/src/ethereum_test_fixtures/blockchain.py
+++ b/src/ethereum_test_fixtures/blockchain.py
@@ -64,7 +64,7 @@ def validate_post_state_fields(self):
if mode == "after":
# Determine which fields to check
if alternate_field:
- # For reorg fixtures: check post_state vs post_state_diff
+ # For engine x fixtures: check post_state vs post_state_diff
field1_name, field2_name = "post_state", alternate_field
else:
# For standard fixtures: check post_state vs post_state_hash
@@ -523,7 +523,7 @@ class BlockchainEngineFixtureCommon(BaseFixture):
Base blockchain test fixture model for Engine API based execution.
Similar to BlockchainFixtureCommon but excludes the 'pre' field to avoid
- duplicating large pre-allocations when using shared genesis approaches.
+ duplicating large pre-allocations.
"""
fork: Fork = Field(..., alias="network")
@@ -560,22 +560,19 @@ class BlockchainEngineFixture(BlockchainEngineFixtureCommon):
@post_state_validator(alternate_field="post_state_diff")
-class BlockchainEngineReorgFixture(BlockchainEngineFixtureCommon):
+class BlockchainEngineXFixture(BlockchainEngineFixtureCommon):
"""
- Engine reorg specific test fixture information.
+ Engine X specific test fixture information.
- Uses shared pre-allocations and blockchain reorganization for efficient
+ Uses pre-allocation groups (and a single client instance) for efficient
test execution without client restarts.
"""
- format_name: ClassVar[str] = "blockchain_test_engine_reorg"
- description: ClassVar[str] = (
- "Tests that generate a blockchain test fixture for use with a shared pre-state and engine "
- "reorg execution."
- )
+ format_name: ClassVar[str] = "blockchain_test_engine_x"
+ description: ClassVar[str] = "Tests that generate a Blockchain Test Engine X fixture."
pre_hash: str
- """Hash of the shared pre-allocation group this test belongs to."""
+ """Hash of the pre-allocation group this test belongs to."""
post_state_diff: Alloc | None = None
"""State difference from genesis after test execution (efficiency optimization)."""
diff --git a/src/ethereum_test_fixtures/shared_alloc.py b/src/ethereum_test_fixtures/pre_alloc_groups.py
similarity index 70%
rename from src/ethereum_test_fixtures/shared_alloc.py
rename to src/ethereum_test_fixtures/pre_alloc_groups.py
index 5b098d0a5cc..afd004fda1f 100644
--- a/src/ethereum_test_fixtures/shared_alloc.py
+++ b/src/ethereum_test_fixtures/pre_alloc_groups.py
@@ -1,4 +1,4 @@
-"""Shared pre-allocation models for test fixture generation."""
+"""Pre-allocation group models for test fixture generation."""
from pathlib import Path
from typing import Any, Dict, List
@@ -13,12 +13,12 @@
from .blockchain import FixtureHeader
-class SharedPreStateGroup(CamelModel):
+class PreAllocGroup(CamelModel):
"""
- Shared pre-state group for tests with identical Environment and fork values.
+ Pre-allocation group for tests with identical Environment and fork values.
Groups tests by a hash of their fixture Environment and fork to enable
- shared pre-allocation optimization.
+ pre-allocation group optimization.
"""
model_config = {"populate_by_name": True} # Allow both field names and aliases
@@ -49,45 +49,43 @@ def genesis(self) -> FixtureHeader:
)
def to_file(self, file: Path) -> None:
- """Save SharedPreStateGroup to a file."""
+ """Save PreAllocGroup to a file."""
lock_file_path = file.with_suffix(".lock")
with FileLock(lock_file_path):
if file.exists():
with open(file, "r") as f:
- previous_shared_pre_state_group = SharedPreStateGroup.model_validate_json(
- f.read()
- )
- for account in previous_shared_pre_state_group.pre:
+ previous_pre_alloc_group = PreAllocGroup.model_validate_json(f.read())
+ for account in previous_pre_alloc_group.pre:
if account not in self.pre:
- self.pre[account] = previous_shared_pre_state_group.pre[account]
- self.pre_account_count += previous_shared_pre_state_group.pre_account_count
- self.test_count += previous_shared_pre_state_group.test_count
- self.test_ids.extend(previous_shared_pre_state_group.test_ids)
+ self.pre[account] = previous_pre_alloc_group.pre[account]
+ self.pre_account_count += previous_pre_alloc_group.pre_account_count
+ self.test_count += previous_pre_alloc_group.test_count
+ self.test_ids.extend(previous_pre_alloc_group.test_ids)
with open(file, "w") as f:
f.write(self.model_dump_json(by_alias=True, exclude_none=True, indent=2))
-class SharedPreState(EthereumTestRootModel):
- """Root model mapping pre-state hashes to test groups."""
+class PreAllocGroups(EthereumTestRootModel):
+ """Root model mapping pre-allocation group hashes to test groups."""
- root: Dict[str, SharedPreStateGroup]
+ root: Dict[str, PreAllocGroup]
def __setitem__(self, key: str, value: Any):
"""Set item in root dict."""
self.root[key] = value
@classmethod
- def from_folder(cls, folder: Path) -> "SharedPreState":
- """Create SharedPreState from a folder of pre-allocation files."""
+ def from_folder(cls, folder: Path) -> "PreAllocGroups":
+ """Create PreAllocGroups from a folder of pre-allocation files."""
data = {}
for file in folder.glob("*.json"):
with open(file) as f:
- data[file.stem] = SharedPreStateGroup.model_validate_json(f.read())
+ data[file.stem] = PreAllocGroup.model_validate_json(f.read())
return cls(root=data)
def to_folder(self, folder: Path) -> None:
- """Save SharedPreState to a folder of pre-allocation files."""
+ """Save PreAllocGroups to a folder of pre-allocation files."""
for key, value in self.root.items():
value.to_file(folder / f"{key}.json")
diff --git a/src/ethereum_test_specs/base.py b/src/ethereum_test_specs/base.py
index ff4f11e071b..c5613dc22ae 100644
--- a/src/ethereum_test_specs/base.py
+++ b/src/ethereum_test_specs/base.py
@@ -17,8 +17,8 @@
BaseFixture,
FixtureFormat,
LabeledFixtureFormat,
- SharedPreState,
- SharedPreStateGroup,
+ PreAllocGroup,
+ PreAllocGroups,
)
from ethereum_test_forks import Fork
from ethereum_test_types import Alloc, Environment, Withdrawal
@@ -209,29 +209,29 @@ def check_exception_test(
def get_genesis_environment(self, fork: Fork) -> Environment:
"""
- Get the genesis environment for shared pre-allocation.
+ Get the genesis environment for pre-allocation groups.
Must be implemented by subclasses to provide the appropriate environment.
"""
raise NotImplementedError(
- f"{self.__class__.__name__} must implement genesis environment access for shared "
- "pre-allocation"
+ f"{self.__class__.__name__} must implement genesis environment access for use with "
+ "pre-allocation groups."
)
- def update_shared_pre_state(
- self, shared_pre_state: SharedPreState, fork: Fork, test_id: str
- ) -> SharedPreState:
- """Create or update the shared pre-state group with the pre from the current spec."""
+ def update_pre_alloc_groups(
+ self, pre_alloc_groups: PreAllocGroups, fork: Fork, test_id: str
+ ) -> PreAllocGroups:
+ """Create or update the pre-allocation group with the pre from the current spec."""
if not hasattr(self, "pre"):
raise AttributeError(
- f"{self.__class__.__name__} does not have a 'pre' field. Shared pre-allocation "
- "is only supported for test types that define pre-allocation."
+ f"{self.__class__.__name__} does not have a 'pre' field. Pre-allocation groups "
+ "are only supported for test types that define pre-allocation."
)
- pre_alloc_hash = self.compute_shared_pre_alloc_hash(fork=fork)
+ pre_alloc_hash = self.compute_pre_alloc_group_hash(fork=fork)
- if pre_alloc_hash in shared_pre_state:
+ if pre_alloc_hash in pre_alloc_groups:
# Update existing group - just merge pre-allocations
- group = shared_pre_state[pre_alloc_hash]
+ group = pre_alloc_groups[pre_alloc_hash]
group.pre = Alloc.merge(
group.pre,
self.pre,
@@ -241,10 +241,10 @@ def update_shared_pre_state(
group.test_ids.append(str(test_id))
group.test_count = len(group.test_ids)
group.pre_account_count = len(group.pre.root)
- shared_pre_state[pre_alloc_hash] = group
+ pre_alloc_groups[pre_alloc_hash] = group
else:
# Create new group - use Environment instead of expensive genesis generation
- group = SharedPreStateGroup(
+ group = PreAllocGroup(
test_count=1,
pre_account_count=len(self.pre.root),
test_ids=[str(test_id)],
@@ -252,15 +252,15 @@ def update_shared_pre_state(
environment=self.get_genesis_environment(fork),
pre=self.pre,
)
- shared_pre_state[pre_alloc_hash] = group
- return shared_pre_state
+ pre_alloc_groups[pre_alloc_hash] = group
+ return pre_alloc_groups
- def compute_shared_pre_alloc_hash(self, fork: Fork) -> str:
- """Hash (fork, env) in order to group tests by shared genesis config."""
+ def compute_pre_alloc_group_hash(self, fork: Fork) -> str:
+ """Hash (fork, env) in order to group tests by genesis config."""
if not hasattr(self, "pre"):
raise AttributeError(
- f"{self.__class__.__name__} does not have a 'pre' field. Shared pre-allocation "
- "is only supported for test types that define pre-allocation."
+ f"{self.__class__.__name__} does not have a 'pre' field. Pre-allocation group "
+ "usage is only supported for test types that define pre-allocs."
)
fork_digest = hashlib.sha256(fork.name().encode("utf-8")).digest()
fork_hash = int.from_bytes(fork_digest[:8], byteorder="big")
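The grouping key derived above can be illustrated in isolation. This is a minimal sketch, not the exact production hash: it assumes the environment is a plain JSON-serializable dict, and the way the two digests are combined (XOR of the leading 8 bytes) is illustrative only; the diff shows the fork digest being truncated the same way but does not show the full combining scheme:

```python
import hashlib
import json


def pre_alloc_group_hash(fork_name: str, environment: dict) -> str:
    """Derive a stable group key from (fork, genesis environment).

    Tests that produce the same key can share one pre-allocation group,
    since they need identical genesis configuration.
    """
    fork_digest = hashlib.sha256(fork_name.encode("utf-8")).digest()
    env_digest = hashlib.sha256(
        json.dumps(environment, sort_keys=True).encode("utf-8")
    ).digest()
    # Truncate each digest to 8 bytes, as in the diff, then combine
    # (the XOR here is an assumption, not the library's exact scheme).
    combined = int.from_bytes(fork_digest[:8], "big") ^ int.from_bytes(env_digest[:8], "big")
    return f"{combined:016x}"
```

Sorting the JSON keys makes the hash insensitive to dict ordering, so equivalent environments always land in the same group.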
diff --git a/src/ethereum_test_specs/blockchain.py b/src/ethereum_test_specs/blockchain.py
index dba3e1c1c8e..a2091995728 100644
--- a/src/ethereum_test_specs/blockchain.py
+++ b/src/ethereum_test_specs/blockchain.py
@@ -28,7 +28,7 @@
from ethereum_test_fixtures import (
BaseFixture,
BlockchainEngineFixture,
- BlockchainEngineReorgFixture,
+ BlockchainEngineXFixture,
BlockchainFixture,
FixtureFormat,
LabeledFixtureFormat,
@@ -308,7 +308,7 @@ class BlockchainTest(BaseTest):
supported_fixture_formats: ClassVar[Sequence[FixtureFormat | LabeledFixtureFormat]] = [
BlockchainFixture,
BlockchainEngineFixture,
- BlockchainEngineReorgFixture,
+ BlockchainEngineXFixture,
]
supported_execute_formats: ClassVar[Sequence[LabeledExecuteFormat]] = [
LabeledExecuteFormat(
@@ -630,7 +630,7 @@ def make_hive_fixture(
t8n: TransitionTool,
fork: Fork,
fixture_format: FixtureFormat = BlockchainEngineFixture,
- ) -> BlockchainEngineFixture | BlockchainEngineReorgFixture:
+ ) -> BlockchainEngineFixture | BlockchainEngineXFixture:
"""Create a hive fixture from the blocktest definition."""
fixture_payloads: List[FixtureEngineNewPayload] = []
@@ -722,8 +722,8 @@ def make_hive_fixture(
}
# Add format-specific fields
- if fixture_format == BlockchainEngineReorgFixture:
- # For reorg format, exclude pre (will be provided via shared state)
+ if fixture_format == BlockchainEngineXFixture:
+ # For Engine X format, exclude pre (provided via the pre-allocation group)
# and prepare for state diff optimization
fixture_data.update(
{
@@ -733,7 +733,7 @@ def make_hive_fixture(
"pre_hash": "", # Will be set by BaseTestWrapper
}
)
- return BlockchainEngineReorgFixture(**fixture_data)
+ return BlockchainEngineXFixture(**fixture_data)
else:
# Standard engine fixture
fixture_data.update(
@@ -747,7 +747,7 @@ def make_hive_fixture(
return BlockchainEngineFixture(**fixture_data)
def get_genesis_environment(self, fork: Fork) -> Environment:
- """Get the genesis environment for shared pre-allocation."""
+ """Get the genesis environment for pre-allocation groups."""
return self.genesis_environment
def generate(
@@ -760,7 +760,7 @@ def generate(
t8n.reset_traces()
if fixture_format == BlockchainEngineFixture:
return self.make_hive_fixture(t8n, fork, fixture_format)
- elif fixture_format == BlockchainEngineReorgFixture:
+ elif fixture_format == BlockchainEngineXFixture:
return self.make_hive_fixture(t8n, fork, fixture_format)
elif fixture_format == BlockchainFixture:
return self.make_fixture(t8n, fork)
diff --git a/src/ethereum_test_specs/state.py b/src/ethereum_test_specs/state.py
index 903272726be..3bb63f762d3 100644
--- a/src/ethereum_test_specs/state.py
+++ b/src/ethereum_test_specs/state.py
@@ -234,7 +234,7 @@ def make_state_test_fixture(
)
def get_genesis_environment(self, fork: Fork) -> Environment:
- """Get the genesis environment for shared pre-allocation."""
+ """Get the genesis environment for pre-allocation groups."""
return self._generate_blockchain_genesis_environment(fork=fork)
def generate(
diff --git a/src/pytest_plugins/consume/consume.py b/src/pytest_plugins/consume/consume.py
index b2e797ef5b1..36c8fd4eeac 100644
--- a/src/pytest_plugins/consume/consume.py
+++ b/src/pytest_plugins/consume/consume.py
@@ -1,12 +1,20 @@
-"""A pytest plugin providing common functionality for consuming test fixtures."""
+"""
+A pytest plugin providing common functionality for consuming test fixtures.
+Features:
+- Downloads and caches test fixtures from various sources (local, URL, release).
+- Manages test case generation from fixture files.
+- Provides xdist load balancing for large pre-allocation groups (enginex simulator).
+"""
+
+import logging
import re
import sys
import tarfile
from dataclasses import dataclass
from io import BytesIO
from pathlib import Path
-from typing import List, Tuple
+from typing import Dict, List, Tuple
from urllib.parse import urlparse
import platformdirs
@@ -22,11 +30,118 @@
from .releases import ReleaseTag, get_release_page_url, get_release_url, is_release_url, is_url
+logger = logging.getLogger(__name__)
+
CACHED_DOWNLOADS_DIRECTORY = (
Path(platformdirs.user_cache_dir("ethereum-execution-spec-tests")) / "cached_downloads"
)
+class XDistGroupMapper:
+ """
+ Maps test cases to xdist groups, splitting large pre-allocation groups into sub-groups.
+
+ This class helps improve load balancing when using pytest-xdist with --dist=loadgroup
+ by breaking up large pre-allocation groups (e.g., 1000+ tests) into smaller virtual
+ sub-groups while maintaining the constraint that tests from the same pre-allocation
+ group must run on the same worker.
+ """
+
+ def __init__(self, max_group_size: int = 400):
+ """Initialize the mapper with a maximum group size."""
+ self.max_group_size = max_group_size
+ self.group_sizes: Dict[str, int] = {}
+ self.test_to_subgroup: Dict[str, int] = {}
+ self._built = False
+
+ def build_mapping(self, test_cases: TestCases) -> None:
+ """
+ Build the mapping of test cases to sub-groups.
+
+ This analyzes all test cases and determines which pre-allocation groups
+ need to be split into sub-groups based on the max_group_size.
+ """
+ if self._built:
+ return
+
+ # Count tests per pre-allocation group
+ for test_case in test_cases:
+ if hasattr(test_case, "pre_hash") and test_case.pre_hash:
+ pre_hash = test_case.pre_hash
+ self.group_sizes[pre_hash] = self.group_sizes.get(pre_hash, 0) + 1
+
+ # Assign sub-groups for large groups
+ group_counters: Dict[str, int] = {}
+ for test_case in test_cases:
+ if hasattr(test_case, "pre_hash") and test_case.pre_hash:
+ pre_hash = test_case.pre_hash
+ group_size = self.group_sizes[pre_hash]
+
+ if group_size <= self.max_group_size:
+ # Small group, no sub-group needed
+ self.test_to_subgroup[test_case.id] = 0
+ else:
+ # Large group, assign to sub-group using round-robin
+ counter = group_counters.get(pre_hash, 0)
+ sub_group = counter // self.max_group_size
+ self.test_to_subgroup[test_case.id] = sub_group
+ group_counters[pre_hash] = counter + 1
+
+ self._built = True
+
+ # Log summary of large groups
+ large_groups = [
+ (pre_hash, size)
+ for pre_hash, size in self.group_sizes.items()
+ if size > self.max_group_size
+ ]
+ if large_groups:
+ logger.info(
+ f"Found {len(large_groups)} pre-allocation groups larger than "
+ f"{self.max_group_size} tests that will be split for better load balancing"
+ )
+
+ def get_xdist_group_name(self, test_case) -> str:
+ """
+ Get the xdist group name for a test case.
+
+ For small groups, returns the pre_hash as-is.
+ For large groups, returns "{pre_hash}:{sub_group_index}".
+ """
+ if not hasattr(test_case, "pre_hash") or not test_case.pre_hash:
+ # No pre_hash, use test ID as fallback
+ return test_case.id
+
+ pre_hash = test_case.pre_hash
+ group_size = self.group_sizes.get(pre_hash, 0)
+
+ if group_size <= self.max_group_size:
+ # Small group, use pre_hash as-is
+ return pre_hash
+
+ # Large group, include sub-group index
+ sub_group = self.test_to_subgroup.get(test_case.id, 0)
+ return f"{pre_hash}:{sub_group}"
+
+ def get_split_statistics(self) -> Dict[str, Dict[str, int]]:
+ """
+ Get statistics about how groups were split.
+
+ Returns a dict with information about each pre-allocation group
+ and how many sub-groups it was split into.
+ """
+ stats = {}
+ for pre_hash, size in self.group_sizes.items():
+ if size > self.max_group_size:
+ num_subgroups = (size + self.max_group_size - 1) // self.max_group_size
+ stats[pre_hash] = {
+ "total_tests": size,
+ "num_subgroups": num_subgroups,
+ "tests_per_subgroup": size // num_subgroups,
+ }
+ return stats
+
+
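The sub-group naming that `XDistGroupMapper` implements above can be condensed into a few lines. A minimal sketch, assuming only a flat list of test ids per pre-allocation group (the real class builds the mapping incrementally across all test cases):

```python
from typing import Dict, List


def split_group(pre_hash: str, test_ids: List[str], max_group_size: int = 400) -> Dict[str, str]:
    """Map each test id to an xdist group name.

    Groups at or under max_group_size keep the bare pre_hash; larger
    groups get "{pre_hash}:{sub_group}" names so --dist=loadgroup can
    schedule the sub-groups independently.
    """
    if len(test_ids) <= max_group_size:
        return {tid: pre_hash for tid in test_ids}
    # Round-robin in blocks: test i lands in sub-group i // max_group_size.
    return {tid: f"{pre_hash}:{i // max_group_size}" for i, tid in enumerate(test_ids)}
```

With `pytest-xdist`, every test carrying the same `xdist_group` marker name is pinned to one worker, so splitting a 1000-test group into three named sub-groups lets three workers share the load.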
def default_input() -> str:
"""
Directory (default) to consume generated test fixtures from. Defined as a
@@ -346,6 +461,31 @@ def pytest_configure(config): # noqa: D103
index = IndexFile.model_validate_json(index_file.read_text())
config.test_cases = index.test_cases
+ # Create XDistGroupMapper for enginex simulator if needed
+ # Check if enginex options are present (indicates enginex simulator is being used)
+ try:
+ max_group_size = config.getoption("--enginex-max-group-size", None)
+ if max_group_size is not None:
+ config.xdist_group_mapper = XDistGroupMapper(max_group_size)
+ config.xdist_group_mapper.build_mapping(config.test_cases)
+
+ # Log statistics about group splitting
+ split_stats = config.xdist_group_mapper.get_split_statistics()
+ if split_stats:
+ rich.print("[bold yellow]Pre-allocation group splitting for load balancing:[/]")
+ for pre_hash, stats in split_stats.items():
+ rich.print(
+ f" Group {pre_hash[:8]}: {stats['total_tests']} tests → "
+ f"{stats['num_subgroups']} sub-groups "
+ f"(~{stats['tests_per_subgroup']} tests each)"
+ )
+ rich.print(f" Max group size: {max_group_size}")
+ else:
+ config.xdist_group_mapper = None
+ except ValueError:
+ # enginex options not available, not using enginex simulator
+ config.xdist_group_mapper = None
+
for fixture_format in BaseFixture.formats.values():
config.addinivalue_line(
"markers",
@@ -417,16 +557,35 @@ def pytest_generate_tests(metafunc):
return
test_cases = metafunc.config.test_cases
+ xdist_group_mapper = getattr(metafunc.config, "xdist_group_mapper", None)
param_list = []
for test_case in test_cases:
if test_case.format.format_name not in metafunc.config._supported_fixture_formats:
continue
fork_markers = get_relative_fork_markers(test_case.fork, strict_mode=False)
+
+ # Append pre_hash (first 8 chars) to test ID for easier selection with --sim.limit
+ test_id = test_case.id
+ if hasattr(test_case, "pre_hash") and test_case.pre_hash:
+ test_id = f"{test_case.id}[{test_case.pre_hash[:8]}]"
+
+ # Determine xdist group name
+ if xdist_group_mapper and hasattr(test_case, "pre_hash") and test_case.pre_hash:
+ # Use the mapper to get potentially split group name
+ xdist_group_name = xdist_group_mapper.get_xdist_group_name(test_case)
+ elif hasattr(test_case, "pre_hash") and test_case.pre_hash:
+ # No mapper or not enginex, use pre_hash directly
+ xdist_group_name = test_case.pre_hash
+ else:
+ # No pre_hash, use test ID
+ xdist_group_name = test_case.id
+
param = pytest.param(
test_case,
- id=test_case.id,
+ id=test_id,
marks=[getattr(pytest.mark, m) for m in fork_markers]
- + [getattr(pytest.mark, test_case.format.format_name)],
+ + [getattr(pytest.mark, test_case.format.format_name)]
+ + [pytest.mark.xdist_group(name=xdist_group_name)],
)
param_list.append(param)
diff --git a/src/pytest_plugins/consume/hive_simulators/conftest.py b/src/pytest_plugins/consume/hive_simulators/conftest.py
deleted file mode 100644
index 4d0c92fc6ca..00000000000
--- a/src/pytest_plugins/consume/hive_simulators/conftest.py
+++ /dev/null
@@ -1,430 +0,0 @@
-"""Common pytest fixtures for the RLP and Engine simulators."""
-
-import io
-import json
-import logging
-import textwrap
-import urllib
-import warnings
-from pathlib import Path
-from typing import Dict, Generator, List, Literal, cast
-
-import pytest
-import rich
-from hive.client import Client, ClientType
-from hive.testing import HiveTest
-
-from ethereum_test_base_types import Number, to_json
-from ethereum_test_exceptions import ExceptionMapper
-from ethereum_test_fixtures import (
- BaseFixture,
- BlockchainFixtureCommon,
-)
-from ethereum_test_fixtures.consume import TestCaseIndexFile, TestCaseStream
-from ethereum_test_fixtures.file import Fixtures
-from ethereum_test_rpc import EthRPC
-from pytest_plugins.consume.consume import FixturesSource
-from pytest_plugins.consume.hive_simulators.ruleset import ruleset # TODO: generate dynamically
-from pytest_plugins.pytest_hive.hive_info import ClientFile, HiveInfo
-
-from .exceptions import EXCEPTION_MAPPERS
-from .timing import TimingData
-
-logger = logging.getLogger(__name__)
-
-
-def pytest_addoption(parser):
- """Hive simulator specific consume command line options."""
- consume_group = parser.getgroup(
- "consume", "Arguments related to consuming fixtures via a client"
- )
- consume_group.addoption(
- "--timing-data",
- action="store_true",
- dest="timing_data",
- default=False,
- help="Log the timing data for each test case execution.",
- )
- consume_group.addoption(
- "--disable-strict-exception-matching",
- action="store",
- dest="disable_strict_exception_matching",
- default="",
- help=(
- "Comma-separated list of client names and/or forks which should NOT use strict "
- "exception matching."
- ),
- )
-
-
-@pytest.fixture(scope="function")
-def eth_rpc(client: Client) -> EthRPC:
- """Initialize ethereum RPC client for the execution client under test."""
- return EthRPC(f"http://{client.ip}:8545")
-
-
-@pytest.fixture(scope="function")
-def hive_clients_yaml_target_filename() -> str:
- """Return the name of the target clients YAML file."""
- return "clients_eest.yaml"
-
-
-@pytest.fixture(scope="function")
-def hive_clients_yaml_generator_command(
- client_type: ClientType,
- client_file: ClientFile,
- hive_clients_yaml_target_filename: str,
- hive_info: HiveInfo,
-) -> str:
- """Generate a shell command that creates a clients YAML file for the current client."""
- try:
- if not client_file:
- raise ValueError("No client information available - try updating hive")
- client_config = [c for c in client_file.root if c.client in client_type.name]
- if not client_config:
- raise ValueError(f"Client '{client_type.name}' not found in client file")
- try:
- yaml_content = ClientFile(root=[client_config[0]]).yaml().replace(" ", "&nbsp;")
- return f'echo "\\\n{yaml_content}" > {hive_clients_yaml_target_filename}'
- except Exception as e:
- raise ValueError(f"Failed to generate YAML: {str(e)}") from e
- except ValueError as e:
- error_message = str(e)
- warnings.warn(
- f"{error_message}. The Hive clients YAML generator command will not be available.",
- stacklevel=2,
- )
-
- issue_title = f"Client {client_type.name} configuration issue"
- issue_body = f"Error: {error_message}\nHive version: {hive_info.commit}\n"
- issue_url = f"https://github.com/ethereum/execution-spec-tests/issues/new?title={urllib.parse.quote(issue_title)}&body={urllib.parse.quote(issue_body)}"
-
- return (
- f"Error: {error_message}\n"
- f'Please <a href="{issue_url}">create an issue</a> to report this problem.'
- )
-
-
-@pytest.fixture(scope="function")
-def filtered_hive_options(hive_info: HiveInfo) -> List[str]:
- """Filter Hive command options to remove unwanted options."""
- logger.info("Hive info: %s", hive_info.command)
-
- unwanted_options = [
- "--client", # gets overwritten: we specify a single client; the one from the test case
- "--client-file", # gets overwritten: we'll write our own client file
- "--results-root", # use default value instead (or you have to pass it to ./hiveview)
- "--sim.limit", # gets overwritten: we only run the current test case id
- "--sim.parallelism", # skip; we'll only be running a single test
- ]
-
- command_parts = []
- skip_next = False
- for part in hive_info.command:
- if skip_next:
- skip_next = False
- continue
-
- if part in unwanted_options:
- skip_next = True
- continue
-
- if any(part.startswith(f"{option}=") for option in unwanted_options):
- continue
-
- command_parts.append(part)
-
- return command_parts
-
-
-@pytest.fixture(scope="function")
-def hive_client_config_file_parameter(hive_clients_yaml_target_filename: str) -> str:
- """Return the hive client config file parameter."""
- return f"--client-file {hive_clients_yaml_target_filename}"
-
-
-@pytest.fixture(scope="function")
-def hive_consume_command(
- test_case: TestCaseIndexFile | TestCaseStream,
- hive_client_config_file_parameter: str,
- filtered_hive_options: List[str],
- client_type: ClientType,
-) -> str:
- """Command to run the test within hive."""
- command_parts = filtered_hive_options.copy()
- command_parts.append(f"{hive_client_config_file_parameter}")
- command_parts.append(f"--client={client_type.name}")
- command_parts.append(f'--sim.limit="id:{test_case.id}"')
-
- return " ".join(command_parts)
-
-
-@pytest.fixture(scope="function")
-def hive_dev_command(
- client_type: ClientType,
- hive_client_config_file_parameter: str,
-) -> str:
- """Return the command used to instantiate hive alongside the `consume` command."""
- return f"./hive --dev {hive_client_config_file_parameter} --client {client_type.name}"
-
-
-@pytest.fixture(scope="function")
-def eest_consume_command(
- test_suite_name: str,
- test_case: TestCaseIndexFile | TestCaseStream,
- fixture_source_flags: List[str],
-) -> str:
- """Commands to run the test within EEST using a hive dev back-end."""
- flags = " ".join(fixture_source_flags)
- return (
- f"uv run consume {test_suite_name.split('-')[-1]} "
- f'{flags} --sim.limit="id:{test_case.id}" -v -s'
- )
-
-
-@pytest.fixture(scope="function")
-def test_case_description(
- fixture: BaseFixture,
- test_case: TestCaseIndexFile | TestCaseStream,
- hive_clients_yaml_generator_command: str,
- hive_consume_command: str,
- hive_dev_command: str,
- eest_consume_command: str,
-) -> str:
- """Create the description of the current blockchain fixture test case."""
- test_url = fixture.info.get("url", "")
-
- if "description" not in fixture.info or fixture.info["description"] is None:
- test_docstring = "No documentation available."
- else:
- # this prefix was included in the fixture description field for fixtures <= v4.3.0
- test_docstring = fixture.info["description"].replace("Test function documentation:\n", "") # type: ignore
-
- description = textwrap.dedent(f"""
- Test Details
- {test_case.id}
- {f'<a href="{test_url}">[source]</a>' if test_url else ""}
-
- {test_docstring}
-
- Run This Test Locally:
- To run this test in hive:
- {hive_clients_yaml_generator_command}
- {hive_consume_command}
-
- Advanced: Run the test against a hive developer backend using EEST's consume command
- Create the client YAML file, as above, then:
- 1. Start hive in dev mode: {hive_dev_command}
- 2. In the EEST repository root: {eest_consume_command}
- """) # noqa: E501
-
- description = description.strip()
- description = description.replace("\n", "
")
- return description
-
-
-@pytest.fixture(scope="function", autouse=True)
-def total_timing_data(request) -> Generator[TimingData, None, None]:
- """Record timing data for various stages of executing test case."""
- with TimingData("Total (seconds)") as total_timing_data:
- yield total_timing_data
- if request.config.getoption("timing_data"):
- rich.print(f"\n{total_timing_data.formatted()}")
- if hasattr(request.node, "rep_call"): # make available for test reports
- request.node.rep_call.timings = total_timing_data
-
-
-@pytest.fixture(scope="function")
-def client_genesis(fixture: BlockchainFixtureCommon) -> dict:
- """Convert the fixture genesis block header and pre-state to a client genesis state."""
- genesis = to_json(fixture.genesis)
- alloc = to_json(fixture.pre)
- # NOTE: nethermind requires account keys without '0x' prefix
- genesis["alloc"] = {k.replace("0x", ""): v for k, v in alloc.items()}
- return genesis
-
-
-@pytest.fixture(scope="function")
-def check_live_port(test_suite_name: str) -> Literal[8545, 8551]:
- """Port used by hive to check for liveness of the client."""
- if test_suite_name == "eest/consume-rlp":
- return 8545
- elif test_suite_name == "eest/consume-engine":
- return 8551
- raise ValueError(
- f"Unexpected test suite name '{test_suite_name}' while setting HIVE_CHECK_LIVE_PORT."
- )
-
-
-@pytest.fixture(scope="function")
-def environment(
- fixture: BlockchainFixtureCommon,
- check_live_port: Literal[8545, 8551],
-) -> dict:
- """Define the environment that hive will start the client with."""
- assert fixture.fork in ruleset, f"fork '{fixture.fork}' missing in hive ruleset"
- return {
- "HIVE_CHAIN_ID": str(Number(fixture.config.chain_id)),
- "HIVE_FORK_DAO_VOTE": "1",
- "HIVE_NODETYPE": "full",
- "HIVE_CHECK_LIVE_PORT": str(check_live_port),
- **{k: f"{v:d}" for k, v in ruleset[fixture.fork].items()},
- }
-
-
-@pytest.fixture(scope="function")
-def buffered_genesis(client_genesis: dict) -> io.BufferedReader:
- """Create a buffered reader for the genesis block header of the current test fixture."""
- genesis_json = json.dumps(client_genesis)
- genesis_bytes = genesis_json.encode("utf-8")
- return io.BufferedReader(cast(io.RawIOBase, io.BytesIO(genesis_bytes)))
-
-
-@pytest.fixture(scope="session")
-def client_exception_mapper_cache():
- """Cache for exception mappers by client type."""
- return {}
-
-
-@pytest.fixture(scope="function")
-def client_exception_mapper(
- client_type: ClientType, client_exception_mapper_cache
-) -> ExceptionMapper | None:
- """Return the exception mapper for the client type, with caching."""
- if client_type.name not in client_exception_mapper_cache:
- for client in EXCEPTION_MAPPERS:
- if client in client_type.name:
- client_exception_mapper_cache[client_type.name] = EXCEPTION_MAPPERS[client]
- break
- else:
- client_exception_mapper_cache[client_type.name] = None
-
- return client_exception_mapper_cache[client_type.name]
-
-
-@pytest.fixture(scope="session")
-def disable_strict_exception_matching(request: pytest.FixtureRequest) -> List[str]:
- """Return the list of clients or forks that should NOT use strict exception matching."""
- config_string = request.config.getoption("disable_strict_exception_matching")
- return config_string.split(",") if config_string else []
-
-
-@pytest.fixture(scope="function")
-def client_strict_exception_matching(
- client_type: ClientType,
- disable_strict_exception_matching: List[str],
-) -> bool:
- """Return True if the client type should use strict exception matching."""
- return not any(
- client.lower() in client_type.name.lower() for client in disable_strict_exception_matching
- )
-
-
-@pytest.fixture(scope="function")
-def fork_strict_exception_matching(
- fixture: BlockchainFixtureCommon,
- disable_strict_exception_matching: List[str],
-) -> bool:
- """Return True if the fork should use strict exception matching."""
- # NOTE: `in` makes it easier for transition forks ("Prague" in "CancunToPragueAtTime15k")
- return not any(
- s.lower() in str(fixture.fork).lower() for s in disable_strict_exception_matching
- )
-
-
-@pytest.fixture(scope="function")
-def strict_exception_matching(
- client_strict_exception_matching: bool,
- fork_strict_exception_matching: bool,
-) -> bool:
- """Return True if the test should use strict exception matching."""
- return client_strict_exception_matching and fork_strict_exception_matching
-
-
-@pytest.fixture(scope="function")
-def client(
- hive_test: HiveTest,
- client_files: dict, # configured within: rlp/conftest.py & engine/conftest.py
- environment: dict,
- client_type: ClientType,
- total_timing_data: TimingData,
-) -> Generator[Client, None, None]:
- """Initialize the client with the appropriate files and environment variables."""
- logger.info(f"Starting client ({client_type.name})...")
- with total_timing_data.time("Start client"):
- client = hive_test.start_client(
- client_type=client_type, environment=environment, files=client_files
- )
- error_message = (
- f"Unable to connect to the client container ({client_type.name}) via Hive during test "
- "setup. Check the client or Hive server logs for more information."
- )
- assert client is not None, error_message
- logger.info(f"Client ({client_type.name}) ready!")
- yield client
- logger.info(f"Stopping client ({client_type.name})...")
- with total_timing_data.time("Stop client"):
- client.stop()
- logger.info(f"Client ({client_type.name}) stopped!")
-
-
-@pytest.fixture(scope="function", autouse=True)
-def timing_data(
- total_timing_data: TimingData, client: Client
-) -> Generator[TimingData, None, None]:
- """Record timing data for the main execution of the test case."""
- with total_timing_data.time("Test case execution") as timing_data:
- yield timing_data
-
-
-class FixturesDict(Dict[Path, Fixtures]):
- """
- A dictionary caches loaded fixture files to avoid reloading the same file
- multiple times.
- """
-
- def __init__(self) -> None:
- """Initialize the dictionary that caches loaded fixture files."""
- self._fixtures: Dict[Path, Fixtures] = {}
-
- def __getitem__(self, key: Path) -> Fixtures:
- """Return the fixtures from the index file, if not found, load from disk."""
- assert key.is_file(), f"Expected a file path, got '{key}'"
- if key not in self._fixtures:
- self._fixtures[key] = Fixtures.model_validate_json(key.read_text())
- return self._fixtures[key]
-
-
-@pytest.fixture(scope="session")
-def fixture_file_loader() -> Dict[Path, Fixtures]:
- """Return a singleton dictionary that caches loaded fixture files used in all tests."""
- return FixturesDict()
-
-
-@pytest.fixture(scope="function")
-def fixture(
- fixtures_source: FixturesSource,
- fixture_file_loader: Dict[Path, Fixtures],
- test_case: TestCaseIndexFile | TestCaseStream,
-) -> BaseFixture:
- """
- Load the fixture from a file or from stream in any of the supported
- fixture formats.
-
- The fixture is either already available within the test case (if consume
- is taking input on stdin) or loaded from the fixture json file if taking
- input from disk (fixture directory with index file).
- """
- fixture: BaseFixture
- if fixtures_source.is_stdin:
- assert isinstance(test_case, TestCaseStream), "Expected a stream test case"
- fixture = test_case.fixture
- else:
- assert isinstance(test_case, TestCaseIndexFile), "Expected an index file test case"
- fixtures_file_path = fixtures_source.path / test_case.json_path
- fixtures: Fixtures = fixture_file_loader[fixtures_file_path]
- fixture = fixtures[test_case.id]
- assert isinstance(fixture, test_case.format), (
- f"Expected a {test_case.format.format_name} test fixture"
- )
- return fixture
diff --git a/src/pytest_plugins/consume/hive_simulators_reorg/__init__.py b/src/pytest_plugins/consume/hive_simulators_reorg/__init__.py
deleted file mode 100644
index 59ca949d150..00000000000
--- a/src/pytest_plugins/consume/hive_simulators_reorg/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Hive simulators reorganization consumer plugin."""
diff --git a/src/pytest_plugins/consume/hive_simulators/__init__.py b/src/pytest_plugins/consume/simulators/__init__.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/__init__.py
rename to src/pytest_plugins/consume/simulators/__init__.py
diff --git a/src/pytest_plugins/consume/simulators/base.py b/src/pytest_plugins/consume/simulators/base.py
new file mode 100644
index 00000000000..aa93d5297fa
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/base.py
@@ -0,0 +1,86 @@
+"""Common pytest fixtures for the Hive simulators."""
+
+from pathlib import Path
+from typing import Dict, Literal
+
+import pytest
+from hive.client import Client
+
+from ethereum_test_fixtures import (
+ BaseFixture,
+)
+from ethereum_test_fixtures.consume import TestCaseIndexFile, TestCaseStream
+from ethereum_test_fixtures.file import Fixtures
+from ethereum_test_rpc import EthRPC
+from pytest_plugins.consume.consume import FixturesSource
+
+
+@pytest.fixture(scope="function")
+def eth_rpc(client: Client) -> EthRPC:
+ """Initialize ethereum RPC client for the execution client under test."""
+ return EthRPC(f"http://{client.ip}:8545")
+
+
+@pytest.fixture(scope="function")
+def check_live_port(test_suite_name: str) -> Literal[8545, 8551]:
+ """Port used by hive to check for liveness of the client."""
+ if test_suite_name == "eest/consume-rlp":
+ return 8545
+ elif test_suite_name == "eest/consume-engine":
+ return 8551
+ raise ValueError(
+ f"Unexpected test suite name '{test_suite_name}' while setting HIVE_CHECK_LIVE_PORT."
+ )
+
+
+class FixturesDict(Dict[Path, Fixtures]):
+ """
+ A dictionary that caches loaded fixture files to avoid reloading the same file
+ multiple times.
+ """
+
+ def __init__(self) -> None:
+ """Initialize the dictionary that caches loaded fixture files."""
+ self._fixtures: Dict[Path, Fixtures] = {}
+
+ def __getitem__(self, key: Path) -> Fixtures:
+ """Return the fixtures from the index file, if not found, load from disk."""
+ assert key.is_file(), f"Expected a file path, got '{key}'"
+ if key not in self._fixtures:
+ self._fixtures[key] = Fixtures.model_validate_json(key.read_text())
+ return self._fixtures[key]
+
+
+@pytest.fixture(scope="session")
+def fixture_file_loader() -> Dict[Path, Fixtures]:
+ """Return a singleton dictionary that caches loaded fixture files used in all tests."""
+ return FixturesDict()
+
+
+@pytest.fixture(scope="function")
+def fixture(
+ fixtures_source: FixturesSource,
+ fixture_file_loader: Dict[Path, Fixtures],
+ test_case: TestCaseIndexFile | TestCaseStream,
+) -> BaseFixture:
+ """
+ Load the fixture from a file or from stream in any of the supported
+ fixture formats.
+
+ The fixture is either already available within the test case (if consume
+ is taking input on stdin) or loaded from the fixture json file if taking
+ input from disk (fixture directory with index file).
+ """
+ fixture: BaseFixture
+ if fixtures_source.is_stdin:
+ assert isinstance(test_case, TestCaseStream), "Expected a stream test case"
+ fixture = test_case.fixture
+ else:
+ assert isinstance(test_case, TestCaseIndexFile), "Expected an index file test case"
+ fixtures_file_path = fixtures_source.path / test_case.json_path
+ fixtures: Fixtures = fixture_file_loader[fixtures_file_path]
+ fixture = fixtures[test_case.id]
+ assert isinstance(fixture, test_case.format), (
+ f"Expected a {test_case.format.format_name} test fixture"
+ )
+ return fixture
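The `FixturesDict` cache above avoids re-parsing a fixture JSON file for every test case it contains. The same lazy-load-on-miss pattern can be sketched in isolation (names here are illustrative, not part of the plugin):

```python
from pathlib import Path


class LazyFileCache(dict):
    """Illustrative sketch of the FixturesDict pattern: read a file the
    first time its path is looked up, then serve the cached result."""

    def __missing__(self, key: Path) -> str:
        # Called only on a cache miss; FixturesDict achieves the same
        # effect by overriding __getitem__ explicitly.
        value = key.read_text()
        self[key] = value
        return value
```

`FixturesDict` overrides `__getitem__` directly so it can also assert the key is an existing file; `__missing__` is the dict-native way to express the same miss-then-load behavior.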
diff --git a/src/pytest_plugins/consume/hive_simulators/engine/__init__.py b/src/pytest_plugins/consume/simulators/engine/__init__.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/engine/__init__.py
rename to src/pytest_plugins/consume/simulators/engine/__init__.py
diff --git a/src/pytest_plugins/consume/hive_simulators/engine/conftest.py b/src/pytest_plugins/consume/simulators/engine/conftest.py
similarity index 81%
rename from src/pytest_plugins/consume/hive_simulators/engine/conftest.py
rename to src/pytest_plugins/consume/simulators/engine/conftest.py
index fd997bc1eec..0206a9a514d 100644
--- a/src/pytest_plugins/consume/hive_simulators/engine/conftest.py
+++ b/src/pytest_plugins/consume/simulators/engine/conftest.py
@@ -1,8 +1,4 @@
-"""
-Pytest fixtures for the `consume engine` simulator.
-
-Configures the hive back-end & EL clients for each individual test execution.
-"""
+"""Pytest plugin for the `consume engine` simulator."""
import io
from typing import Mapping
@@ -14,12 +10,32 @@
from ethereum_test_fixtures import BlockchainEngineFixture
from ethereum_test_rpc import EngineRPC
+pytest_plugins = (
+ "pytest_plugins.consume.simulators.base",
+ "pytest_plugins.consume.simulators.single_test_client",
+ "pytest_plugins.consume.simulators.test_case_description",
+ "pytest_plugins.consume.simulators.timing_data",
+ "pytest_plugins.consume.simulators.exceptions",
+)
+
def pytest_configure(config):
"""Set the supported fixture formats for the engine simulator."""
config._supported_fixture_formats = [BlockchainEngineFixture.format_name]
+@pytest.fixture(scope="module")
+def test_suite_name() -> str:
+ """The name of the hive test suite used in this simulator."""
+ return "eest/consume-engine"
+
+
+@pytest.fixture(scope="module")
+def test_suite_description() -> str:
+ """The description of the hive test suite used in this simulator."""
+ return "Execute blockchain tests against clients using the Engine API."
+
+
@pytest.fixture(scope="function")
def engine_rpc(client: Client, client_exception_mapper: ExceptionMapper | None) -> EngineRPC:
"""Initialize engine RPC client for the execution client under test."""
@@ -33,18 +49,6 @@ def engine_rpc(client: Client, client_exception_mapper: ExceptionMapper | None)
return EngineRPC(f"http://{client.ip}:8551")
-@pytest.fixture(scope="module")
-def test_suite_name() -> str:
- """The name of the hive test suite used in this simulator."""
- return "eest/consume-engine"
-
-
-@pytest.fixture(scope="module")
-def test_suite_description() -> str:
- """The description of the hive test suite used in this simulator."""
- return "Execute blockchain tests against clients using the Engine API."
-
-
@pytest.fixture(scope="function")
def client_files(buffered_genesis: io.BufferedReader) -> Mapping[str, io.BufferedReader]:
"""Define the files that hive will start the client with."""
diff --git a/src/pytest_plugins/consume/simulators/enginex/__init__.py b/src/pytest_plugins/consume/simulators/enginex/__init__.py
new file mode 100644
index 00000000000..2cb194bb7d7
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/enginex/__init__.py
@@ -0,0 +1 @@
+"""Consume Engine test functions."""
diff --git a/src/pytest_plugins/consume/simulators/enginex/conftest.py b/src/pytest_plugins/consume/simulators/enginex/conftest.py
new file mode 100644
index 00000000000..bc9f185ba84
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/enginex/conftest.py
@@ -0,0 +1,158 @@
+"""
+Pytest fixtures for the `consume enginex` simulator.
+
+Configures the hive back-end & EL clients for test execution with BlockchainEngineXFixtures.
+"""
+
+import logging
+
+import pytest
+from hive.client import Client
+
+from ethereum_test_exceptions import ExceptionMapper
+from ethereum_test_fixtures import BlockchainEngineXFixture
+from ethereum_test_rpc import EngineRPC
+
+logger = logging.getLogger(__name__)
+
+pytest_plugins = (
+ "pytest_plugins.consume.simulators.base",
+ "pytest_plugins.consume.simulators.multi_test_client",
+ "pytest_plugins.consume.simulators.test_case_description",
+ "pytest_plugins.consume.simulators.timing_data",
+ "pytest_plugins.consume.simulators.exceptions",
+ "pytest_plugins.consume.simulators.helpers.test_tracker",
+)
+
+
+def pytest_addoption(parser):
+ """Add enginex-specific command line options."""
+ enginex_group = parser.getgroup("enginex", "EngineX simulator options")
+ enginex_group.addoption(
+ "--enginex-fcu-frequency",
+ action="store",
+ type=int,
+ default=1,
+ help=(
+ "Control forkchoice update frequency for enginex simulator. "
+ "0=disable FCUs, 1=FCU every test (default), N=FCU every Nth test per "
+ "pre-allocation group."
+ ),
+ )
+ enginex_group.addoption(
+ "--enginex-max-group-size",
+ action="store",
+ type=int,
+ default=400,
+ help=(
+ "Maximum number of tests per xdist group. Large pre-allocation groups will be "
+ "split into virtual sub-groups to improve load balancing. Default: 400."
+ ),
+ )
+
+
+def pytest_configure(config):
+ """Set the supported fixture formats and store enginex configuration."""
+ config._supported_fixture_formats = [BlockchainEngineXFixture.format_name]
+
+ # Store FCU frequency on config for access by fixtures
+ config.enginex_fcu_frequency = config.getoption("--enginex-fcu-frequency", 1)
+
+ # Store max group size on config for access during test generation
+ config.enginex_max_group_size = config.getoption("--enginex-max-group-size", 400)
+
+
+@pytest.fixture(scope="module")
+def test_suite_name() -> str:
+ """The name of the hive test suite used in this simulator."""
+ return "eest/consume-enginex"
+
+
+@pytest.fixture(scope="module")
+def test_suite_description() -> str:
+ """The description of the hive test suite used in this simulator."""
+ return (
+ "Execute blockchain tests against clients using the Engine API with "
+ "pre-allocation group optimization using Engine X fixtures."
+ )
+
+
+def pytest_collection_modifyitems(session, config, items):
+ """
+ Build pre-allocation group test counts during collection phase.
+
+ This hook analyzes all collected test items to determine how many tests
+ belong to each pre-allocation group, enabling automatic client cleanup
+ when all tests in a group are complete.
+ """
+ # Only process items for enginex simulator
+ if not hasattr(config, "_supported_fixture_formats"):
+ return
+
+ if BlockchainEngineXFixture.format_name not in config._supported_fixture_formats:
+ return
+
+ # Count tests per pre-allocation group once for both branches
+ group_counts: dict = {}
+ for item in items:
+ if hasattr(item, "callspec") and "test_case" in item.callspec.params:
+ test_case = item.callspec.params["test_case"]
+ if hasattr(test_case, "pre_hash"):
+ pre_hash = test_case.pre_hash
+ group_counts[pre_hash] = group_counts.get(pre_hash, 0) + 1
+
+ # Get the test tracker from the session if available
+ test_tracker = getattr(session, "_pre_alloc_group_test_tracker", None)
+ if test_tracker is None:
+ # Tracker is created later by the fixture; store counts on the session for now
+ session._pre_alloc_group_counts = group_counts
+ logger.info(
+ f"Collected {len(group_counts)} pre-allocation groups with tests: {group_counts}"
+ )
+ else:
+ # Update the tracker directly if it already exists
+ for pre_hash, count in group_counts.items():
+ test_tracker.set_group_test_count(pre_hash, count)
+
+ logger.info(f"Updated test tracker with {len(group_counts)} pre-allocation groups")
+
+
+@pytest.fixture(scope="function")
+def engine_rpc(client: Client, client_exception_mapper: ExceptionMapper | None) -> EngineRPC:
+ """Initialize engine RPC client for the execution client under test."""
+ if client_exception_mapper:
+ return EngineRPC(
+ f"http://{client.ip}:8551",
+ response_validation_context={
+ "exception_mapper": client_exception_mapper,
+ },
+ )
+ return EngineRPC(f"http://{client.ip}:8551")
+
+
+@pytest.fixture(scope="session")
+def fcu_frequency_tracker(request):
+ """
+ Session-scoped FCU frequency tracker for enginex simulator.
+
+ This fixture instantiates the FCUFrequencyTracker from the test_tracker
+ helper module, configured with the --enginex-fcu-frequency command line option.
+ """
+ # Import here to avoid circular imports
+ from ..helpers.test_tracker import FCUFrequencyTracker
+
+ # Get FCU frequency from pytest config (set by command line argument)
+ fcu_frequency = getattr(request.config, "enginex_fcu_frequency", 1)
+
+ tracker = FCUFrequencyTracker(fcu_frequency=fcu_frequency)
+ logger.info(f"FCU frequency tracker initialized with frequency: {fcu_frequency}")
+
+ return tracker
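The `--enginex-fcu-frequency` semantics described in the option help (0 = no FCUs, 1 = every test, N = every Nth test per pre-allocation group) live in the `FCUFrequencyTracker` helper, which is not shown in this diff. A minimal sketch of that rule, under the stated assumptions (`FCUCounter` and `should_fcu` are illustrative names):

```python
class FCUCounter:
    """Sketch of the --enginex-fcu-frequency rule: 0 disables forkchoice
    updates, 1 sends one per test, N sends one every Nth test within a
    pre-allocation group."""

    def __init__(self, fcu_frequency: int):
        self.fcu_frequency = fcu_frequency
        self.counts: dict[str, int] = {}  # tests seen per pre-allocation group

    def should_fcu(self, pre_hash: str) -> bool:
        if self.fcu_frequency == 0:
            return False
        # Count this test against its group, then fire on every Nth test.
        n = self.counts.get(pre_hash, 0) + 1
        self.counts[pre_hash] = n
        return n % self.fcu_frequency == 0
```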
diff --git a/src/pytest_plugins/consume/simulators/exceptions.py b/src/pytest_plugins/consume/simulators/exceptions.py
new file mode 100644
index 00000000000..0e8d4d63a78
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/exceptions.py
@@ -0,0 +1,91 @@
+"""Pytest plugin that defines options and fixtures for client exceptions."""
+
+from typing import List
+
+import pytest
+from hive.client import ClientType
+
+from ethereum_test_exceptions import ExceptionMapper
+from ethereum_test_fixtures import (
+ BlockchainFixtureCommon,
+)
+
+from .helpers.exceptions import EXCEPTION_MAPPERS
+
+
+def pytest_addoption(parser):
+ """Hive simulator specific consume command line options."""
+ consume_group = parser.getgroup(
+ "consume", "Arguments related to consuming fixtures via a client"
+ )
+ consume_group.addoption(
+ "--disable-strict-exception-matching",
+ action="store",
+ dest="disable_strict_exception_matching",
+ default="",
+ help=(
+ "Comma-separated list of client names and/or forks which should NOT use strict "
+ "exception matching."
+ ),
+ )
+
+
+@pytest.fixture(scope="session")
+def client_exception_mapper_cache():
+ """Cache for exception mappers by client type."""
+ return {}
+
+
+@pytest.fixture(scope="function")
+def client_exception_mapper(
+ client_type: ClientType, client_exception_mapper_cache
+) -> ExceptionMapper | None:
+ """Return the exception mapper for the client type, with caching."""
+ if client_type.name not in client_exception_mapper_cache:
+ for client in EXCEPTION_MAPPERS:
+ if client in client_type.name:
+ client_exception_mapper_cache[client_type.name] = EXCEPTION_MAPPERS[client]
+ break
+ else:
+ client_exception_mapper_cache[client_type.name] = None
+
+ return client_exception_mapper_cache[client_type.name]
+
+
+@pytest.fixture(scope="session")
+def disable_strict_exception_matching(request: pytest.FixtureRequest) -> List[str]:
+ """Return the list of clients or forks that should NOT use strict exception matching."""
+ config_string = request.config.getoption("disable_strict_exception_matching")
+ return config_string.split(",") if config_string else []
+
+
+@pytest.fixture(scope="function")
+def client_strict_exception_matching(
+ client_type: ClientType,
+ disable_strict_exception_matching: List[str],
+) -> bool:
+ """Return True if the client type should use strict exception matching."""
+ return not any(
+ client.lower() in client_type.name.lower() for client in disable_strict_exception_matching
+ )
+
+
+@pytest.fixture(scope="function")
+def fork_strict_exception_matching(
+ fixture: BlockchainFixtureCommon,
+ disable_strict_exception_matching: List[str],
+) -> bool:
+ """Return True if the fork should use strict exception matching."""
+ # NOTE: `in` makes it easier for transition forks ("Prague" in "CancunToPragueAtTime15k")
+ return not any(
+ s.lower() in str(fixture.fork).lower() for s in disable_strict_exception_matching
+ )
+
+
+@pytest.fixture(scope="function")
+def strict_exception_matching(
+ client_strict_exception_matching: bool,
+ fork_strict_exception_matching: bool,
+) -> bool:
+ """Return True if the test should use strict exception matching."""
+ return client_strict_exception_matching and fork_strict_exception_matching
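The three fixtures above compose a single decision: strict exception matching applies only when neither the client name nor the fork name matches any entry of `--disable-strict-exception-matching`, using case-insensitive substring matching so that e.g. "Prague" also covers "CancunToPragueAtTime15k". Collapsed into one function (an illustrative sketch, not plugin code):

```python
def strict_matching_enabled(
    client_name: str, fork_name: str, disabled: list[str]
) -> bool:
    """Return True if strict exception matching should be used for this
    client/fork pair, given the disable list from the command line."""
    tokens = [t.lower() for t in disabled]
    # Substring match, mirroring the fixtures above.
    client_ok = not any(t in client_name.lower() for t in tokens)
    fork_ok = not any(t in fork_name.lower() for t in tokens)
    return client_ok and fork_ok
```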
diff --git a/src/pytest_plugins/consume/simulators/helpers/__init__.py b/src/pytest_plugins/consume/simulators/helpers/__init__.py
new file mode 100644
index 00000000000..4464aa65b7c
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/helpers/__init__.py
@@ -0,0 +1 @@
+"""Helper classes and functions for consume hive simulators."""
diff --git a/src/pytest_plugins/consume/simulators/helpers/client_wrapper.py b/src/pytest_plugins/consume/simulators/helpers/client_wrapper.py
new file mode 100644
index 00000000000..e892d57d621
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/helpers/client_wrapper.py
@@ -0,0 +1,506 @@
+"""Client wrapper classes for managing client lifecycle in engine simulators."""
+
+import io
+import json
+import logging
+from abc import ABC, abstractmethod
+from pathlib import Path
+from typing import TYPE_CHECKING, Dict, Optional, cast
+
+from hive.client import Client, ClientType
+
+from ethereum_test_base_types import Number, to_json
+from ethereum_test_fixtures import BlockchainFixtureCommon
+from ethereum_test_fixtures.pre_alloc_groups import PreAllocGroup
+from ethereum_test_forks import Fork
+from pytest_plugins.consume.simulators.helpers.ruleset import ruleset
+
+if TYPE_CHECKING:
+ from .test_tracker import PreAllocGroupTestTracker
+
+logger = logging.getLogger(__name__)
+
+
+class ClientWrapper(ABC):
+ """
+ Abstract base class for managing client instances in engine simulators.
+
+ This class encapsulates the common logic for generating genesis configurations,
+ environment variables, and client files needed to start a client.
+ """
+
+ def __init__(self, client_type: ClientType):
+ """
+ Initialize the client wrapper.
+
+ Args:
+ client_type: The type of client to manage
+
+ """
+ self.client_type = client_type
+ self.client: Optional[Client] = None
+ self._is_started = False
+ self.test_count = 0
+
+ @abstractmethod
+ def _get_fork(self) -> Fork:
+ """Get the fork for this client."""
+ pass
+
+ @abstractmethod
+ def _get_chain_id(self) -> int:
+ """Get the chain ID for this client."""
+ pass
+
+ @abstractmethod
+ def _get_pre_alloc(self) -> dict:
+ """Get the pre-allocation for this client."""
+ pass
+
+ @abstractmethod
+ def _get_genesis_header(self) -> dict:
+ """Get the genesis header for this client."""
+ pass
+
+ def get_genesis_config(self) -> dict:
+ """
+ Get the genesis configuration for this client.
+
+ Returns:
+ Genesis configuration dict
+
+ """
+ # Convert genesis header to JSON format
+ genesis = self._get_genesis_header()
+
+ # Convert pre-allocation to JSON format
+ alloc = self._get_pre_alloc()
+
+ # NOTE: nethermind requires account keys without '0x' prefix
+ genesis["alloc"] = {k.replace("0x", ""): v for k, v in alloc.items()}
+
+ return genesis
+
+ def get_environment(self) -> dict:
+ """
+ Get the environment variables for this client.
+
+ Returns:
+ Environment variables dict
+
+ """
+ fork = self._get_fork()
+ chain_id = self._get_chain_id()
+
+ assert fork in ruleset, f"fork '{fork}' missing in hive ruleset"
+
+ # Set check live port for engine simulator
+ check_live_port = 8551 # Engine API port
+
+ return {
+ "HIVE_CHAIN_ID": str(Number(chain_id)),
+ "HIVE_FORK_DAO_VOTE": "1",
+ "HIVE_NODETYPE": "full",
+ "HIVE_CHECK_LIVE_PORT": str(check_live_port),
+ **{k: f"{v:d}" for k, v in ruleset[fork].items()},
+ }
+
+ def get_client_files(self) -> dict:
+ """
+ Get the client files dict needed for start_client().
+
+ Returns:
+ Dict with genesis.json file
+
+ """
+ # Create buffered genesis file
+ genesis_config = self.get_genesis_config()
+ genesis_json = json.dumps(genesis_config)
+ genesis_bytes = genesis_json.encode("utf-8")
+ buffered_genesis = io.BufferedReader(cast(io.RawIOBase, io.BytesIO(genesis_bytes)))
+
+ return {"/genesis.json": buffered_genesis}
+
+ def set_client(self, client: Client) -> None:
+ """
+ Set the client instance after it has been started.
+
+ Args:
+ client: The started client instance
+
+ """
+ if self._is_started:
+ raise RuntimeError(f"Client {self.client_type.name} is already set")
+
+ self.client = client
+ self._is_started = True
+ logger.info(f"Client ({self.client_type.name}) registered")
+
+ def increment_test_count(self) -> None:
+ """Increment the count of tests that have used this client."""
+ self.test_count += 1
+ logger.debug(f"Test count for {self.client_type.name}: {self.test_count}")
+
+ def stop(self) -> None:
+ """Mark the client as stopped."""
+ if self._is_started:
+ logger.info(
+ f"Marking client ({self.client_type.name}) as stopped after {self.test_count} "
+ "tests."
+ )
+ self.client = None
+ self._is_started = False
+
+ @property
+ def is_running(self) -> bool:
+ """Check if the client is currently running."""
+ return self._is_started and self.client is not None
+
+
+class RestartClient(ClientWrapper):
+ """
+ Client wrapper for the restart simulator where clients restart for each test.
+
+ This class manages clients that are started and stopped for each individual test,
+ providing complete isolation between test executions.
+ """
+
+ def __init__(self, client_type: ClientType, fixture: BlockchainFixtureCommon):
+ """
+ Initialize a restart client wrapper.
+
+ Args:
+ client_type: The type of client to manage
+ fixture: The blockchain fixture for this test
+
+ """
+ super().__init__(client_type)
+ self.fixture = fixture
+
+ def _get_fork(self) -> Fork:
+ """Get the fork from the fixture."""
+ return self.fixture.fork
+
+ def _get_chain_id(self) -> int:
+ """Get the chain ID from the fixture config."""
+ return self.fixture.config.chain_id
+
+ def _get_pre_alloc(self) -> dict:
+ """Get the pre-allocation from the fixture."""
+ return to_json(self.fixture.pre)
+
+ def _get_genesis_header(self) -> dict:
+ """Get the genesis header from the fixture."""
+ return to_json(self.fixture.genesis)
+
+
+class MultiTestClient(ClientWrapper):
+ """
+ Client wrapper for multi-test execution where clients are used across tests.
+
+ This class manages clients that are reused across multiple tests in the same
+ pre-allocation group.
+ """
+
+ def __init__(
+ self,
+ pre_hash: str,
+ client_type: ClientType,
+ pre_alloc_group: PreAllocGroup,
+ ):
+ """
+ Initialize a multi-test client wrapper.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+ client_type: The type of client to manage
+ pre_alloc_group: The pre-allocation group data for this group
+
+ """
+ super().__init__(client_type)
+ self.pre_hash = pre_hash
+ self.pre_alloc_group = pre_alloc_group
+
+ def _get_fork(self) -> Fork:
+ """Get the fork from the pre-allocation group."""
+ return self.pre_alloc_group.fork
+
+ def _get_chain_id(self) -> int:
+ """Get the chain ID from the pre-allocation group environment."""
+ # TODO: Environment doesn't have chain_id field - see work_in_progress.md
+ return 1
+
+ def _get_pre_alloc(self) -> dict:
+ """Get the pre-allocation from the pre-allocation group."""
+ return to_json(self.pre_alloc_group.pre)
+
+ def _get_genesis_header(self) -> dict:
+ """Get the genesis header from the pre-allocation group."""
+ return self.pre_alloc_group.genesis().model_dump(by_alias=True)
+
+ def set_client(self, client: Client) -> None:
+ """Override to log with pre_hash information."""
+ if self._is_started:
+ raise RuntimeError(f"Client for pre-allocation group {self.pre_hash} is already set")
+
+ self.client = client
+ self._is_started = True
+ logger.info(
+ f"Multi-test client ({self.client_type.name}) registered for pre-allocation group "
+ f"{self.pre_hash}"
+ )
+
+ def stop(self) -> None:
+ """Override to log with pre_hash information and actually stop the client."""
+ if self._is_started:
+ logger.info(
+ f"Stopping multi-test client ({self.client_type.name}) for pre-allocation group "
+ f"{self.pre_hash} after {self.test_count} tests"
+ )
+ # Actually stop the Hive client
+ if self.client is not None:
+ try:
+ self.client.stop()
+ logger.debug(f"Hive client stopped for pre-allocation group {self.pre_hash}")
+ except Exception as e:
+ logger.error(
+ f"Error stopping Hive client for pre-allocation group {self.pre_hash}: {e}"
+ )
+
+ self.client = None
+ self._is_started = False
+
+
+class MultiTestClientManager:
+ """
+ Singleton manager for coordinating multi-test clients across test execution.
+
+ This class tracks all multi-test clients by their preHash and ensures proper
+ lifecycle management including cleanup at session end.
+ """
+
+ _instance: Optional["MultiTestClientManager"] = None
+ _initialized: bool
+
+ def __new__(cls) -> "MultiTestClientManager":
+ """Ensure only one instance of MultiTestClientManager exists."""
+ if cls._instance is None:
+ cls._instance = super().__new__(cls)
+ cls._instance._initialized = False
+ return cls._instance
+
+ def __init__(self) -> None:
+ """Initialize the manager if not already initialized."""
+ if hasattr(self, "_initialized") and self._initialized:
+ return
+
+ self.multi_test_clients: Dict[str, MultiTestClient] = {}
+ self.pre_alloc_path: Optional[Path] = None
+ self.test_tracker: Optional["PreAllocGroupTestTracker"] = None
+ self._initialized = True
+ logger.info("MultiTestClientManager initialized")
+
+ def set_pre_alloc_path(self, path: Path) -> None:
+ """
+ Set the path to the pre-allocation files directory.
+
+ Args:
+ path: Path to the directory containing pre-allocation JSON files
+
+ """
+ self.pre_alloc_path = path
+ logger.debug(f"Pre-alloc path set to: {path}")
+
+ def set_test_tracker(self, test_tracker: "PreAllocGroupTestTracker") -> None:
+ """
+ Set the test tracker for automatic client cleanup.
+
+ Args:
+ test_tracker: The test tracker instance
+
+ """
+ self.test_tracker = test_tracker
+ logger.debug("Test tracker set for automatic client cleanup")
+
+ def load_pre_alloc_group(self, pre_hash: str) -> PreAllocGroup:
+ """
+ Load the pre-allocation group for a given preHash.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+
+ Returns:
+ The loaded PreAllocGroup
+
+ Raises:
+ RuntimeError: If pre-alloc path is not set
+ FileNotFoundError: If pre-allocation file is not found
+
+ """
+ if self.pre_alloc_path is None:
+ raise RuntimeError("Pre-alloc path not set in MultiTestClientManager")
+
+ pre_alloc_file = self.pre_alloc_path / f"{pre_hash}.json"
+ if not pre_alloc_file.exists():
+ raise FileNotFoundError(f"Pre-allocation file not found: {pre_alloc_file}")
+
+ return PreAllocGroup.model_validate_json(pre_alloc_file.read_text())
+
+ def get_or_create_multi_test_client(
+ self,
+ pre_hash: str,
+ client_type: ClientType,
+ ) -> MultiTestClient:
+ """
+ Get an existing MultiTestClient or create a new one for the given preHash.
+
+ This method doesn't start the actual client - that's done by HiveTestSuite.
+ It just manages the MultiTestClient wrapper objects.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+ client_type: The type of client that will be started
+
+ Returns:
+ The MultiTestClient wrapper instance
+
+ """
+ # Check if we already have a MultiTestClient for this preHash
+ if pre_hash in self.multi_test_clients:
+ multi_test_client = self.multi_test_clients[pre_hash]
+ if multi_test_client.is_running:
+ logger.debug(f"Found existing MultiTestClient for pre-allocation group {pre_hash}")
+ return multi_test_client
+ else:
+ # MultiTestClient exists but isn't running, remove it
+ logger.warning(
+ f"Found stopped MultiTestClient for pre-allocation group {pre_hash}, removing"
+ )
+ del self.multi_test_clients[pre_hash]
+
+ # Load the pre-allocation group for this group
+ pre_alloc_group = self.load_pre_alloc_group(pre_hash)
+
+ # Create new MultiTestClient wrapper
+ multi_test_client = MultiTestClient(
+ pre_hash=pre_hash,
+ client_type=client_type,
+ pre_alloc_group=pre_alloc_group,
+ )
+
+ # Track the MultiTestClient
+ self.multi_test_clients[pre_hash] = multi_test_client
+
+ logger.info(
+ f"Created new MultiTestClient wrapper for pre-allocation group {pre_hash} "
+ f"(total tracked clients: {len(self.multi_test_clients)})"
+ )
+
+ return multi_test_client
+
+ def get_client_for_test(
+ self, pre_hash: str, test_id: Optional[str] = None
+ ) -> Optional[Client]:
+ """
+ Get the actual client instance for a test with the given pre-allocation group hash.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+ test_id: Optional test ID for completion tracking
+
+ Returns:
+ The client instance if available, None otherwise
+
+ """
+ if pre_hash in self.multi_test_clients:
+ multi_test_client = self.multi_test_clients[pre_hash]
+ if multi_test_client.is_running:
+ multi_test_client.increment_test_count()
+ return multi_test_client.client
+ return None
+
+ def mark_test_completed(self, pre_hash: str, test_id: str) -> None:
+ """
+ Mark a test as completed and trigger automatic client cleanup if appropriate.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+ test_id: The unique test identifier
+
+ """
+ if self.test_tracker is None:
+ logger.debug("No test tracker available, skipping completion tracking")
+ return
+
+ # Mark test as completed in tracker
+ is_group_complete = self.test_tracker.mark_test_completed(pre_hash, test_id)
+
+ if is_group_complete:
+ # All tests in this pre-allocation group are complete
+ self._auto_stop_client_if_complete(pre_hash)
+
+ def _auto_stop_client_if_complete(self, pre_hash: str) -> None:
+ """
+ Automatically stop the client for a pre-allocation group if all tests are complete.
+
+ Args:
+ pre_hash: The hash identifying the pre-allocation group
+
+ """
+ if pre_hash not in self.multi_test_clients:
+ logger.debug(f"No client found for pre-allocation group {pre_hash}")
+ return
+
+ multi_test_client = self.multi_test_clients[pre_hash]
+ if not multi_test_client.is_running:
+ logger.debug(f"Client for pre-allocation group {pre_hash} is already stopped")
+ return
+
+ # Stop the client and remove from tracking
+ logger.info(
+ f"Auto-stopping client for pre-allocation group {pre_hash} - "
+ f"all tests completed ({multi_test_client.test_count} tests executed)"
+ )
+
+ try:
+ multi_test_client.stop()
+ except Exception as e:
+ logger.error(f"Error auto-stopping client for pre-allocation group {pre_hash}: {e}")
+ finally:
+ # Remove from tracking to free memory
+ del self.multi_test_clients[pre_hash]
+ logger.debug(f"Removed completed client from tracking: {pre_hash}")
+
+ def stop_all_clients(self) -> None:
+ """Stop all multi-test clients and clear tracking."""
+ logger.info(f"Stopping all {len(self.multi_test_clients)} multi-test clients")
+
+ for pre_hash, multi_test_client in list(self.multi_test_clients.items()):
+ try:
+ multi_test_client.stop()
+ except Exception as e:
+ logger.error(
+ f"Error stopping MultiTestClient for pre-allocation group {pre_hash}: {e}"
+ )
+ finally:
+ del self.multi_test_clients[pre_hash]
+
+ logger.info("All MultiTestClient wrappers cleared")
+
+ def get_client_count(self) -> int:
+ """Get the number of tracked multi-test clients."""
+ return len(self.multi_test_clients)
+
+ def get_test_counts(self) -> Dict[str, int]:
+ """Get test counts for each multi-test client."""
+ return {
+ pre_hash: client.test_count for pre_hash, client in self.multi_test_clients.items()
+ }
+
+ def reset(self) -> None:
+ """Reset the manager, clearing all state."""
+ self.stop_all_clients()
+ self.multi_test_clients.clear()
+ self.pre_alloc_path = None
+ self.test_tracker = None
+ logger.info("MultiTestClientManager reset")
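The lifecycle the manager implements above — reuse one shared client per pre-allocation group, then stop it when the group's last test completes — can be sketched without hive. `ToyClient` and `ToyManager` below are hypothetical simplifications of `MultiTestClient` and `MultiTestClientManager` (no logging, no hive wiring), shown only to illustrate the reuse-then-auto-stop pattern:

```python
class ToyClient:
    def __init__(self):
        self.running = True

    def stop(self):
        self.running = False


class ToyManager:
    def __init__(self, group_test_counts):
        self.group_test_counts = group_test_counts  # pre_hash -> total tests
        self.completed = {}                         # pre_hash -> set of test ids
        self.clients = {}                           # pre_hash -> ToyClient

    def client_for(self, pre_hash):
        # Reuse the group's client if one is tracked, otherwise start one.
        if pre_hash not in self.clients:
            self.clients[pre_hash] = ToyClient()
        return self.clients[pre_hash]

    def mark_test_completed(self, pre_hash, test_id):
        done = self.completed.setdefault(pre_hash, set())
        done.add(test_id)
        if len(done) >= self.group_test_counts[pre_hash]:
            # Last test in the group: stop and drop the shared client.
            self.clients.pop(pre_hash).stop()


manager = ToyManager({"0xaaa": 2})
c1 = manager.client_for("0xaaa")
manager.mark_test_completed("0xaaa", "test_1")
c2 = manager.client_for("0xaaa")
assert c1 is c2 and c1.running  # client reused while tests remain
manager.mark_test_completed("0xaaa", "test_2")
assert not c1.running and "0xaaa" not in manager.clients
```

As in the real manager, completion is tracked by test id, so marking the same test twice cannot prematurely stop the client.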
diff --git a/src/pytest_plugins/consume/hive_simulators/exceptions.py b/src/pytest_plugins/consume/simulators/helpers/exceptions.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/exceptions.py
rename to src/pytest_plugins/consume/simulators/helpers/exceptions.py
diff --git a/src/pytest_plugins/consume/hive_simulators/ruleset.py b/src/pytest_plugins/consume/simulators/helpers/ruleset.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/ruleset.py
rename to src/pytest_plugins/consume/simulators/helpers/ruleset.py
diff --git a/src/pytest_plugins/consume/simulators/helpers/test_tracker.py b/src/pytest_plugins/consume/simulators/helpers/test_tracker.py
new file mode 100644
index 00000000000..778d7c1b2aa
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/helpers/test_tracker.py
@@ -0,0 +1,276 @@
+"""Test tracking utilities for pre-allocation group lifecycle management."""
+
+import logging
+from dataclasses import dataclass, field
+from typing import Dict, Set
+
+import pytest
+
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class PreAllocGroupTestTracker:
+ """
+ Tracks test execution progress per pre-allocation group.
+
+ This class enables automatic client cleanup by monitoring when all tests
+ in a pre-allocation group have completed execution.
+ """
+
+ group_test_counts: Dict[str, int] = field(default_factory=dict)
+ """Total number of tests per pre-allocation group (pre_hash -> count)."""
+
+ group_completed_tests: Dict[str, Set[str]] = field(default_factory=dict)
+ """Completed test IDs per pre-allocation group (pre_hash -> {test_ids})."""
+
+ def set_group_test_count(self, pre_hash: str, total_tests: int) -> None:
+ """
+ Set the total number of tests for a pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+ total_tests: Total number of tests in this group
+
+ """
+ if pre_hash in self.group_test_counts:
+ existing_count = self.group_test_counts[pre_hash]
+ if existing_count != total_tests:
+ logger.warning(
+ f"Pre-allocation group {pre_hash} test count mismatch: "
+ f"existing={existing_count}, new={total_tests}"
+ )
+
+ self.group_test_counts[pre_hash] = total_tests
+ if pre_hash not in self.group_completed_tests:
+ self.group_completed_tests[pre_hash] = set()
+
+ logger.debug(f"Set test count for pre-allocation group {pre_hash}: {total_tests}")
+
+ def mark_test_completed(self, pre_hash: str, test_id: str) -> bool:
+ """
+ Mark a test as completed for the given pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+ test_id: The unique test identifier
+
+ Returns:
+ True if all tests in the pre-allocation group are now complete
+
+ """
+ if pre_hash not in self.group_completed_tests:
+ self.group_completed_tests[pre_hash] = set()
+
+ # Avoid double-counting the same test
+ if test_id in self.group_completed_tests[pre_hash]:
+ logger.debug(f"Test {test_id} already marked as completed for group {pre_hash}")
+ return self.is_group_complete(pre_hash)
+
+ self.group_completed_tests[pre_hash].add(test_id)
+ completed_count = len(self.group_completed_tests[pre_hash])
+ total_count = self.group_test_counts.get(pre_hash, 0)
+
+ logger.debug(
+ f"Test {test_id} completed for pre-allocation group {pre_hash} "
+ f"({completed_count}/{total_count})"
+ )
+
+ is_complete = self.is_group_complete(pre_hash)
+ if is_complete:
+ logger.info(
+ f"All tests completed for pre-allocation group {pre_hash} "
+ f"({completed_count}/{total_count}) - ready for client cleanup"
+ )
+
+ return is_complete
+
+ def is_group_complete(self, pre_hash: str) -> bool:
+ """
+ Check if all tests in a pre-allocation group have completed.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+
+ Returns:
+ True if all tests in the group are complete
+
+ """
+ if pre_hash not in self.group_test_counts:
+ logger.warning(f"No test count found for pre-allocation group {pre_hash}")
+ return False
+
+ total_count = self.group_test_counts[pre_hash]
+ completed_count = len(self.group_completed_tests.get(pre_hash, set()))
+
+ return completed_count >= total_count
+
+ def get_completion_status(self, pre_hash: str) -> tuple[int, int]:
+ """
+ Get completion status for a pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+
+ Returns:
+ Tuple of (completed_count, total_count)
+
+ """
+ total_count = self.group_test_counts.get(pre_hash, 0)
+ completed_count = len(self.group_completed_tests.get(pre_hash, set()))
+ return completed_count, total_count
+
+ def get_all_completion_status(self) -> Dict[str, tuple[int, int]]:
+ """
+ Get completion status for all tracked pre-allocation groups.
+
+ Returns:
+ Dict mapping pre_hash to (completed_count, total_count)
+
+ """
+ return {
+ pre_hash: self.get_completion_status(pre_hash) for pre_hash in self.group_test_counts
+ }
+
+ def reset_group(self, pre_hash: str) -> None:
+ """
+ Reset tracking data for a specific pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash to reset
+
+ """
+ if pre_hash in self.group_test_counts:
+ del self.group_test_counts[pre_hash]
+ if pre_hash in self.group_completed_tests:
+ del self.group_completed_tests[pre_hash]
+ logger.debug(f"Reset tracking data for pre-allocation group {pre_hash}")
+
+ def reset_all(self) -> None:
+ """Reset all tracking data."""
+ self.group_test_counts.clear()
+ self.group_completed_tests.clear()
+ logger.debug("Reset all test tracking data")
+
+
+@pytest.fixture(scope="session")
+def pre_alloc_group_test_tracker(request) -> PreAllocGroupTestTracker:
+ """
+ Session-scoped test tracker for pre-allocation group lifecycle management.
+
+ This fixture provides a centralized way to track test completion across
+ all pre-allocation groups during a pytest session.
+ """
+ tracker = PreAllocGroupTestTracker()
+
+ # Store tracker on session for access by collection hooks
+ request.session._pre_alloc_group_test_tracker = tracker
+
+ # Load pre-collected group counts if available
+ if hasattr(request.session, "_pre_alloc_group_counts"):
+ group_counts = request.session._pre_alloc_group_counts
+ for pre_hash, count in group_counts.items():
+ tracker.set_group_test_count(pre_hash, count)
+ logger.info(f"Loaded test counts for {len(group_counts)} pre-allocation groups")
+
+ logger.info("Pre-allocation group test tracker initialized")
+ return tracker
+
+
+@dataclass
+class FCUFrequencyTracker:
+ """
+ Tracks forkchoice update frequency per pre-allocation group.
+
+ This class enables controlling how often forkchoice updates are performed
+ during test execution on a per-pre-allocation-group basis.
+ """
+
+ fcu_frequency: int
+ """Frequency of FCU operations (0=disabled, 1=every test, N=every Nth test)."""
+
+ group_test_counters: Dict[str, int] = field(default_factory=dict)
+ """Test counters per pre-allocation group (pre_hash -> count)."""
+
+ def should_perform_fcu(self, pre_hash: str) -> bool:
+ """
+ Check if forkchoice update should be performed for this test.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+
+ Returns:
+ True if FCU should be performed for this test
+
+ """
+ if self.fcu_frequency == 0:
+ logger.debug(f"FCU disabled for pre-allocation group {pre_hash} (frequency=0)")
+ return False
+
+ current_count = self.group_test_counters.get(pre_hash, 0)
+ should_perform = (current_count % self.fcu_frequency) == 0
+
+ logger.debug(
+ f"FCU decision for pre-allocation group {pre_hash}: "
+ f"perform={should_perform} (test_count={current_count}, "
+ f"frequency={self.fcu_frequency})"
+ )
+
+ return should_perform
+
+ def increment_test_count(self, pre_hash: str) -> None:
+ """
+ Increment test counter for pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+
+ """
+ current_count = self.group_test_counters.get(pre_hash, 0)
+ new_count = current_count + 1
+ self.group_test_counters[pre_hash] = new_count
+
+ logger.debug(
+ f"Incremented test count for pre-allocation group {pre_hash}: "
+ f"{current_count} -> {new_count}"
+ )
+
+ def get_test_count(self, pre_hash: str) -> int:
+ """
+ Get current test count for pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash
+
+ Returns:
+ Current test count for the group
+
+ """
+ return self.group_test_counters.get(pre_hash, 0)
+
+ def get_all_test_counts(self) -> Dict[str, int]:
+ """
+ Get test counts for all tracked pre-allocation groups.
+
+ Returns:
+ Dict mapping pre_hash to test count
+
+ """
+ return dict(self.group_test_counters)
+
+ def reset_group(self, pre_hash: str) -> None:
+ """
+ Reset test counter for a specific pre-allocation group.
+
+ Args:
+ pre_hash: The pre-allocation group hash to reset
+
+ """
+ if pre_hash in self.group_test_counters:
+ del self.group_test_counters[pre_hash]
+ logger.debug(f"Reset test counter for pre-allocation group {pre_hash}")
+
+ def reset_all(self) -> None:
+ """Reset all test counters."""
+ self.group_test_counters.clear()
+ logger.debug("Reset all FCU frequency test counters")
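The modulo-based FCU gating above can be exercised independently of pytest. `FCUTracker` here is a minimal stand-in for `FCUFrequencyTracker` assuming the same semantics (0 disables FCUs entirely; N performs one every Nth test per group, counting from the group's first test); the real class additionally logs every decision:

```python
# Minimal stand-in for FCUFrequencyTracker's gating logic.
class FCUTracker:
    def __init__(self, fcu_frequency: int):
        self.fcu_frequency = fcu_frequency
        self.counters: dict[str, int] = {}

    def should_perform_fcu(self, pre_hash: str) -> bool:
        if self.fcu_frequency == 0:
            return False  # FCUs disabled entirely
        # FCU on tests 0, N, 2N, ... within each pre-allocation group.
        return self.counters.get(pre_hash, 0) % self.fcu_frequency == 0

    def increment_test_count(self, pre_hash: str) -> None:
        self.counters[pre_hash] = self.counters.get(pre_hash, 0) + 1


tracker = FCUTracker(fcu_frequency=3)
decisions = []
for _ in range(6):
    decisions.append(tracker.should_perform_fcu("0xabc"))
    tracker.increment_test_count("0xabc")
# Every third test (starting with the first) performs the forkchoice update.
assert decisions == [True, False, False, True, False, False]
```

Counters are keyed by `pre_hash`, so interleaved tests from different pre-allocation groups gate independently.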
diff --git a/src/pytest_plugins/consume/hive_simulators/timing.py b/src/pytest_plugins/consume/simulators/helpers/timing.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/timing.py
rename to src/pytest_plugins/consume/simulators/helpers/timing.py
diff --git a/src/pytest_plugins/consume/simulators/hive_tests/__init__.py b/src/pytest_plugins/consume/simulators/hive_tests/__init__.py
new file mode 100644
index 00000000000..e3ef68ee3ad
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/hive_tests/__init__.py
@@ -0,0 +1 @@
+"""Defines the Pytest test functions used by Hive Consume Simulators."""
diff --git a/src/pytest_plugins/consume/hive_simulators/engine/test_via_engine.py b/src/pytest_plugins/consume/simulators/hive_tests/test_via_engine.py
similarity index 73%
rename from src/pytest_plugins/consume/hive_simulators/engine/test_via_engine.py
rename to src/pytest_plugins/consume/simulators/hive_tests/test_via_engine.py
index 34ba6058b9c..c441cae0e05 100644
--- a/src/pytest_plugins/consume/hive_simulators/engine/test_via_engine.py
+++ b/src/pytest_plugins/consume/simulators/hive_tests/test_via_engine.py
@@ -5,14 +5,17 @@
Each `engine_newPayloadVX` is verified against the appropriate VALID/INVALID responses.
"""
+import pytest
+
from ethereum_test_exceptions import UndefinedException
-from ethereum_test_fixtures import BlockchainEngineFixture
+from ethereum_test_fixtures import BlockchainEngineFixture, BlockchainEngineXFixture
+from ethereum_test_fixtures.blockchain import FixtureHeader
from ethereum_test_rpc import EngineRPC, EthRPC
from ethereum_test_rpc.types import ForkchoiceState, JSONRPCError, PayloadStatusEnum
-from pytest_plugins.consume.hive_simulators.exceptions import GenesisBlockMismatchExceptionError
+from pytest_plugins.consume.simulators.helpers.exceptions import GenesisBlockMismatchExceptionError
from pytest_plugins.logging import get_logger
-from ..timing import TimingData
+from ..helpers.timing import TimingData
logger = get_logger(__name__)
@@ -26,25 +29,49 @@ def __init__(self, *args: object) -> None:
logger.fail(str(self))
+@pytest.mark.usefixtures("hive_test")
def test_blockchain_via_engine(
timing_data: TimingData,
eth_rpc: EthRPC,
engine_rpc: EngineRPC,
- fixture: BlockchainEngineFixture,
+ fixture: BlockchainEngineFixture | BlockchainEngineXFixture,
+ genesis_header: FixtureHeader,
strict_exception_matching: bool,
+ fcu_frequency_tracker=None, # Optional for enginex simulator
+ request=None, # For accessing test info
):
"""
- 1. Check the client genesis block hash matches `fixture.genesis.block_hash`.
+ 1. Check the client genesis block hash matches `genesis.block_hash`.
2. Execute the test case fixture blocks against the client under test using the
`engine_newPayloadVX` method from the Engine API.
- 3. For valid payloads a forkchoice update is performed to finalize the chain.
+ 3. For valid payloads a forkchoice update is performed to finalize the chain
+ (controlled by FCU frequency for enginex simulator).
"""
+ # Determine if we should perform forkchoice updates based on frequency tracker
+ should_perform_fcus = True # Default behavior for engine simulator
+ pre_hash = None
+
+ if fcu_frequency_tracker is not None and hasattr(fixture, "pre_hash"):
+ # EngineX simulator with forkchoice update frequency control
+ pre_hash = fixture.pre_hash
+ should_perform_fcus = fcu_frequency_tracker.should_perform_fcu(pre_hash)
+
+ logger.info(
+ f"Forkchoice update frequency check for pre-allocation group {pre_hash}: "
+ f"perform_fcu={should_perform_fcus} "
+ f"(frequency={fcu_frequency_tracker.fcu_frequency}, "
+ f"test_count={fcu_frequency_tracker.get_test_count(pre_hash)})"
+ )
+
+ # Always increment the test counter at the start for proper tracking
+ if fcu_frequency_tracker is not None and pre_hash is not None:
+ fcu_frequency_tracker.increment_test_count(pre_hash)
# Send an initial forkchoice update
with timing_data.time("Initial forkchoice update"):
logger.info("Sending initial forkchoice update to genesis block...")
forkchoice_response = engine_rpc.forkchoice_updated(
forkchoice_state=ForkchoiceState(
- head_block_hash=fixture.genesis.block_hash,
+ head_block_hash=genesis_header.block_hash,
),
payload_attributes=None,
version=fixture.payloads[0].forkchoice_updated_version,
@@ -58,14 +85,14 @@ def test_blockchain_via_engine(
with timing_data.time("Get genesis block"):
logger.info("Calling getBlockByNumber to get genesis block...")
- genesis_block = eth_rpc.get_block_by_number(0)
- if genesis_block["hash"] != str(fixture.genesis.block_hash):
- expected = fixture.genesis.block_hash
- got = genesis_block["hash"]
+ client_genesis_response = eth_rpc.get_block_by_number(0)
+ if client_genesis_response["hash"] != str(genesis_header.block_hash):
+ expected = genesis_header.block_hash
+ got = client_genesis_response["hash"]
logger.fail(f"Genesis block hash mismatch. Expected: {expected}, Got: {got}")
raise GenesisBlockMismatchExceptionError(
- expected_header=fixture.genesis,
- got_genesis_block=genesis_block,
+ expected_header=genesis_header,
+ got_genesis_block=client_genesis_response,
)
with timing_data.time("Payloads execution") as total_payload_timing:
@@ -136,7 +163,7 @@ def test_blockchain_via_engine(
f"Unexpected error code: {e.code}, expected: {payload.error_code}"
) from e
- if payload.valid():
+ if payload.valid() and should_perform_fcus:
with payload_timing.time(
f"engine_forkchoiceUpdatedV{payload.forkchoice_updated_version}"
):
@@ -157,4 +184,18 @@ def test_blockchain_via_engine(
f"unexpected status: want {PayloadStatusEnum.VALID},"
f" got {forkchoice_response.payload_status.status}"
)
+ elif payload.valid() and not should_perform_fcus:
+ logger.info(
+ f"Skipping forkchoice update for payload {i + 1} due to frequency setting "
+ f"(pre-allocation group: {pre_hash})"
+ )
logger.info("All payloads processed successfully.")
+
+ # Log final FCU frequency statistics for enginex simulator
+ if fcu_frequency_tracker is not None and pre_hash is not None:
+ final_count = fcu_frequency_tracker.get_test_count(pre_hash)
+ logger.info(
+ f"Test completed for pre-allocation group {pre_hash}. "
+ f"Total tests in group: {final_count}, "
+ f"FCU frequency: {fcu_frequency_tracker.fcu_frequency}"
+ )
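The payload loop's control flow — every payload is always submitted via `engine_newPayload`, while the finalizing forkchoice update is skipped when gated off — can be sketched as a small pure function. This is a hypothetical simplification of the test body above; `new_payload` and `forkchoice_update` stand in for the Engine RPC calls:

```python
def process_payloads(payloads, should_perform_fcus, new_payload, forkchoice_update):
    # Every payload is always executed via engine_newPayload; the
    # finalizing forkchoice update is only sent when gating allows it.
    for payload in payloads:
        if new_payload(payload) == "VALID" and should_perform_fcus:
            forkchoice_update(payload)


fcu_calls = []
process_payloads(
    payloads=["p1", "p2"],
    should_perform_fcus=False,
    new_payload=lambda p: "VALID",
    forkchoice_update=fcu_calls.append,
)
assert fcu_calls == []  # FCUs skipped entirely when gated off
```

Invalid payloads never trigger a forkchoice update regardless of gating, matching the `payload.valid()` guard in the test.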
diff --git a/src/pytest_plugins/consume/hive_simulators/rlp/test_via_rlp.py b/src/pytest_plugins/consume/simulators/hive_tests/test_via_rlp.py
similarity index 91%
rename from src/pytest_plugins/consume/hive_simulators/rlp/test_via_rlp.py
rename to src/pytest_plugins/consume/simulators/hive_tests/test_via_rlp.py
index 7146b74b2ed..cf3102d610d 100644
--- a/src/pytest_plugins/consume/hive_simulators/rlp/test_via_rlp.py
+++ b/src/pytest_plugins/consume/simulators/hive_tests/test_via_rlp.py
@@ -9,9 +9,9 @@
from ethereum_test_fixtures import BlockchainFixture
from ethereum_test_rpc import EthRPC
-from pytest_plugins.consume.hive_simulators.exceptions import GenesisBlockMismatchExceptionError
+from pytest_plugins.consume.simulators.helpers.exceptions import GenesisBlockMismatchExceptionError
-from ..timing import TimingData
+from ..helpers.timing import TimingData
logger = logging.getLogger(__name__)
diff --git a/src/pytest_plugins/consume/simulators/multi_test_client.py b/src/pytest_plugins/consume/simulators/multi_test_client.py
new file mode 100644
index 00000000000..0cd9fabf92f
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/multi_test_client.py
@@ -0,0 +1,175 @@
+"""Common pytest fixtures for simulators with multi-test client architecture."""
+
+import io
+import json
+import logging
+from typing import Dict, Generator, Mapping, cast
+
+import pytest
+from hive.client import Client, ClientType
+from hive.testing import HiveTestSuite
+
+from ethereum_test_base_types import to_json
+from ethereum_test_fixtures import BlockchainEngineXFixture
+from ethereum_test_fixtures.blockchain import FixtureHeader
+from ethereum_test_fixtures.pre_alloc_groups import PreAllocGroup
+from pytest_plugins.consume.consume import FixturesSource
+from pytest_plugins.consume.simulators.helpers.ruleset import (
+ ruleset, # TODO: generate dynamically
+)
+from pytest_plugins.filler.fixture_output import FixtureOutput
+
+from .helpers.client_wrapper import MultiTestClientManager
+from .helpers.timing import TimingData
+
+logger = logging.getLogger(__name__)
+
+
+@pytest.fixture(scope="session")
+def pre_alloc_group_cache() -> Dict[str, PreAllocGroup]:
+ """Cache for pre-allocation groups to avoid reloading from disk."""
+ return {}
+
+
+@pytest.fixture(scope="function")
+def pre_alloc_group(
+ fixture: BlockchainEngineXFixture,
+ fixtures_source: FixturesSource,
+ pre_alloc_group_cache: Dict[str, PreAllocGroup],
+) -> PreAllocGroup:
+ """Load the pre-allocation group for the current test case."""
+ pre_hash = fixture.pre_hash
+
+ # Check cache first
+ if pre_hash in pre_alloc_group_cache:
+ return pre_alloc_group_cache[pre_hash]
+
+ # Load from disk
+ if fixtures_source.is_stdin:
+ raise ValueError("Pre-allocation groups require file-based fixture input.")
+
+ # Look for pre-allocation group file using FixtureOutput path structure
+ fixture_output = FixtureOutput(output_path=fixtures_source.path)
+ pre_alloc_path = fixture_output.pre_alloc_groups_folder_path / f"{pre_hash}.json"
+ if not pre_alloc_path.exists():
+ raise FileNotFoundError(f"Pre-allocation group file not found: {pre_alloc_path}")
+
+ # Load and cache
+ with open(pre_alloc_path) as f:
+ pre_alloc_group_obj = PreAllocGroup.model_validate_json(f.read())
+
+ pre_alloc_group_cache[pre_hash] = pre_alloc_group_obj
+ return pre_alloc_group_obj
+
+
+def create_environment(pre_alloc_group: PreAllocGroup, check_live_port: int) -> dict:
+ """Define environment using PreAllocGroup data."""
+ fork = pre_alloc_group.fork
+ assert fork in ruleset, f"fork '{fork}' missing in hive ruleset"
+ return {
+ "HIVE_CHAIN_ID": "1", # TODO: Environment doesn't have chain_id - see work_in_progress.md
+ "HIVE_FORK_DAO_VOTE": "1",
+ "HIVE_NODETYPE": "full",
+ "HIVE_CHECK_LIVE_PORT": str(check_live_port),
+ **{k: f"{v:d}" for k, v in ruleset[fork].items()},
+ }
+
+
+def client_files(pre_alloc_group: PreAllocGroup) -> Mapping[str, io.BufferedReader]:
+ """Define the files that hive will start the client with."""
+ genesis = to_json(pre_alloc_group.genesis) # type: ignore
+ alloc = to_json(pre_alloc_group.pre)
+
+ # NOTE: nethermind requires account keys without '0x' prefix
+ genesis["alloc"] = {k.replace("0x", ""): v for k, v in alloc.items()}
+
+ genesis_json = json.dumps(genesis)
+ genesis_bytes = genesis_json.encode("utf-8")
+ buffered_genesis = io.BufferedReader(cast(io.RawIOBase, io.BytesIO(genesis_bytes)))
+
+ files = {}
+ files["/genesis.json"] = buffered_genesis
+ return files
+
+
+@pytest.fixture(scope="session")
+def multi_test_client_manager() -> Generator[MultiTestClientManager, None, None]:
+ """Provide singleton MultiTestClientManager with session cleanup."""
+ manager = MultiTestClientManager()
+ try:
+ yield manager
+ finally:
+ logger.info("Cleaning up multi-test clients at session end...")
+ manager.stop_all_clients()
+
+
+@pytest.fixture(scope="function")
+def genesis_header(pre_alloc_group: PreAllocGroup) -> FixtureHeader:
+ """Provide the genesis header from the pre-allocation group."""
+ return pre_alloc_group.genesis # type: ignore
+
+
+@pytest.fixture(scope="function")
+def client(
+ test_suite: HiveTestSuite,
+ client_type: ClientType,
+ total_timing_data: TimingData,
+ fixture: BlockchainEngineXFixture,
+ pre_alloc_group: PreAllocGroup,
+ multi_test_client_manager: MultiTestClientManager,
+ fixtures_source: FixturesSource,
+ pre_alloc_group_test_tracker,
+ request,
+) -> Generator[Client, None, None]:
+ """Initialize or reuse multi-test client for the test group."""
+ logger.info("Using multi-test client architecture for this test")
+ pre_hash = fixture.pre_hash
+ test_id = request.node.nodeid
+
+ # Set pre-alloc path in manager if not already set
+ if multi_test_client_manager.pre_alloc_path is None:
+ fixture_output = FixtureOutput(output_path=fixtures_source.path)
+ multi_test_client_manager.set_pre_alloc_path(fixture_output.pre_alloc_groups_folder_path)
+
+ # Set test tracker in manager if not already set
+ if multi_test_client_manager.test_tracker is None:
+ multi_test_client_manager.set_test_tracker(pre_alloc_group_test_tracker)
+
+ # Check for existing client
+ existing_client = multi_test_client_manager.get_client_for_test(pre_hash, test_id)
+ if existing_client is not None:
+ logger.info(f"Reusing multi-test client for pre-allocation group {pre_hash}")
+ try:
+ yield existing_client
+ finally:
+ # Mark test as completed when fixture teardown occurs
+ multi_test_client_manager.mark_test_completed(pre_hash, test_id)
+ return
+
+ # Start new multi-test client
+ logger.info(f"Starting multi-test client for pre-allocation group {pre_hash}")
+
+ with total_timing_data.time("Start multi-test client"):
+ hive_client = test_suite.start_client(
+ client_type=client_type,
+ environment=create_environment(pre_alloc_group, 8551),
+ files=client_files(pre_alloc_group),
+ )
+
+ assert hive_client is not None, (
+ f"Failed to start multi-test client for pre-allocation group {pre_hash}"
+ )
+
+ # Register with manager
+ multi_test_client = multi_test_client_manager.get_or_create_multi_test_client(
+ pre_hash=pre_hash,
+ client_type=client_type,
+ )
+ multi_test_client.set_client(hive_client)
+
+ logger.info(f"Multi-test client ready for pre-allocation group {pre_hash}")
+ try:
+ yield hive_client
+ finally:
+ # Mark test as completed when fixture teardown occurs
+ multi_test_client_manager.mark_test_completed(pre_hash, test_id)
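The genesis-file preparation used by `client_files` can be demonstrated on a plain dict: serialize the genesis with the group's allocation, rewriting account keys without the `0x` prefix for nethermind, and wrap the bytes in a `BufferedReader` for hive. `build_genesis_file` is a hypothetical standalone version of that helper (addresses shortened for readability):

```python
import io
import json


def build_genesis_file(genesis: dict, alloc: dict) -> io.BufferedReader:
    # nethermind requires account keys without the '0x' prefix.
    genesis = dict(genesis)
    genesis["alloc"] = {k.replace("0x", ""): v for k, v in alloc.items()}
    data = json.dumps(genesis).encode("utf-8")
    # BytesIO works as the underlying stream; the source casts for typing only.
    return io.BufferedReader(io.BytesIO(data))


alloc = {"0x00aa": {"balance": "0x01"}, "0x00bb": {"balance": "0x02"}}
reader = build_genesis_file({"config": {"chainId": 1}}, alloc)
loaded = json.loads(reader.read())
assert set(loaded["alloc"]) == {"00aa", "00bb"}  # prefixes stripped
```

Note the account *values* (balances, code, storage) are passed through untouched; only the keys are rewritten.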
diff --git a/src/pytest_plugins/consume/hive_simulators/rlp/__init__.py b/src/pytest_plugins/consume/simulators/rlp/__init__.py
similarity index 100%
rename from src/pytest_plugins/consume/hive_simulators/rlp/__init__.py
rename to src/pytest_plugins/consume/simulators/rlp/__init__.py
diff --git a/src/pytest_plugins/consume/hive_simulators/rlp/conftest.py b/src/pytest_plugins/consume/simulators/rlp/conftest.py
similarity index 87%
rename from src/pytest_plugins/consume/hive_simulators/rlp/conftest.py
rename to src/pytest_plugins/consume/simulators/rlp/conftest.py
index 371f1d09967..6b11ae554b0 100644
--- a/src/pytest_plugins/consume/hive_simulators/rlp/conftest.py
+++ b/src/pytest_plugins/consume/simulators/rlp/conftest.py
@@ -11,6 +11,14 @@
TestCase = TestCaseIndexFile | TestCaseStream
+pytest_plugins = (
+ "pytest_plugins.consume.simulators.base",
+ "pytest_plugins.consume.simulators.single_test_client",
+ "pytest_plugins.consume.simulators.test_case_description",
+ "pytest_plugins.consume.simulators.timing_data",
+ "pytest_plugins.consume.simulators.exceptions",
+)
+
def pytest_configure(config):
"""Set the supported fixture formats for the rlp simulator."""
diff --git a/src/pytest_plugins/consume/simulators/single_test_client.py b/src/pytest_plugins/consume/simulators/single_test_client.py
new file mode 100644
index 00000000000..4a0dbf64fab
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/single_test_client.py
@@ -0,0 +1,88 @@
+"""Common pytest fixtures for simulators with single-test client architecture."""
+
+import io
+import json
+import logging
+from typing import Generator, Literal, cast
+
+import pytest
+from hive.client import Client, ClientType
+from hive.testing import HiveTest
+
+from ethereum_test_base_types import Number, to_json
+from ethereum_test_fixtures import BlockchainFixtureCommon
+from ethereum_test_fixtures.blockchain import FixtureHeader
+from pytest_plugins.consume.simulators.helpers.ruleset import (
+ ruleset, # TODO: generate dynamically
+)
+
+from .helpers.timing import TimingData
+
+logger = logging.getLogger(__name__)
+
+
+@pytest.fixture(scope="function")
+def client_genesis(fixture: BlockchainFixtureCommon) -> dict:
+ """Convert the fixture genesis block header and pre-state to a client genesis state."""
+ genesis = to_json(fixture.genesis)
+ alloc = to_json(fixture.pre)
+ # NOTE: nethermind requires account keys without '0x' prefix
+ genesis["alloc"] = {k.replace("0x", ""): v for k, v in alloc.items()}
+ return genesis
+
+
+@pytest.fixture(scope="function")
+def environment(
+ fixture: BlockchainFixtureCommon,
+ check_live_port: Literal[8545, 8551],
+) -> dict:
+ """Define the environment that hive will start the client with."""
+ assert fixture.fork in ruleset, f"fork '{fixture.fork}' missing in hive ruleset"
+ return {
+ "HIVE_CHAIN_ID": str(Number(fixture.config.chain_id)),
+ "HIVE_FORK_DAO_VOTE": "1",
+ "HIVE_NODETYPE": "full",
+ "HIVE_CHECK_LIVE_PORT": str(check_live_port),
+ **{k: f"{v:d}" for k, v in ruleset[fixture.fork].items()},
+ }
+
+
+@pytest.fixture(scope="function")
+def buffered_genesis(client_genesis: dict) -> io.BufferedReader:
+ """Create a buffered reader for the genesis block header of the current test fixture."""
+ genesis_json = json.dumps(client_genesis)
+ genesis_bytes = genesis_json.encode("utf-8")
+ return io.BufferedReader(cast(io.RawIOBase, io.BytesIO(genesis_bytes)))
+
+
+@pytest.fixture(scope="function")
+def genesis_header(fixture: BlockchainFixtureCommon) -> FixtureHeader:
+ """Provide the genesis header from the test fixture."""
+ return fixture.genesis # type: ignore
+
+
+@pytest.fixture(scope="function")
+def client(
+ hive_test: HiveTest,
+ client_files: dict, # configured within: rlp/conftest.py & engine/conftest.py
+ environment: dict,
+ client_type: ClientType,
+ total_timing_data: TimingData,
+) -> Generator[Client, None, None]:
+ """Initialize the client with the appropriate files and environment variables."""
+ logger.info(f"Starting client ({client_type.name})...")
+ with total_timing_data.time("Start client"):
+ client = hive_test.start_client(
+ client_type=client_type, environment=environment, files=client_files
+ )
+ error_message = (
+ f"Unable to connect to the client container ({client_type.name}) via Hive during test "
+ "setup. Check the client or Hive server logs for more information."
+ )
+ assert client is not None, error_message
+ logger.info(f"Client ({client_type.name}) ready!")
+ yield client
+ logger.info(f"Stopping client ({client_type.name})...")
+ with total_timing_data.time("Stop client"):
+ client.stop()
+ logger.info(f"Client ({client_type.name}) stopped!")
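The `environment` fixture above merges fixed Hive variables with a per-fork ruleset, rendering every ruleset value as a decimal string via `f"{v:d}"`. A sketch with a hypothetical two-entry ruleset (the real `ruleset` maps every supported fork to its full set of `HIVE_FORK_*` activation numbers):

```python
# Hypothetical mini-ruleset for illustration only.
ruleset = {
    "Cancun": {"HIVE_SHANGHAI_TIMESTAMP": 0, "HIVE_CANCUN_TIMESTAMP": 0},
}


def build_environment(fork: str, chain_id: int, check_live_port: int) -> dict:
    assert fork in ruleset, f"fork '{fork}' missing in hive ruleset"
    return {
        "HIVE_CHAIN_ID": str(chain_id),
        "HIVE_FORK_DAO_VOTE": "1",
        "HIVE_NODETYPE": "full",
        "HIVE_CHECK_LIVE_PORT": str(check_live_port),
        # Ruleset values are ints; hive expects decimal strings.
        **{k: f"{v:d}" for k, v in ruleset[fork].items()},
    }


env = build_environment("Cancun", chain_id=1, check_live_port=8545)
assert env["HIVE_CHECK_LIVE_PORT"] == "8545"
assert env["HIVE_CANCUN_TIMESTAMP"] == "0"
```

The live port differs by simulator (8545 for the RLP simulator's JSON-RPC check, 8551 for the engine simulators' Engine API check), which is why it is parameterized rather than hard-coded.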
diff --git a/src/pytest_plugins/consume/simulators/test_case_description.py b/src/pytest_plugins/consume/simulators/test_case_description.py
new file mode 100644
index 00000000000..c56f24d35c5
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/test_case_description.py
@@ -0,0 +1,176 @@
+"""Pytest fixtures that help create the test case "Description" displayed in the Hive UI."""
+
+import logging
+import textwrap
+import urllib
+import warnings
+from typing import List
+
+import pytest
+from hive.client import ClientType
+
+from ethereum_test_fixtures import BaseFixture
+from ethereum_test_fixtures.consume import TestCaseIndexFile, TestCaseStream
+from pytest_plugins.pytest_hive.hive_info import ClientFile, HiveInfo
+
+logger = logging.getLogger(__name__)
+
+
+@pytest.fixture(scope="function")
+def hive_clients_yaml_target_filename() -> str:
+ """Return the name of the target clients YAML file."""
+ return "clients_eest.yaml"
+
+
+@pytest.fixture(scope="function")
+def hive_clients_yaml_generator_command(
+ client_type: ClientType,
+ client_file: ClientFile,
+ hive_clients_yaml_target_filename: str,
+ hive_info: HiveInfo,
+) -> str:
+ """Generate a shell command that creates a clients YAML file for the current client."""
+ try:
+ if not client_file:
+ raise ValueError("No client information available - try updating hive")
+ client_config = [c for c in client_file.root if c.client in client_type.name]
+ if not client_config:
+ raise ValueError(f"Client '{client_type.name}' not found in client file")
+ try:
+ yaml_content = ClientFile(root=[client_config[0]]).yaml().replace(" ", "&nbsp;")
+ return f'echo "\\\n{yaml_content}" > {hive_clients_yaml_target_filename}'
+ except Exception as e:
+ raise ValueError(f"Failed to generate YAML: {str(e)}") from e
+ except ValueError as e:
+ error_message = str(e)
+ warnings.warn(
+ f"{error_message}. The Hive clients YAML generator command will not be available.",
+ stacklevel=2,
+ )
+
+ issue_title = f"Client {client_type.name} configuration issue"
+ issue_body = f"Error: {error_message}\nHive version: {hive_info.commit}\n"
+ issue_url = f"https://github.com/ethereum/execution-spec-tests/issues/new?title={urllib.parse.quote(issue_title)}&body={urllib.parse.quote(issue_body)}"
+
+ return (
+ f"Error: {error_message}\n"
+ f'<a href="{issue_url}">Create an issue</a> to report this problem.'
+ )
+
+
+@pytest.fixture(scope="function")
+def filtered_hive_options(hive_info: HiveInfo) -> List[str]:
+ """Filter Hive command options to remove unwanted options."""
+ logger.info("Hive info: %s", hive_info.command)
+
+ unwanted_options = [
+ "--client", # gets overwritten: we specify a single client; the one from the test case
+ "--client-file", # gets overwritten: we'll write our own client file
+ "--results-root", # use default value instead (or you have to pass it to ./hiveview)
+ "--sim.limit", # gets overwritten: we only run the current test case id
+ "--sim.parallelism", # skip; we'll only be running a single test
+ ]
+
+ command_parts = []
+ skip_next = False
+ for part in hive_info.command:
+ if skip_next:
+ skip_next = False
+ continue
+
+ if part in unwanted_options:
+ skip_next = True
+ continue
+
+ if any(part.startswith(f"{option}=") for option in unwanted_options):
+ continue
+
+ command_parts.append(part)
+
+ return command_parts
+
+
+@pytest.fixture(scope="function")
+def hive_client_config_file_parameter(hive_clients_yaml_target_filename: str) -> str:
+ """Return the hive client config file parameter."""
+ return f"--client-file {hive_clients_yaml_target_filename}"
+
+
+@pytest.fixture(scope="function")
+def hive_consume_command(
+ test_case: TestCaseIndexFile | TestCaseStream,
+ hive_client_config_file_parameter: str,
+ filtered_hive_options: List[str],
+ client_type: ClientType,
+) -> str:
+ """Command to run the test within hive."""
+ command_parts = filtered_hive_options.copy()
+ command_parts.append(hive_client_config_file_parameter)
+ command_parts.append(f"--client={client_type.name}")
+ command_parts.append(f'--sim.limit="id:{test_case.id}"')
+
+ return " ".join(command_parts)
+
+
+@pytest.fixture(scope="function")
+def hive_dev_command(
+ client_type: ClientType,
+ hive_client_config_file_parameter: str,
+) -> str:
+ """Return the command used to instantiate hive alongside the `consume` command."""
+ return f"./hive --dev {hive_client_config_file_parameter} --client {client_type.name}"
+
+
+@pytest.fixture(scope="function")
+def eest_consume_command(
+ test_suite_name: str,
+ test_case: TestCaseIndexFile | TestCaseStream,
+ fixture_source_flags: List[str],
+) -> str:
+ """Commands to run the test within EEST using a hive dev back-end."""
+ flags = " ".join(fixture_source_flags)
+ return (
+ f"uv run consume {test_suite_name.split('-')[-1]} "
+ f'{flags} --sim.limit="id:{test_case.id}" -v -s'
+ )
+
+
+@pytest.fixture(scope="function")
+def test_case_description(
+ fixture: BaseFixture,
+ test_case: TestCaseIndexFile | TestCaseStream,
+ hive_clients_yaml_generator_command: str,
+ hive_consume_command: str,
+ hive_dev_command: str,
+ eest_consume_command: str,
+) -> str:
+ """Create the description of the current blockchain fixture test case."""
+ test_url = fixture.info.get("url", "")
+
+ if "description" not in fixture.info or fixture.info["description"] is None:
+ test_docstring = "No documentation available."
+ else:
+ # this prefix was included in the fixture description field for fixtures <= v4.3.0
+ test_docstring = fixture.info["description"].replace("Test function documentation:\n", "") # type: ignore
+
+ description = textwrap.dedent(f"""
+ Test Details
+ {test_case.id}
+ {f'<a href="{test_url}">[source]</a>' if test_url else ""}
+
+ {test_docstring}
+
+ Run This Test Locally:
+ To run this test in hive:
+ {hive_clients_yaml_generator_command}
+ {hive_consume_command}
+
+ Advanced: Run the test against a hive developer backend using EEST's consume command
+ Create the client YAML file, as above, then:
+ 1. Start hive in dev mode: {hive_dev_command}
+ 2. In the EEST repository root: {eest_consume_command}
+ """) # noqa: E501
+
+ description = description.strip()
+ description = description.replace("\n", "<br>")
+ return description
diff --git a/src/pytest_plugins/consume/simulators/timing_data.py b/src/pytest_plugins/consume/simulators/timing_data.py
new file mode 100644
index 00000000000..e63fedfad0b
--- /dev/null
+++ b/src/pytest_plugins/consume/simulators/timing_data.py
@@ -0,0 +1,43 @@
+"""Pytest plugin that helps measure and log timing data in Hive simulators."""
+
+from typing import Generator
+
+import pytest
+import rich
+from hive.client import Client
+
+from .helpers.timing import TimingData
+
+
+def pytest_addoption(parser):
+ """Hive simulator specific consume command line options."""
+ consume_group = parser.getgroup(
+ "consume", "Arguments related to consuming fixtures via a client"
+ )
+ consume_group.addoption(
+ "--timing-data",
+ action="store_true",
+ dest="timing_data",
+ default=False,
+ help="Log the timing data for each test case execution.",
+ )
+
+
+@pytest.fixture(scope="function", autouse=True)
+def total_timing_data(request) -> Generator[TimingData, None, None]:
+ """Record timing data for various stages of executing test case."""
+ with TimingData("Total (seconds)") as total_timing_data:
+ yield total_timing_data
+ if request.config.getoption("timing_data"):
+ rich.print(f"\n{total_timing_data.formatted()}")
+ if hasattr(request.node, "rep_call"): # make available for test reports
+ request.node.rep_call.timings = total_timing_data
+
+
+@pytest.fixture(scope="function", autouse=True)
+def timing_data(
+ total_timing_data: TimingData, client: Client
+) -> Generator[TimingData, None, None]:
+ """Record timing data for the main execution of the test case."""
+ with total_timing_data.time("Test case execution") as timing_data:
+ yield timing_data
diff --git a/src/pytest_plugins/execute/rpc/hive.py b/src/pytest_plugins/execute/rpc/hive.py
index bfb2ed28b43..c73a937c950 100644
--- a/src/pytest_plugins/execute/rpc/hive.py
+++ b/src/pytest_plugins/execute/rpc/hive.py
@@ -39,7 +39,7 @@
Withdrawal,
)
from ethereum_test_types import Requests
-from pytest_plugins.consume.hive_simulators.ruleset import ruleset
+from pytest_plugins.consume.simulators.helpers.ruleset import ruleset
class HashList(RootModel[List[Hash]]):
diff --git a/src/pytest_plugins/filler/filler.py b/src/pytest_plugins/filler/filler.py
index 9c0fae9a460..d84b0c4b2a7 100644
--- a/src/pytest_plugins/filler/filler.py
+++ b/src/pytest_plugins/filler/filler.py
@@ -26,12 +26,12 @@
from ethereum_test_base_types import Account, Address, Alloc, ReferenceSpec
from ethereum_test_fixtures import (
BaseFixture,
- BlockchainEngineReorgFixture,
+ BlockchainEngineXFixture,
FixtureCollector,
FixtureConsumer,
LabeledFixtureFormat,
- SharedPreState,
- SharedPreStateGroup,
+ PreAllocGroup,
+ PreAllocGroups,
TestInfo,
)
from ethereum_test_forks import Fork, get_transition_fork_predecessor, get_transition_forks
@@ -55,9 +55,9 @@ def calculate_post_state_diff(post_state: Alloc, genesis_state: Alloc) -> Alloc:
"""
Calculate the state difference between post_state and genesis_state.
- This function enables significant space savings in reorg fixtures by storing
+ This function enables significant space savings in Engine X fixtures by storing
only the accounts that changed during test execution, rather than the full
- post-state which may contain thousands of unchanged shared accounts.
+ post-state which may contain thousands of unchanged accounts.
Returns an Alloc containing only the accounts that:
- Changed between genesis and post state (balance, nonce, storage, code)
@@ -66,7 +66,7 @@ def calculate_post_state_diff(post_state: Alloc, genesis_state: Alloc) -> Alloc:
Args:
post_state: Final state after test execution
- genesis_state: Shared genesis pre-allocation state
+ genesis_state: Genesis pre-allocation state
Returns:
Alloc containing only the state differences for efficient storage
@@ -242,18 +242,18 @@ def pytest_addoption(parser: pytest.Parser):
),
)
test_group.addoption(
- "--generate-shared-pre",
+ "--generate-pre-alloc-groups",
action="store_true",
- dest="generate_shared_pre",
+ dest="generate_pre_alloc_groups",
default=False,
- help="Generate shared pre-allocation state (phase 1 only).",
+ help="Generate pre-allocation groups (phase 1 only).",
)
test_group.addoption(
- "--use-shared-pre",
+ "--use-pre-alloc-groups",
action="store_true",
- dest="use_shared_pre",
+ dest="use_pre_alloc_groups",
default=False,
- help="Fill tests using an existing shared pre-allocation state (phase 2 only).",
+ help="Fill tests using existing pre-allocation groups (phase 2 only).",
)
debug_group = parser.getgroup("debug", "Arguments defining debug behavior")
@@ -282,22 +282,24 @@ def pytest_sessionstart(session: pytest.Session):
"""
Initialize session-level state.
- Either initialize an empty shared pre-state container for phase 1 or
- load the shared pre-allocation state for phase 2 execution.
+ Either initialize an empty pre-allocation groups container for phase 1 or
+ load the pre-allocation groups for phase 2 execution.
"""
- # Initialize empty shared pre-state container for phase 1
- if session.config.getoption("generate_shared_pre"):
- session.config.shared_pre_state = SharedPreState(root={}) # type: ignore[attr-defined]
-
- # Load the pre-state for phase 2
- if session.config.getoption("use_shared_pre"):
- shared_pre_alloc_folder = session.config.fixture_output.shared_pre_alloc_folder_path # type: ignore[attr-defined]
- if shared_pre_alloc_folder.exists():
- session.config.shared_pre_state = SharedPreState.from_folder(shared_pre_alloc_folder) # type: ignore[attr-defined]
+ # Initialize empty pre-allocation groups container for phase 1
+ if session.config.getoption("generate_pre_alloc_groups"):
+ session.config.pre_alloc_groups = PreAllocGroups(root={}) # type: ignore[attr-defined]
+
+ # Load the pre-allocation groups for phase 2
+ if session.config.getoption("use_pre_alloc_groups"):
+ pre_alloc_groups_folder = session.config.fixture_output.pre_alloc_groups_folder_path # type: ignore[attr-defined]
+ if pre_alloc_groups_folder.exists():
+ session.config.pre_alloc_groups = PreAllocGroups.from_folder( # type: ignore[attr-defined]
+ pre_alloc_groups_folder
+ )
else:
pytest.exit(
- f"Shared pre-alloc file not found: {shared_pre_alloc_folder}. "
- "Run phase 1 with --generate-shared-alloc first.",
+ f"Pre-allocation groups folder not found: {pre_alloc_groups_folder}. "
+ "Run phase 1 with --generate-pre-alloc-groups first.",
returncode=pytest.ExitCode.USAGE_ERROR,
)
@@ -337,7 +339,7 @@ def pytest_configure(config):
if (
not config.getoption("disable_html")
and config.getoption("htmlpath") is None
- and not config.getoption("generate_shared_pre")
+ and not config.getoption("generate_pre_alloc_groups")
):
config.option.htmlpath = config.fixture_output.directory / default_html_report_file_path()
@@ -413,27 +415,27 @@ def pytest_terminal_summary(
return
stats = terminalreporter.stats
if "passed" in stats and stats["passed"]:
- # Custom message for Phase 1 (shared pre-allocation generation)
- if config.getoption("generate_shared_pre"):
+ # Custom message for Phase 1 (pre-allocation group generation)
+ if config.getoption("generate_pre_alloc_groups"):
# Generate summary stats
- shared_pre_state: SharedPreState
+ pre_alloc_groups: PreAllocGroups
if config.pluginmanager.hasplugin("xdist"):
- # Load shared pre-state from disk
- shared_pre_state = SharedPreState.from_folder(
- config.fixture_output.shared_pre_alloc_folder_path # type: ignore[attr-defined]
+ # Load pre-allocation groups from disk
+ pre_alloc_groups = PreAllocGroups.from_folder(
+ config.fixture_output.pre_alloc_groups_folder_path # type: ignore[attr-defined]
)
else:
- assert hasattr(config, "shared_pre_state")
- shared_pre_state = config.shared_pre_state # type: ignore[attr-defined]
+ assert hasattr(config, "pre_alloc_groups")
+ pre_alloc_groups = config.pre_alloc_groups # type: ignore[attr-defined]
- total_groups = len(shared_pre_state.root)
+ total_groups = len(pre_alloc_groups.root)
total_accounts = sum(
- group.pre_account_count for group in shared_pre_state.root.values()
+ group.pre_account_count for group in pre_alloc_groups.root.values()
)
terminalreporter.write_sep(
"=",
- f" Phase 1 Complete: Generated {total_groups} shared pre-allocation groups "
+ f" Phase 1 Complete: Generated {total_groups} pre-allocation groups "
f"({total_accounts} total accounts) ",
bold=True,
green=True,
@@ -877,32 +879,34 @@ def __init__(self, *args, **kwargs):
super(BaseTestWrapper, self).__init__(*args, **kwargs)
self._request = request
- # Phase 1: Generate shared pre-state
- if fixture_format is BlockchainEngineReorgFixture and request.config.getoption(
- "generate_shared_pre"
+ # Phase 1: Generate pre-allocation groups
+ if fixture_format is BlockchainEngineXFixture and request.config.getoption(
+ "generate_pre_alloc_groups"
):
- self.update_shared_pre_state(
- request.config.shared_pre_state, fork, request.node.nodeid
+ self.update_pre_alloc_groups(
+ request.config.pre_alloc_groups, fork, request.node.nodeid
)
return # Skip fixture generation in phase 1
- # Phase 2: Use shared pre-state (only for BlockchainEngineReorgFixture)
+ # Phase 2: Use pre-allocation groups (only for BlockchainEngineXFixture)
pre_alloc_hash = None
- if fixture_format is BlockchainEngineReorgFixture and request.config.getoption(
- "use_shared_pre"
+ if fixture_format is BlockchainEngineXFixture and request.config.getoption(
+ "use_pre_alloc_groups"
):
- pre_alloc_hash = self.compute_shared_pre_alloc_hash(fork=fork)
- if pre_alloc_hash not in request.config.shared_pre_state:
+ pre_alloc_hash = self.compute_pre_alloc_group_hash(fork=fork)
+ if pre_alloc_hash not in request.config.pre_alloc_groups:
pre_alloc_path = (
- request.config.fixture_output.shared_pre_alloc_folder_path
+ request.config.fixture_output.pre_alloc_groups_folder_path
/ pre_alloc_hash
)
raise ValueError(
- f"Pre-allocation hash {pre_alloc_hash} not found in shared pre-state. "
- f"Please check the shared pre-state file at: {pre_alloc_path}. "
- "Make sure phase 1 (--generate-shared-pre) was run before phase 2."
+ f"Pre-allocation hash {pre_alloc_hash} not found in "
+ f"pre-allocation groups. "
+ f"Please check the pre-allocation groups file at: {pre_alloc_path}. "
+ "Make sure phase 1 (--generate-pre-alloc-groups) was run "
+ "before phase 2."
)
- group: SharedPreStateGroup = request.config.shared_pre_state[pre_alloc_hash]
+ group: PreAllocGroup = request.config.pre_alloc_groups[pre_alloc_hash]
self.pre = group.pre
fixture = self.generate(
@@ -911,17 +915,17 @@ def __init__(self, *args, **kwargs):
fixture_format=fixture_format,
)
- # Post-process for reorg format (add pre_hash and state diff)
+ # Post-process for Engine X format (add pre_hash and state diff)
if (
- fixture_format is BlockchainEngineReorgFixture
- and request.config.getoption("use_shared_pre")
+ fixture_format is BlockchainEngineXFixture
+ and request.config.getoption("use_pre_alloc_groups")
and pre_alloc_hash is not None
):
fixture.pre_hash = pre_alloc_hash
# Calculate state diff for efficiency
if hasattr(fixture, "post_state") and fixture.post_state is not None:
- group = request.config.shared_pre_state[pre_alloc_hash]
+ group = request.config.pre_alloc_groups[pre_alloc_hash]
fixture.post_state_diff = calculate_post_state_diff(
fixture.post_state, group.pre
)
@@ -964,32 +968,34 @@ def pytest_generate_tests(metafunc: pytest.Metafunc):
"""
for test_type in BaseTest.spec_types.values():
if test_type.pytest_parameter_name() in metafunc.fixturenames:
- generate_shared_pre = metafunc.config.getoption("generate_shared_pre", False)
- use_shared_pre = metafunc.config.getoption("use_shared_pre", False)
+ generate_pre_alloc_groups = metafunc.config.getoption(
+ "generate_pre_alloc_groups", False
+ )
+ use_pre_alloc_groups = metafunc.config.getoption("use_pre_alloc_groups", False)
- if generate_shared_pre or use_shared_pre:
- # When shared alloc flags are set, only generate BlockchainEngineReorgFixture
+ if generate_pre_alloc_groups or use_pre_alloc_groups:
+ # When pre-allocation group flags are set, only generate BlockchainEngineXFixture
supported_formats = [
format_item
for format_item in test_type.supported_fixture_formats
if (
- format_item is BlockchainEngineReorgFixture
+ format_item is BlockchainEngineXFixture
or (
isinstance(format_item, LabeledFixtureFormat)
- and format_item.format is BlockchainEngineReorgFixture
+ and format_item.format is BlockchainEngineXFixture
)
)
]
else:
- # Filter out BlockchainEngineReorgFixture if shared alloc flags not set
+ # Filter out BlockchainEngineXFixture if pre-allocation group flags not set
supported_formats = [
format_item
for format_item in test_type.supported_fixture_formats
if not (
- format_item is BlockchainEngineReorgFixture
+ format_item is BlockchainEngineXFixture
or (
isinstance(format_item, LabeledFixtureFormat)
- and format_item.format is BlockchainEngineReorgFixture
+ and format_item.format is BlockchainEngineXFixture
)
)
]
@@ -1067,19 +1073,19 @@ def pytest_sessionfinish(session: pytest.Session, exitstatus: int):
"""
Perform session finish tasks.
- - Save shared pre-allocation state (phase 1)
+ - Save pre-allocation groups (phase 1)
- Remove any lock files that may have been created.
- Generate index file for all produced fixtures.
- Create tarball of the output directory if the output is a tarball.
"""
- # Save shared pre-state after phase 1
+ # Save pre-allocation groups after phase 1
fixture_output = session.config.fixture_output # type: ignore[attr-defined]
- if session.config.getoption("generate_shared_pre") and hasattr(
- session.config, "shared_pre_state"
+ if session.config.getoption("generate_pre_alloc_groups") and hasattr(
+ session.config, "pre_alloc_groups"
):
- shared_pre_alloc_folder = fixture_output.shared_pre_alloc_folder_path
- shared_pre_alloc_folder.mkdir(parents=True, exist_ok=True)
- session.config.shared_pre_state.to_folder(shared_pre_alloc_folder)
+ pre_alloc_groups_folder = fixture_output.pre_alloc_groups_folder_path
+ pre_alloc_groups_folder.mkdir(parents=True, exist_ok=True)
+ session.config.pre_alloc_groups.to_folder(pre_alloc_groups_folder)
return
if xdist.is_xdist_worker(session):
@@ -1094,7 +1100,7 @@ def pytest_sessionfinish(session: pytest.Session, exitstatus: int):
# Generate index file for all produced fixtures.
if session.config.getoption("generate_index") and not session.config.getoption(
- "generate_shared_pre"
+ "generate_pre_alloc_groups"
):
generate_fixtures_index(
fixture_output.directory, quiet_mode=True, force_flag=False, disable_infer_format=False
diff --git a/src/pytest_plugins/filler/fixture_output.py b/src/pytest_plugins/filler/fixture_output.py
index 673d3972e8b..0afb0d2e051 100644
--- a/src/pytest_plugins/filler/fixture_output.py
+++ b/src/pytest_plugins/filler/fixture_output.py
@@ -7,7 +7,7 @@
import pytest
from pydantic import BaseModel, Field
-from ethereum_test_fixtures.blockchain import BlockchainEngineReorgFixture
+from ethereum_test_fixtures.blockchain import BlockchainEngineXFixture
class FixtureOutput(BaseModel):
@@ -29,13 +29,13 @@ class FixtureOutput(BaseModel):
default=False,
description="Clean (remove) the output directory before filling fixtures.",
)
- generate_shared_pre: bool = Field(
+ generate_pre_alloc_groups: bool = Field(
default=False,
- description="Generate shared pre-allocation state (phase 1).",
+ description="Generate pre-allocation groups (phase 1).",
)
- use_shared_pre: bool = Field(
+ use_pre_alloc_groups: bool = Field(
default=False,
- description="Use existing shared pre-allocation state (phase 2).",
+ description="Use existing pre-allocation groups (phase 2).",
)
@property
@@ -62,10 +62,10 @@ def is_stdout(self) -> bool:
return self.directory.name == "stdout"
@property
- def shared_pre_alloc_folder_path(self) -> Path:
- """Return the path for shared pre-allocation state file."""
- reorg_dir = BlockchainEngineReorgFixture.output_base_dir_name()
- return self.directory / reorg_dir / "pre_alloc"
+ def pre_alloc_groups_folder_path(self) -> Path:
+ """Return the path for pre-allocation groups folder."""
+ engine_x_dir = BlockchainEngineXFixture.output_base_dir_name()
+ return self.directory / engine_x_dir / "pre_alloc"
@staticmethod
def strip_tarball_suffix(path: Path) -> Path:
@@ -86,16 +86,16 @@ def is_directory_usable_for_phase(self) -> bool:
if not self.directory.exists():
return True
- if self.generate_shared_pre:
+ if self.generate_pre_alloc_groups:
# Phase 1: Directory must be completely empty
return self.is_directory_empty()
- elif self.use_shared_pre:
- # Phase 2: Only shared alloc file must exist, no other files allowed
- if not self.shared_pre_alloc_folder_path.exists():
+ elif self.use_pre_alloc_groups:
+ # Phase 2: Only pre-allocation groups must exist, no other files allowed
+ if not self.pre_alloc_groups_folder_path.exists():
return False
- # Check that only the shared prealloc file exists
+ # Check that only the pre-allocation group files exist
existing_files = {f for f in self.directory.rglob("*") if f.is_file()}
- allowed_files = set(self.shared_pre_alloc_folder_path.rglob("*.json"))
+ allowed_files = set(self.pre_alloc_groups_folder_path.rglob("*.json"))
return existing_files == allowed_files
else:
# Normal filling: Directory must be empty
@@ -160,18 +160,18 @@ def create_directories(self, is_master: bool) -> None:
if self.directory.exists() and not self.is_directory_usable_for_phase():
summary = self.get_directory_summary()
- if self.generate_shared_pre:
+ if self.generate_pre_alloc_groups:
raise ValueError(
f"Output directory '{self.directory}' must be completely empty for "
- f"shared allocation generation (phase 1). Contains: {summary}. "
+ f"pre-allocation group generation (phase 1). Contains: {summary}. "
"Use --clean to remove all existing files."
)
- elif self.use_shared_pre:
- if not self.shared_pre_alloc_folder_path.exists():
+ elif self.use_pre_alloc_groups:
+ if not self.pre_alloc_groups_folder_path.exists():
raise ValueError(
- "Shared pre-allocation file not found at "
- f"'{self.shared_pre_alloc_folder_path}'. "
- "Run phase 1 with --generate-shared-pre first."
+ "Pre-allocation groups folder not found at "
+ f"'{self.pre_alloc_groups_folder_path}'. "
+ "Run phase 1 with --generate-pre-alloc-groups first."
)
else:
raise ValueError(
@@ -184,9 +184,9 @@ def create_directories(self, is_master: bool) -> None:
self.directory.mkdir(parents=True, exist_ok=True)
self.metadata_dir.mkdir(parents=True, exist_ok=True)
- # Create shared allocation directory for phase 1
- if self.generate_shared_pre:
- self.shared_pre_alloc_folder_path.parent.mkdir(parents=True, exist_ok=True)
+ # Create pre-allocation groups directory for phase 1
+ if self.generate_pre_alloc_groups:
+ self.pre_alloc_groups_folder_path.parent.mkdir(parents=True, exist_ok=True)
def create_tarball(self) -> None:
"""Create tarball of the output directory if configured to do so."""
@@ -207,6 +207,6 @@ def from_config(cls, config: pytest.Config) -> "FixtureOutput":
flat_output=config.getoption("flat_output"),
single_fixture_per_file=config.getoption("single_fixture_per_file"),
clean=config.getoption("clean"),
- generate_shared_pre=config.getoption("generate_shared_pre"),
- use_shared_pre=config.getoption("use_shared_pre"),
+ generate_pre_alloc_groups=config.getoption("generate_pre_alloc_groups"),
+ use_pre_alloc_groups=config.getoption("use_pre_alloc_groups"),
)
diff --git a/src/pytest_plugins/filler/pre_alloc.py b/src/pytest_plugins/filler/pre_alloc.py
index d2b26084257..f6730cdccac 100644
--- a/src/pytest_plugins/filler/pre_alloc.py
+++ b/src/pytest_plugins/filler/pre_alloc.py
@@ -305,9 +305,9 @@ def contract_address_iterator(
contract_address_increments: int,
) -> Iterator[Address]:
"""Return iterator over contract addresses with dynamic scoping."""
- if request.config.getoption("generate_shared_pre", default=False) or request.config.getoption(
- "use_shared_pre", default=False
- ):
+ if request.config.getoption(
+ "generate_pre_alloc_groups", default=False
+ ) or request.config.getoption("use_pre_alloc_groups", default=False):
# Use a starting address that is derived from the test node
contract_start_address = sha256_from_string(request.node.nodeid)
return iter(
@@ -325,9 +325,9 @@ def eoa_by_index(i: int) -> EOA:
@pytest.fixture(scope="function")
def eoa_iterator(request: pytest.FixtureRequest) -> Iterator[EOA]:
"""Return iterator over EOAs copies with dynamic scoping."""
- if request.config.getoption("generate_shared_pre", default=False) or request.config.getoption(
- "use_shared_pre", default=False
- ):
+ if request.config.getoption(
+ "generate_pre_alloc_groups", default=False
+ ) or request.config.getoption("use_pre_alloc_groups", default=False):
# Use a starting address that is derived from the test node
eoa_start_pk = sha256_from_string(request.node.nodeid)
return iter(
diff --git a/src/pytest_plugins/filler/tests/test_prealloc_group.py b/src/pytest_plugins/filler/tests/test_prealloc_group.py
index 64900c94308..22639a2bc99 100644
--- a/src/pytest_plugins/filler/tests/test_prealloc_group.py
+++ b/src/pytest_plugins/filler/tests/test_prealloc_group.py
@@ -36,7 +36,7 @@ def test_pre_alloc_group_separate():
# Create test without marker
test1 = MockTest(pre=pre, genesis_environment=env)
- hash1 = test1.compute_shared_pre_alloc_hash(fork)
+ hash1 = test1.compute_pre_alloc_group_hash(fork)
# Create test with "separate" marker
mock_request = Mock()
@@ -47,14 +47,14 @@ def test_pre_alloc_group_separate():
mock_request.node.get_closest_marker = Mock(return_value=mock_marker)
test2 = MockTest(pre=pre, genesis_environment=env, request=mock_request)
- hash2 = test2.compute_shared_pre_alloc_hash(fork)
+ hash2 = test2.compute_pre_alloc_group_hash(fork)
# Hashes should be different due to "separate" marker
assert hash1 != hash2
# Create another test without marker - should match first test
test3 = MockTest(pre=pre, genesis_environment=env)
- hash3 = test3.compute_shared_pre_alloc_hash(fork)
+ hash3 = test3.compute_pre_alloc_group_hash(fork)
assert hash1 == hash3
@@ -74,7 +74,7 @@ def test_pre_alloc_group_custom_salt():
mock_request1.node.get_closest_marker = Mock(return_value=mock_marker1)
test1 = MockTest(pre=pre, genesis_environment=env, request=mock_request1)
- hash1 = test1.compute_shared_pre_alloc_hash(fork)
+ hash1 = test1.compute_pre_alloc_group_hash(fork)
# Create another test with same custom group "eip1234"
mock_request2 = Mock()
@@ -85,7 +85,7 @@ def test_pre_alloc_group_custom_salt():
mock_request2.node.get_closest_marker = Mock(return_value=mock_marker2)
test2 = MockTest(pre=pre, genesis_environment=env, request=mock_request2)
- hash2 = test2.compute_shared_pre_alloc_hash(fork)
+ hash2 = test2.compute_pre_alloc_group_hash(fork)
# Hashes should be the same - both in "eip1234" group
assert hash1 == hash2
@@ -99,7 +99,7 @@ def test_pre_alloc_group_custom_salt():
mock_request3.node.get_closest_marker = Mock(return_value=mock_marker3)
test3 = MockTest(pre=pre, genesis_environment=env, request=mock_request3)
- hash3 = test3.compute_shared_pre_alloc_hash(fork)
+ hash3 = test3.compute_pre_alloc_group_hash(fork)
# Hash should be different - different custom group
assert hash1 != hash3
@@ -121,7 +121,7 @@ def test_pre_alloc_group_separate_different_nodeids():
mock_request1.node.get_closest_marker = Mock(return_value=mock_marker1)
test1 = MockTest(pre=pre, genesis_environment=env, request=mock_request1)
- hash1 = test1.compute_shared_pre_alloc_hash(fork)
+ hash1 = test1.compute_pre_alloc_group_hash(fork)
# Create test with "separate" and nodeid2
mock_request2 = Mock()
@@ -132,7 +132,7 @@ def test_pre_alloc_group_separate_different_nodeids():
mock_request2.node.get_closest_marker = Mock(return_value=mock_marker2)
test2 = MockTest(pre=pre, genesis_environment=env, request=mock_request2)
- hash2 = test2.compute_shared_pre_alloc_hash(fork)
+ hash2 = test2.compute_pre_alloc_group_hash(fork)
# Hashes should be different due to different nodeids
assert hash1 != hash2
@@ -151,11 +151,11 @@ def test_no_pre_alloc_group_marker():
mock_request.node.get_closest_marker = Mock(return_value=None) # No marker
test1 = MockTest(pre=pre, genesis_environment=env, request=mock_request)
- hash1 = test1.compute_shared_pre_alloc_hash(fork)
+ hash1 = test1.compute_pre_alloc_group_hash(fork)
# Create test without any request
test2 = MockTest(pre=pre, genesis_environment=env)
- hash2 = test2.compute_shared_pre_alloc_hash(fork)
+ hash2 = test2.compute_pre_alloc_group_hash(fork)
# Hashes should be the same - both have no marker
assert hash1 == hash2
@@ -177,7 +177,7 @@ def test_pre_alloc_group_with_reason():
mock_request1.node.get_closest_marker = Mock(return_value=mock_marker1)
test1 = MockTest(pre=pre, genesis_environment=env, request=mock_request1)
- hash1 = test1.compute_shared_pre_alloc_hash(fork)
+ hash1 = test1.compute_pre_alloc_group_hash(fork)
# Create another test with same group but different reason
mock_request2 = Mock()
@@ -189,7 +189,7 @@ def test_pre_alloc_group_with_reason():
mock_request2.node.get_closest_marker = Mock(return_value=mock_marker2)
test2 = MockTest(pre=pre, genesis_environment=env, request=mock_request2)
- hash2 = test2.compute_shared_pre_alloc_hash(fork)
+ hash2 = test2.compute_pre_alloc_group_hash(fork)
# Hashes should be the same - reason doesn't affect grouping
assert hash1 == hash2
diff --git a/src/pytest_plugins/forks/tests/test_forks.py b/src/pytest_plugins/forks/tests/test_forks.py
index 899c39dad74..fbd686dbe1f 100644
--- a/src/pytest_plugins/forks/tests/test_forks.py
+++ b/src/pytest_plugins/forks/tests/test_forks.py
@@ -45,7 +45,7 @@ def test_all_forks({StateTest.pytest_parameter_name()}):
fixture_format_label = fixture_format.format_name.lower()
if (
not fixture_format.supports_fork(fork)
- or "blockchain_test_engine_reorg" in fixture_format_label
+ or "blockchain_test_engine_x" in fixture_format_label
):
expected_passed -= 1
assert f":test_all_forks[fork_{fork}-{fixture_format_label}]" not in stdout
@@ -91,7 +91,7 @@ def test_all_forks({StateTest.pytest_parameter_name()}):
fixture_format_label = fixture_format.format_name.lower()
if (
not fixture_format.supports_fork(fork)
- or "blockchain_test_engine_reorg" in fixture_format_label
+ or "blockchain_test_engine_x" in fixture_format_label
):
expected_passed -= 1
assert f":test_all_forks[fork_{fork}-{fixture_format_label}]" not in stdout
@@ -137,7 +137,7 @@ def test_all_forks({StateTest.pytest_parameter_name()}):
fixture_format_label = fixture_format.format_name.lower()
if (
not fixture_format.supports_fork(fork)
- or "blockchain_test_engine_reorg" in fixture_format_label
+ or "blockchain_test_engine_x" in fixture_format_label
):
expected_passed -= 1
assert f":test_all_forks[fork_{fork}-{fixture_format_label}]" not in stdout
@@ -178,7 +178,7 @@ def test_all_forks({StateTest.pytest_parameter_name()}):
fixture_format = fixture_format.format
else:
fixture_format_label = fixture_format.format_name.lower()
- if "blockchain_test_engine_reorg" in fixture_format_label:
+ if "blockchain_test_engine_x" in fixture_format_label:
expected_passed -= 1
assert f":test_all_forks[fork_{fork}-{fixture_format_label}]" not in stdout
continue
diff --git a/uv.lock b/uv.lock
index c31a4ad4c92..8ae8726f9ad 100644
--- a/uv.lock
+++ b/uv.lock
@@ -578,7 +578,7 @@ requires-dist = [
{ name = "ethereum-types", specifier = ">=0.2.1,<0.3" },
{ name = "filelock", specifier = ">=3.15.1,<4" },
{ name = "gitpython", specifier = ">=3.1.31,<4" },
- { name = "hive-py", git = "https://github.com/marioevz/hive.py" },
+ { name = "hive-py", git = "https://github.com/marioevz/hive.py?rev=582703e2f94b4d5e61ae495d90d684852c87a580" },
{ name = "joblib", specifier = ">=1.4.2" },
{ name = "mike", marker = "extra == 'docs'", specifier = ">=1.1.2,<2" },
{ name = "mkdocs", marker = "extra == 'docs'", specifier = ">=1.4.3,<2" },
@@ -750,7 +750,7 @@ wheels = [
[[package]]
name = "hive-py"
version = "0.1.0"
-source = { git = "https://github.com/marioevz/hive.py#8874cb30904b00098bb6b696b2fd3c0f5a12e119" }
+source = { git = "https://github.com/marioevz/hive.py?rev=582703e2f94b4d5e61ae495d90d684852c87a580#582703e2f94b4d5e61ae495d90d684852c87a580" }
dependencies = [
{ name = "requests" },
]
diff --git a/whitelist.txt b/whitelist.txt
index 86e8eeaa72d..00628eaaa3e 100644
--- a/whitelist.txt
+++ b/whitelist.txt
@@ -1061,6 +1061,7 @@ Typechecking
groupstats
SharedPreStateGroup
zkEVMs
+enginex
qube
aspell
codespell