ENOMAD is a flexible evolutionary optimizer that couples global search (crossover + mutation) with local, derivative‑free refinement via the NOMAD solver, exposed through PyNomad.
Why ENOMAD?
- Derivative‑free continuous control – no action discretisation, no back‑prop through time; handles dense recurrence gracefully.
- Prior‑aware search – starts from a biological connectome (or any strong prior).
  - EA mode: many tiny edits → minimal drift.
  - rEA mode: < 50 edits → structural fidelity.
- Embarrassingly parallel – flip on Ray to scale linearly with CPU cores.
```bash
pip install ENOMAD

# Development version (clone + editable install)
git clone https://github.com/dsb-lab/ENOMAD
cd ENOMAD
pip install -e .[ray]   # optional: Ray for parallel NOMAD calls
```

Requirements: Python >= 3.9 · NumPy >= 1.23 · tqdm · PyNomad >= 0.9 · (optional: ray)
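A quick sanity check that the dependencies resolved (assumes the install above succeeded):

```python
# confirm the core dependencies import cleanly
import numpy
import tqdm
import PyNomad   # NOMAD's Python bindings
import ENOMAD
```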
Try ENOMAD instantly:

```python
import numpy as np
from ENOMAD import ENOMAD

obj = lambda x: -np.sum(x**2)  # maximise => global optimum at x = 0

opt = ENOMAD(
    "EA",                 # or "rEA"
    population_size=32,
    dimension=10,
    objective_fn=obj,
    subset_size=5,
    bounds=0.2,
    max_bb_eval=100,
    n_mutate_coords=2,
)

best_x, best_fit = opt.run(generations=100)
print(f"Best fitness: {best_fit:.4f}")
```

Full constructor signature:

```python
ENOMAD(
    optimizer_type: Literal["EA", "rEA"],
    population_size: int,
    dimension: int,
    objective_fn: Callable[[np.ndarray], float],
    subset_size: int = 20,
    bounds: float = 0.1,
    max_bb_eval: int = 200,
    n_elites: int | None = None,
    n_mutate_coords: int = 5,
    crossover_rate: float = 0.5,
    crossover_type: Literal["uniform", "fitness"] = "uniform",
    crossover_exponent: float = 1.0,
    init_pop: np.ndarray | None = None,
    init_vec: np.ndarray | None = None,
    low: float = -1.0,
    high: float = 1.0,
    use_ray: bool | None = None,
    seed: int | None = None,
)
```
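The `init_vec` and `init_pop` arguments seed the search from a strong prior. A minimal sketch of a prior‑aware, parallel run; `connectome_weights.npy` and `my_reward` are hypothetical placeholders, not shipped with the package:

```python
import numpy as np
from ENOMAD import ENOMAD

# hypothetical prior: a flattened connectome weight vector
connectome = np.load("connectome_weights.npy")

def my_reward(w):
    # hypothetical placeholder: replace with a rollout of your network
    return -float(np.sum((w - connectome) ** 2))

opt = ENOMAD(
    "rEA",                      # minimal-rewiring mode
    population_size=64,
    dimension=connectome.size,
    objective_fn=my_reward,
    init_vec=connectome,        # start the population from the prior
    use_ray=True,               # parallelise NOMAD calls across cores
    seed=42,
)
best_x, best_fit = opt.run(generations=200)
```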
ENOMAD offers two training strategies that differ only in when and how NOMAD is invoked within the evolutionary loop.

EA mode: every generation, each individual in the population is passed to NOMAD for local refinement (see the sketch after this list):

- Slice selection – Pick `subset_size` coordinates at random (≤ 49, per NOMAD’s convergence guarantees).
- Local search – Run PyNomad with a ±`bounds` hyper‑rectangle around that slice and a budget of `max_bb_eval` evaluations.
- Replacement – If the refined individual improves its fitness, it replaces the original.
- Reproduction – Select the top `n_elites` by fitness, then fill the rest of the population via fitness‑proportional crossover (probability `crossover_rate`) followed by random‑reset mutation (`n_mutate_coords` coordinates).
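A minimal sketch of one EA‑mode generation under these rules (maximisation assumed; `refine_slice` is a hypothetical stand‑in for the PyNomad call, and `ea_generation` is illustrative, not ENOMAD's internal code):

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_slice(x, idx, objective_fn, bounds, max_bb_eval):
    # Hypothetical stand-in for the PyNomad call: random search inside
    # the +/- `bounds` box around the selected coordinate slice.
    best, best_fit = x.copy(), objective_fn(x)
    for _ in range(max_bb_eval):
        cand = best.copy()
        cand[idx] = x[idx] + rng.uniform(-bounds, bounds, size=idx.size)
        fit = objective_fn(cand)
        if fit > best_fit:                      # maximisation
            best, best_fit = cand, fit
    return best, best_fit

def ea_generation(pop, fits, objective_fn, *, subset_size, bounds,
                  max_bb_eval, n_elites, n_mutate_coords,
                  crossover_rate=0.5, low=-1.0, high=1.0):
    n, dim = pop.shape
    # steps 1-3: slice selection, local search, replacement
    for i in range(n):
        idx = rng.choice(dim, size=subset_size, replace=False)
        x, fit = refine_slice(pop[i], idx, objective_fn, bounds, max_bb_eval)
        if fit > fits[i]:
            pop[i], fits[i] = x, fit
    # step 4: reproduction - elitism, fitness-proportional crossover, mutation
    order = np.argsort(fits)[::-1]
    children = [pop[j].copy() for j in order[:n_elites]]
    p = fits - fits.min() + 1e-9                # selection probabilities
    p /= p.sum()
    while len(children) < n:
        a, b = pop[rng.choice(n, size=2, p=p)]
        if rng.random() < crossover_rate:       # uniform crossover
            child = np.where(rng.random(dim) < 0.5, a, b)
        else:
            child = a.copy()
        midx = rng.choice(dim, size=n_mutate_coords, replace=False)
        child[midx] = rng.uniform(low, high, size=n_mutate_coords)  # random reset
        children.append(child)
    pop = np.stack(children)
    return pop, np.array([objective_fn(x) for x in pop])
```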
EA mode tends to make many small synaptic adjustments, keeping the overall L2 distance to the original connectome low while steadily improving reward.
rEA mode: an evolutionary mutation proposes a sparse change‑set first; NOMAD then fine‑tunes only those altered weights (sketched below):

- Mutation – Each offspring mutates a random subset of weights (usually < 50).
- Targeted NOMAD – If the diff mask is novel and < 50 coords, run PyNomad only on that mask.
- Evaluation & elitism – Update fitness, retain best individuals, proceed with crossover/mutation.
rEA mode yields comparable rewards to EA mode while changing far fewer synapses – ideal when biological plausibility demands minimal rewiring.
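A minimal sketch of the rEA‑specific step, reusing the hypothetical `refine_slice` from the EA sketch above; `rea_offspring` and the mask bookkeeping are illustrative, not ENOMAD's API:

```python
import numpy as np

rng = np.random.default_rng(1)

def rea_offspring(parent, prior, objective_fn, seen_masks, *,
                  n_mutate_coords=5, bounds=0.1, max_bb_eval=50):
    # One rEA offspring: sparse mutation, then targeted refinement of
    # only the mutated coordinates (if the diff mask is new).
    child = parent.copy()
    idx = rng.choice(parent.size, size=n_mutate_coords, replace=False)
    child[idx] += rng.uniform(-bounds, bounds, size=n_mutate_coords)
    # diff mask relative to the prior (e.g. the biological connectome)
    mask = np.flatnonzero(child != prior)
    key = tuple(mask.tolist())
    if key not in seen_masks and mask.size < 50:
        seen_masks.add(key)
        # targeted local search on the masked coordinates only
        child, _ = refine_slice(child, mask, objective_fn, bounds, max_bb_eval)
    return child
```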
Key hyper‑parameters (shared):

| name | effect |
|---|---|
| `subset_size` | # parameters NOMAD refines per call (≤ 49) |
| `bounds` | half‑width of the NOMAD search box |
| `max_bb_eval` | NOMAD evaluations per call |
| `n_mutate_coords` | coordinates reset per mutation |
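To make `bounds` concrete: for a refined slice it sets the half‑width of the per‑call search box (a sketch with hypothetical values, not library code):

```python
import numpy as np

x = np.zeros(10)              # hypothetical current individual
idx = np.array([1, 4, 7])     # hypothetical refined slice
bounds = 0.1

lb = x[idx] - bounds          # lower corner of the NOMAD search box
ub = x[idx] + bounds          # upper corner; box width is 2 * bounds
```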
```bash
pip install -e .[dev]   # includes pytest, ruff, black, etc.
pytest -q               # run smoke + reproducibility tests
```

Contributing:

- Fork + create a feature branch
- Run `pre-commit install`
- Add unit tests for new behavior
- PR + short summary of the change
MIT License — see LICENSE file
- Hi, my name is Miles. I hope you enjoy these algorithms and optimize some cool stuff using them <3
- PyNomad
- NOMAD team at Polytechnique Montréal / GERAD