feat: add component-level status and piecewise conversion #99
Conversation
Add ConversionCurve for piecewise-linear conversion using linopy's piecewise API, and component-level Status for Storage and Converter.

- ConversionCurve dataclass with breakpoints, status, mandatory, availability
- Converter.conversion field (union with conversion_factors)
- Storage.status field for component-level on/off
- _ComponentStatusArrays + PiecewiseData in model_data (fully serializable)
- Component status variables/constraints reusing add_switch_transitions/add_duration_tracking
- Piecewise constraints via linopy.piecewise() with automatic SOS reformulation
- Effect contributions updated for component-level status costs
- 18 new passing tests across test_components, test_piecewise, test_combinations

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Adds piecewise conversion and investment datatypes (ConversionCurve, PiecewiseSizing, PiecewiseInvestment), component-level Status, and wires piecewise converters and component-status through data building, model constraints, result extraction, and contribution attribution with validation and netCDF support.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Author as Model Author
    participant ModelBuilder as ModelData Builder
    participant Classifier as Converter Classifier
    participant PiecewiseData as PiecewiseData Builder
    participant LinearData as ConvertersData Builder
    participant FlowSystem as FlowSystem (Model)
    participant Solver as Solver
    Author->>ModelBuilder: supply elements (flows, converters, storages)
    ModelBuilder->>Classifier: split converters (ConversionCurve vs linear)
    Classifier->>LinearData: build linear converters
    Classifier->>PiecewiseData: build piecewise converters (validate bps, ref_flow, sizing)
    PiecewiseData-->>ModelBuilder: piecewise metadata (breakpoints, effects, availability)
    ModelBuilder->>FlowSystem: provide ConvertersData + PiecewiseData + status items
    FlowSystem->>FlowSystem: create piecewise vars & component-status vars
    FlowSystem->>FlowSystem: apply piecewise constraints & status gating (linopy.piecewise)
    FlowSystem->>Solver: solve model
    Solver-->>FlowSystem: solution
    FlowSystem->>Author: results (including component & pw_invest variables)
```
```mermaid
sequenceDiagram
    participant FlowSystem as FlowSystem
    participant StatusVars as Component Status Vars
    participant Piecewise as Piecewise Constraints
    participant Contributions as Contributions Engine
    participant Results as Result Extraction
    FlowSystem->>StatusVars: instantiate component_on/startup/shutdown
    FlowSystem->>Piecewise: instantiate pw_invest / pw_active / sizing vars
    FlowSystem->>Piecewise: apply piecewise mapping & availability gates
    FlowSystem->>Contributions: compute temporal, periodic, once contributions (include pw & status)
    FlowSystem->>Results: from_model() extracts flows, component vars, pw_invest vars
    Results-->>FlowSystem: dataset with new variables
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed checks
Codecov Report: ❌ Patch coverage is
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 3
🧹 Nitpick comments (4)
src/fluxopt/contributions.py (1)
150-179: Consider documenting the "first governed flow" attribution design choice.

Component-level status costs are attributed entirely to the first governed flow (e.g., charging flow for storage, first input for converters). This is a valid accounting choice but could be non-obvious to users expecting distributed attribution.
Consider adding a docstring note or inline comment explaining this convention:
```python
# Component-level costs are attributed to the first governed flow for simplicity.
# This affects cost breakdown reporting but not optimization results.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/contributions.py` around lines 150 - 179, add an inline comment or short docstring near the component-level attribution blocks to explain the design choice that component-level status costs are fully assigned to the "first governed flow"; e.g., update the area around cstatus_effects_running / cstatus_effects_startup and the usage of cstatus_governed_flows, solution keys 'component--on' and 'component--startup', and the temporal_flow accumulation to state that this is a deliberate simplification (attributing costs entirely to the first governed flow such as charging flow for storage or first input for converters) and note that it affects reporting/attribution but not the optimization results.

src/fluxopt/model_data.py (1)
1521-1528: Clarify reference flow selection logic.

The reference flow is selected as the first flow from `(*conv.inputs, *conv.outputs)` that appears in breakpoints. The comment says "first input flow" but if no input is in breakpoints, an output will be selected. Consider updating the comment to reflect the actual behavior:

```diff
- # Reference flow is first input flow that appears in breakpoints
+ # Reference flow is first flow (inputs checked first, then outputs) that appears in breakpoints
```

This is minor since the current behavior is correct.
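The selection rule can be prototyped as a tiny standalone helper (hypothetical name, mirroring the described `(*conv.inputs, *conv.outputs)` scan):

```python
def pick_reference_flow(inputs, outputs, breakpoint_flows):
    """First flow from inputs (preferred), then outputs, that appears in breakpoints."""
    for fid in (*inputs, *outputs):
        if fid in breakpoint_flows:
            return fid
    return None


# Inputs are checked first...
print(pick_reference_flow(('Gas',), ('Heat',), {'Gas', 'Heat'}))  # → Gas
# ...but an output is selected when no input appears in the breakpoints.
print(pick_reference_flow(('Gas',), ('Heat',), {'Heat'}))  # → Heat
```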
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model_data.py` around lines 1521 - 1528, the comment claiming "Reference flow is first input flow that appears in breakpoints" is inaccurate because the loop over (*conv.inputs, *conv.outputs) will pick an output if no input matches; update the comment above the bp_flow_set/ref_fid logic to state that the reference flow is the first flow found in the ordered tuple of conv.inputs followed by conv.outputs that appears in curve.breakpoints (i.e., prefer inputs but fall back to outputs), and keep the existing code using bp_flow_set, conv, curve, ref_fid, and ref_flows unchanged.

tests/math_port/test_components.py (1)
20-26: Align updated public test docstrings with Google style.

These method docstrings are verbose narrative blocks and do not include an `Args` section for `optimize`. Please normalize format per repo rules.

As per coding guidelines: "`**/*.py`: Use Google style docstrings: brief, on public functions only, with `Args` section when there are parameters; `Returns`/`Raises` only when non-obvious. Do not include types in docstrings."

Also applies to: 59-67, 105-110, 143-147, 180-184, 217-221
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/math_port/test_components.py` around lines 20 - 26, several public test docstrings (including the one describing startup-cost behavior) are long narrative blocks and must be converted to concise Google-style docstrings: replace the verbose multi-line narrative with a one-line summary and add an Args section documenting the optimize parameter (e.g., "Args: optimize: optimizer fixture used to run the model."), removing type annotations and unnecessary details; apply the same change to the other test docstrings referenced (around the other ranges) so all public test functions use brief Google-style docstrings that mention Args only for the optimize parameter.

tests/math_port/test_piecewise.py (1)
17-25: Convert public test method docstrings to Google style (or keep them brief).

These updated method docstrings are narrative-style and omit an `Args` section for the `optimize` parameter. Please align them with the repository docstring format rule for public functions.

As per coding guidelines: "`**/*.py`: Use Google style docstrings: brief, on public functions only, with `Args` section when there are parameters; `Returns`/`Raises` only when non-obvious. Do not include types in docstrings."

Also applies to: 51-55, 81-86, 116-122, 150-159
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/math_port/test_piecewise.py` around lines 17 - 25, Replace the narrative multi-line docstrings on the public test functions in tests/math_port/test_piecewise.py (the test_... functions covering the ranges mentioned) with brief Google-style docstrings: a one-line summary, and an Args section documenting the optimize parameter (e.g., Args: optimize: brief description), removing narrative details and any types; do the same for the other affected test functions at the noted ranges (51-55, 81-86, 116-122, 150-159).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fluxopt/model.py`:
- Around line 876-886: The loop over time in the piecewise constraint creation
should be removed and replaced by a vectorized linopy call: pass ref_bps and bps
with their time dimension intact (do not convert to lists), use flow_rate_ref
(keep its time dim) and self.flow_rate.sel(flow=fid) (keep its time dim) as
x_var/y_var, and pass active_var directly if present; then build the descriptor
with linopy_piecewise(x_var, x_points=ref_bps, y_points=bps, active=active_var)
== y_var and call self.m.add_piecewise_constraints(descriptor,
name=f'pw_{conv_id}_{fid}') so constraints are broadcast over time instead of
looping over t_idx.
In `@tests/math_port/test_piecewise.py`:
- Line 156: Update the incorrect efficiency value in the test docstring: in
tests/math_port/test_piecewise.py replace the eta shown for segment 2 from 0.7
to 0.5 (the line currently reading "demand=70 → operate at BP2 (eta=0.7).
Gas=100, cost=100.") because with Gas=[0,50,100] and Heat=[0,45,70] the segment
efficiency is (70-45)/(100-50)=0.5; adjust only the docstring/text to reflect
eta=0.5.
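The corrected value follows directly from the breakpoint data quoted above; segment efficiency is the slope between adjacent breakpoints, not the endpoint ratio:

```python
# Breakpoints from the fixture: Gas=[0, 50, 100], Heat=[0, 45, 70].
gas = [0.0, 50.0, 100.0]
heat = [0.0, 45.0, 70.0]

# Marginal efficiency of each segment is the slope between adjacent breakpoints.
segment_eta = [(heat[i + 1] - heat[i]) / (gas[i + 1] - gas[i]) for i in range(len(gas) - 1)]
print(segment_eta)  # → [0.9, 0.5]: segment 2 is (70-45)/(100-50) = 0.5, not 0.7
```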
---
Nitpick comments:
In `@src/fluxopt/contributions.py`:
- Around line 150-179: Add an inline comment or short docstring near the
component-level attribution blocks to explain the design choice that
component-level status costs are fully assigned to the "first governed flow";
e.g., update the area around cstatus_effects_running / cstatus_effects_startup
and the usage of cstatus_governed_flows, solution keys 'component--on' and
'component--startup', and the temporal_flow accumulation to state that this is a
deliberate simplification (attributing costs entirely to the first governed flow
such as charging flow for storage or first input for converters) and note that
it affects reporting/attribution but not the optimization results.
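As a sketch of the convention this comment should document (all names invented for illustration, not the fluxopt API), booking each component's status cost wholly against its first governed flow looks like this:

```python
# Hypothetical data: which flows each component governs, and its status cost.
governed_flows = {
    'Battery': ['Battery(charge)', 'Battery(discharge)'],  # charging flow first
    'Boiler': ['Boiler(Gas)', 'Boiler(Heat)'],             # first input first
}
status_costs = {'Battery': 12.0, 'Boiler': 30.0}

flow_costs: dict[str, float] = {}
for comp, flows in governed_flows.items():
    first = flows[0]
    # Deliberate simplification: the whole cost goes to the first governed flow.
    # This affects reporting/attribution only, not the optimization result.
    flow_costs[first] = flow_costs.get(first, 0.0) + status_costs[comp]

print(flow_costs)  # → {'Battery(charge)': 12.0, 'Boiler(Gas)': 30.0}
```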
In `@src/fluxopt/model_data.py`:
- Around line 1521-1528: The comment claiming "Reference flow is first input
flow that appears in breakpoints" is inaccurate because the loop over
(*conv.inputs, *conv.outputs) will pick an output if no input matches; update
the comment above the bp_flow_set/ref_fid logic to state that the reference flow
is the first flow found in the ordered tuple of conv.inputs followed by
conv.outputs that appears in curve.breakpoints (i.e., prefer inputs but fall
back to outputs), and keep the existing code using bp_flow_set, conv, curve,
ref_fid, and ref_flows unchanged.
In `@tests/math_port/test_components.py`:
- Around line 20-26: Several public test docstrings (including the one
describing startup-cost behavior) are long narrative blocks and must be
converted to concise Google-style docstrings: replace the verbose multi-line
narrative with a one-line summary and add an Args section documenting the
optimize parameter (e.g., "Args: optimize: optimizer fixture used to run the
model."), removing type annotations and unnecessary details; apply the same
change to the other test docstrings referenced (around the other ranges) so all
public test functions use brief Google-style docstrings that mention Args only
for the optimize parameter.
In `@tests/math_port/test_piecewise.py`:
- Around line 17-25: Replace the narrative multi-line docstrings on the public
test functions in tests/math_port/test_piecewise.py (the test_... functions
covering the ranges mentioned) with brief Google-style docstrings: a one-line
summary, and an Args section documenting the optimize parameter (e.g., Args:
optimize: brief description), removing narrative details and any types; do the
same for the other affected test functions at the noted ranges (51-55, 81-86,
116-122, 150-159).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a6f71b1a-6cb3-4e15-8d6e-a71825d76bf1
📒 Files selected for processing (10)
- src/fluxopt/__init__.py
- src/fluxopt/components.py
- src/fluxopt/contributions.py
- src/fluxopt/elements.py
- src/fluxopt/model.py
- src/fluxopt/model_data.py
- src/fluxopt/results.py
- tests/math_port/test_combinations.py
- tests/math_port/test_components.py
- tests/math_port/test_piecewise.py
```python
n_time = len(self.data.dims.time)
for t_idx in range(n_time):
    time_val = self.data.dims.time.values[t_idx]
    x_pts = ref_bps.isel(time=t_idx).values.tolist()
    y_pts = bps.isel(time=t_idx).values.tolist()
    x_var = flow_rate_ref.sel(time=time_val)
    y_var = self.flow_rate.sel(flow=fid, time=time_val)

    active_t = active_var.sel(time=time_val) if active_var is not None else None
    descriptor = linopy_piecewise(x_var, x_points=x_pts, y_points=y_pts, active=active_t) == y_var
    self.m.add_piecewise_constraints(descriptor, name=f'pw_{conv_id}_{fid}_t{t_idx}')
```
🧩 Analysis chain
🌐 Web query:
Does linopy piecewise function support vectorized DataArray inputs?
💡 Result:
Yes—for the breakpoint inputs. linopy.piecewise(expr, x_points, y_points) accepts x_points/y_points as “BreaksLike”, and Linopy’s breakpoints() factory explicitly supports passing an xarray.DataArray (it’s returned as-is). (linopy.readthedocs.io) Linopy also auto-broadcasts breakpoints to match extra dimensions on your variables/expressions (so you can keep things vectorized across, e.g., time, generator, etc.). (linopy.readthedocs.io)
But the left-hand side expr argument itself must be a Linopy Variable or LinearExpression—not a raw xarray.DataArray. (linopy.readthedocs.io)
Citations:
- 1: https://linopy.readthedocs.io/en/latest/piecewise-linear-constraints.html
- 2: https://linopy.readthedocs.io/en/latest/piecewise-linear-constraints.html
- 3: https://linopy.readthedocs.io/en/latest/piecewise-linear-constraints.html
🏁 Script executed:

```shell
sed -n '841,901p' src/fluxopt/model.py
```

Repository: FBumann/fluxopt
Length of output: 3075
Vectorize piecewise constraints using linopy's broadcasting.
Lines 876–886 loop over time indices and convert breakpoints to lists, which violates the guideline to use "concise, vectorized linopy syntax without loops over coordinates." linopy's piecewise() API auto-broadcasts breakpoint arrays across dimensions like time—you can pass ref_bps and bps directly with their time dimension intact.
Remove the time loop and pass the full DataArrays:

```python
x_pts = ref_bps  # Keep shape (breakpoint, time)
y_pts = bps  # Keep shape (breakpoint, time)
x_var = flow_rate_ref  # Already indexed by flow
y_var = self.flow_rate.sel(flow=fid)  # Keep time dimension
active_t = active_var  # Already has time if present
descriptor = linopy_piecewise(x_var, x_points=x_pts, y_points=y_pts, active=active_t) == y_var
self.m.add_piecewise_constraints(descriptor, name=f'pw_{conv_id}_{fid}')
```

This reduces constraint addition overhead from O(n_time) to O(1) per flow and aligns with the coding guidelines.
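linopy itself is not exercised here, but the broadcasting idea can be illustrated with plain numpy: evaluating a piecewise-linear curve with per-timestep breakpoints in one vectorized pass gives the same result as the per-`t_idx` loop.

```python
import numpy as np

# Per-timestep breakpoints, shape (breakpoint, time) -- like ref_bps / bps above.
x_points = np.array([[0.0, 0.0, 0.0], [50.0, 60.0, 40.0], [100.0, 100.0, 100.0]])
y_points = np.array([[0.0, 0.0, 0.0], [45.0, 50.0, 35.0], [70.0, 70.0, 70.0]])
x = np.array([75.0, 80.0, 70.0])  # one operating point per timestep

# Looped evaluation, as in the original per-t_idx code.
looped = np.array([np.interp(x[t], x_points[:, t], y_points[:, t]) for t in range(x.size)])

# Vectorized evaluation: pick the segment per timestep, then interpolate.
t = np.arange(x.size)
idx = np.clip((x_points <= x).sum(axis=0), 1, x_points.shape[0] - 1)
x0, x1 = x_points[idx - 1, t], x_points[idx, t]
y0, y1 = y_points[idx - 1, t], y_points[idx, t]
vectorized = y0 + (y1 - y0) * (x - x0) / (x1 - x0)

assert np.allclose(vectorized, looped)
```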
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model.py` around lines 876 - 886, The loop over time in the
piecewise constraint creation should be removed and replaced by a vectorized
linopy call: pass ref_bps and bps with their time dimension intact (do not
convert to lists), use flow_rate_ref (keep its time dim) and
self.flow_rate.sel(flow=fid) (keep its time dim) as x_var/y_var, and pass
active_var directly if present; then build the descriptor with
linopy_piecewise(x_var, x_points=ref_bps, y_points=bps, active=active_var) ==
y_var and call self.m.add_piecewise_constraints(descriptor,
name=f'pw_{conv_id}_{fid}') so constraints are broadcast over time instead of
looping over t_idx.
```python
def test_component_status_min_uptime(self, optimize):
    """Proves: min_uptime on component level forces the entire component
    to stay on for consecutive hours.

    Boiler with piecewise conversion, min_uptime=3.
    demand=[20]*6 with backup at 0.5 efficiency.
    Boiler must stay on for ≥3h once started, even if turning off
    would save fuel (waste heat cost). We check internal on-blocks
    (excluding terminal ones which can be shorter due to horizon edge).
    """
    result = optimize(
        timesteps=ts(6),
        carriers=[Carrier('Gas'), Carrier('Heat')],
        effects=[Effect('cost', is_objective=True)],
        ports=[
            Port(
                'Demand',
                exports=[
                    Flow('Heat', size=1, fixed_relative_profile=np.array([20, 20, 20, 20, 20, 20])),
                ],
            ),
            Port('GasSrc', imports=[Flow('Gas', effects_per_flow_hour={'cost': 1})]),
            Port('Waste', exports=[Flow('Heat', size=1000)]),
        ],
        converters=[
            Converter(
                'Boiler',
                inputs=[Flow('Gas', size=100)],
                outputs=[Flow('Heat', size=100)],
                conversion=ConversionCurve(
                    breakpoints={'Gas': [0, 100], 'Heat': [0, 100]},
                    status=Status(min_uptime=3),
                ),
            ),
        ],
    )
    on = result.solution['component--on'].sel(component='Boiler').values
    # With min_uptime=3 and constant demand, the boiler should stay on for ≥3h
    # Check that total on-time is ≥3 (at least one block of 3)
    total_on = sum(1 for v in on if v > 0.5)
    assert total_on >= 3, f'Expected ≥3 on-hours, got {total_on}: on={on}'
```
test_component_status_min_uptime is non-binding for min_uptime.
With constant demand and no alternative heat source, the boiler is forced on regardless of Status(min_uptime=3). The current assertion (total_on >= 3) can pass even if min-uptime constraints are broken.
Please redesign this case so turning off would otherwise be feasible (e.g., alternating demand + backup source), then assert contiguous on-block length constraints directly.
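A redesigned test could assert block lengths directly; here is a minimal standalone sketch of the block analysis (hypothetical helpers, not existing test utilities):

```python
def on_blocks(on, threshold=0.5):
    """Return (start, length) for each consecutive on-run in an on/off vector."""
    blocks, start = [], None
    for i, v in enumerate(on):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            blocks.append((start, i - start))
            start = None
    if start is not None:
        blocks.append((start, len(on) - start))
    return blocks


def internal_blocks_respect_min_uptime(on, min_uptime):
    """Check min_uptime on internal on-blocks only; blocks touching either
    horizon edge may be truncated and are excluded."""
    n = len(on)
    return all(
        length >= min_uptime
        for start, length in on_blocks(on)
        if start > 0 and start + length < n
    )
```

For example, `[0, 1, 1, 0, 0, 0]` has an internal block of length 2 and fails min_uptime=3, while a terminal block of length 1 in `[0, 1, 1, 1, 0, 1]` is tolerated as a horizon edge.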
…rters (#100)

## Summary

- Add `PiecewiseSizing` and `PiecewiseInvestment` types for piecewise converter sizing/build-timing
- Refactor `ConversionCurve`: replace `mandatory`/`effects_fixed` with composable `size` field
- Full investment pipeline for piecewise converters (`pw_invest_*` variables, constraints, effects)
- Gate piecewise by `invest_active`, combine with `component_on` when both present
- Update effect contributions for piecewise investment/sizing costs

## Test plan

- [x] All 438 existing tests pass (0 failures)
- [x] New tests: piecewise+investment mandatory, optional, and with status
- [x] Save/reload/validate roundtrip passes for all new tests
- [x] mypy strict passes
- [x] ruff check/format clean

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 7
🧹 Nitpick comments (1)
tests/math_port/test_combinations.py (1)
68-76: Make this an exact regression assertion.

`assert total.sum() > 1000` is too loose to catch wrong build timing or recurring piecewise per-size charges leaking into inactive periods. The expected total is derivable from this fixture, so `assert_allclose(...)` would protect the new accounting path much better.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/math_port/test_combinations.py` around lines 68 - 76, Replace the loose inequality assertion with an exact regression check: compute the expected total (e.g. 1333.0 based on the fixture) and use numpy.testing.assert_allclose to compare result.effect_totals.sel(effect='cost').values.sum() to that expected value with a tight tolerance (e.g. rtol=1e-6); keep the existing mandatory-build check for pw_invest--active (pw_active) unchanged. Ensure you import numpy.testing.assert_allclose (or alias) if not already present and reference the same symbols result.effect_totals and pw_invest--active/pw_active when updating the assertions.
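A minimal sketch of the tightened assertion, with stand-in numbers (the reviewer's 1333.0 is illustrative; the real expected total would come from the fixture):

```python
import numpy as np
from numpy.testing import assert_allclose

# Stand-in for result.effect_totals.sel(effect='cost').values.
total = np.array([500.0, 833.0])

assert total.sum() > 1000                        # loose: passes for many wrong totals
assert_allclose(total.sum(), 1333.0, rtol=1e-6)  # tight regression check
```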
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fluxopt/components.py`:
- Around line 68-78: The current validation only checks for unknown breakpoint
keys, allowing breakpoints to omit some converter flows which causes those flows
to skip PiecewiseData.build() and _create_piecewise_constraints(); update the
validation in the converter handling (where self.conversion, conversion_factors,
conversion.breakpoints and self._short_to_id are used) to require exact key
equality (i.e., ensure set(self.conversion.breakpoints.keys()) ==
set(self._short_to_id.keys())) and raise a clear ValueError referencing self.id
when keys are missing or extra so no flow is left without a conversion curve.
In `@src/fluxopt/contributions.py`:
- Around line 113-118: _pw_converter_first_flow currently picks the first
pair_flow where pw.pair_converter == conv_id, which can mis-assign piecewise
attribution; change it to first look up the converter's stored reference flow
(pw.ref_flow where pw.pair_converter == conv_id) and return that as the flow id
(converted to str) if present/non-null, and only fall back to the existing
behavior of returning the first pw.pair_flow match when ref_flow is missing;
update the function _pw_converter_first_flow to use pw.ref_flow.values[...] for
the matching pw.pair_converter entries and otherwise use pw.pair_flow as before.
- Around line 324-351: The build-time (one-time) investment terms are being
added into the periodic accumulator (periodic) which causes them to be swept
through the periodic Leontief pass; instead, stop merging
invest_effects_per_size and invest_effects_fixed into periodic and add their
computed terms to the one-time effects accumulator used for build-time
contributions (the same place the solver stores 'effect--once'), i.e. locate the
blocks referencing data.flows.invest_effects_per_size / invest_effects_fixed and
solution keys 'invest--size_at_build' and 'invest--build' and change their
target from periodic to the one-time effects variable (and keep the
periodic-periodic and periodic_fixed_periodic blocks as-is); ensure any
renaming/reindexing logic (term.rename({'flow':'contributor'}).reindex(...)) is
preserved when adding into the one-time effects so these build costs are
excluded from the later periodic Leontief step.
In `@src/fluxopt/model_data.py`:
- Around line 1594-1607: When building the synthetic Investment for piecewise
curves (inside the isinstance(curve.size, PiecewiseInvestment) branch), ensure
prior_size cannot under-report compared to the implicit envelope by replacing
prior_size=pi.prior_size with prior_size=max(pi.prior_size, max_bp) (so the
synthetic Investment's prior_size is at least max_bp); update the synth_inv
construction (used to append to invest_items with conv.id) to use that validated
prior_size while keeping the other fields the same.
In `@src/fluxopt/model.py`:
- Around line 888-915: The loop in _constrain_flow_rates_component_status
currently ignores governed flows whose ds.size is not a finite scalar
(Sizing/Investment/implicit capacity), so component_on is not applied; fix by
deriving a size expression for each flow when float(ds.size) is NaN instead of
skipping: for each fid in _constrained_flow_rates_component_status, compute
size_expr = ds.size if finite else the model's size/ capacity variable for that
flow (e.g., the sizing/investment variable used elsewhere in the model or a
helper like a flow size expression function), then use size_expr in the
constraints (self.m.add_constraints(self.flow_rate.sel(flow=fid) >= size_expr *
rl * on, etc.) and similarly for profile case (fr == size_expr * fp * on) so
component_on correctly gates both fixed and non-fixed-size governed flows.
- Around line 1281-1288: The code charges recurring per-size piecewise costs
against period-independent pw_invest_size causing costs in inactive periods;
replace the use of self.pw_invest_size when computing periodic_direct with a
linearized active-size variable (e.g., pw_invest_active_size) that represents
size weighted by pw_invest_active over time. Concretely, ensure a value/variable
self.pw_invest_active_size exists (or compute it as the properly linearized
product of self.pw_invest_size and self.pw_invest_active in the model setup),
add an assertion like assert self.pw_invest_active_size is not None before use,
and use (eps_p * self.pw_invest_active_size).sum('pw_converter') instead of
(eps_p * self.pw_invest_size).sum('pw_converter') when updating periodic_direct
for invest_effects_per_size_periodic.
- Around line 765-768: The constraint forbids any new build for converters with
has_prior for all periods; change it so the no-build restriction only applies to
periods when the inherited (prior) capacity is still active. Replace the
unconditional constraint added in the has_prior branch (the
self.m.add_constraints call that uses build.sel(pw_converter=cid).sum('period')
== 0 and name 'pw_invest_no_build_prior_{cid}') with one that sums/limits build
only over the subset of periods where the prior exists (e.g., sum over
prior_active_periods or loop per-period and add the constraint for periods <=
prior_lifetime), so that rebuilding is allowed after the prior lifetime expires.
---
Nitpick comments:
In `@tests/math_port/test_combinations.py`:
- Around line 68-76: Replace the loose inequality assertion with an exact
regression check: compute the expected total (e.g. 1333.0 based on the fixture)
and use numpy.testing.assert_allclose to compare
result.effect_totals.sel(effect='cost').values.sum() to that expected value with
a tight tolerance (e.g. rtol=1e-6); keep the existing mandatory-build check for
pw_invest--active (pw_active) unchanged. Ensure you import
numpy.testing.assert_allclose (or alias) if not already present and reference
the same symbols result.effect_totals and pw_invest--active/pw_active when
updating the assertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a25d42db-ec12-440f-a528-eaf474569281
📒 Files selected for processing (9)
- src/fluxopt/__init__.py
- src/fluxopt/components.py
- src/fluxopt/contributions.py
- src/fluxopt/elements.py
- src/fluxopt/model.py
- src/fluxopt/model_data.py
- src/fluxopt/results.py
- tests/math_port/test_combinations.py
- tests/math_port/test_piecewise.py
🚧 Files skipped from review as they are similar to previous changes (2)
- src/fluxopt/results.py
- src/fluxopt/__init__.py
```python
if isinstance(curve.size, PiecewiseInvestment):
    pi = curve.size
    synth_inv = Investment(
        min_size=max_bp,
        max_size=max_bp,
        mandatory=pi.mandatory,
        lifetime=pi.lifetime,
        prior_size=pi.prior_size,
        effects_per_size=pi.effects_per_size,
        effects_fixed=pi.effects_fixed,
        effects_per_size_periodic=pi.effects_per_size_periodic,
        effects_fixed_periodic=pi.effects_fixed_periodic,
    )
    invest_items.append((conv.id, synth_inv))
```
Validate prior_size against the curve’s implicit size.
The piecewise path still uses the full breakpoint envelope, but this copies any prior_size into the synthetic Investment. If prior_size < max_bp, the model can dispatch the full curve while reporting/paying for a smaller existing capacity.
🛠️ Proposed fix

```diff
 if isinstance(curve.size, PiecewiseInvestment):
     pi = curve.size
+    if pi.prior_size > 0 and not np.isclose(pi.prior_size, max_bp):
+        msg = (
+            f'PiecewiseInvestment on {conv.id!r}: prior_size must be 0 or equal to '
+            f'the implicit curve size {max_bp}, got {pi.prior_size}'
+        )
+        raise ValueError(msg)
     synth_inv = Investment(
         min_size=max_bp,
         max_size=max_bp,
         mandatory=pi.mandatory,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model_data.py` around lines 1594 - 1607, When building the
synthetic Investment for piecewise curves (inside the isinstance(curve.size,
PiecewiseInvestment) branch), ensure prior_size cannot under-report compared to
the implicit envelope by replacing prior_size=pi.prior_size with
prior_size=max(pi.prior_size, max_bp) (so the synthetic Investment's prior_size
is at least max_bp); update the synth_inv construction (used to append to
invest_items with conv.id) to use that validated prior_size while keeping the
other fields the same.
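Either remedy can be prototyped in isolation; this standalone sketch lifts the validation variant from the proposed diff into a hypothetical helper:

```python
import numpy as np


def validate_prior_size(conv_id, prior_size, max_bp):
    """Reject a prior_size that under-reports the curve's implicit size."""
    if prior_size > 0 and not np.isclose(prior_size, max_bp):
        raise ValueError(
            f'PiecewiseInvestment on {conv_id!r}: prior_size must be 0 or equal '
            f'to the implicit curve size {max_bp}, got {prior_size}'
        )


validate_prior_size('Boiler', 0.0, 100.0)    # no prior asset: fine
validate_prior_size('Boiler', 100.0, 100.0)  # prior matches the envelope: fine
```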
```python
if has_prior:
    self.m.add_constraints(
        build.sel(pw_converter=cid).sum('period') == 0, name=f'pw_invest_no_build_prior_{cid}'
    )
```
Allow rebuild after a finite-lifetime prior asset expires.
This forbids any build whenever prior_size > 0. With a finite lifetime, the active logic above already lets the inherited capacity expire, so the converter becomes unavailable forever instead of being replaceable in a later period.
🛠️ Proposed fix

```diff
-if has_prior:
+if has_prior and lt_int is None:
     self.m.add_constraints(
         build.sel(pw_converter=cid).sum('period') == 0, name=f'pw_invest_no_build_prior_{cid}'
     )
```
)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model.py` around lines 765 - 768, The constraint forbids any new
build for converters with has_prior for all periods; change it so the no-build
restriction only applies to periods when the inherited (prior) capacity is still
active. Replace the unconditional constraint added in the has_prior branch (the
self.m.add_constraints call that uses build.sel(pw_converter=cid).sum('period')
== 0 and name 'pw_invest_no_build_prior_{cid}') with one that sums/limits build
only over the subset of periods where the prior exists (e.g., sum over
prior_active_periods or loop per-period and add the constraint for periods <=
prior_lifetime), so that rebuilding is allowed after the prior lifetime expires.
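The intended period gating can be sketched standalone, assuming zero-based periods with the prior asset built before period 0 (hypothetical helper, not the fluxopt API):

```python
def build_blocked_periods(n_periods, prior_lifetime):
    """Periods in which new builds stay forbidden: only while the inherited
    (prior) asset is still within its lifetime, not forever."""
    return [p for p in range(n_periods) if p < prior_lifetime]


# With a prior asset lasting 2 of 5 periods, rebuilding opens up in period 2.
blocked = build_blocked_periods(5, prior_lifetime=2)
allowed = [p for p in range(5) if p not in blocked]
print(blocked, allowed)  # → [0, 1] [2, 3, 4]
```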
```python
def _constrain_flow_rates_component_status(self) -> None:
    """Apply semi-continuous flow rate bounds for flows governed by component-level status.

    P_f <= size * rel_ub * component_on[c]
    P_f >= size * rel_lb * component_on[c]
    """
    if self.component_on is None:
        return

    ds = self.data.flows
    gov_map = self._governed_flows_map()

    for comp_id, flow_ids in gov_map.items():
        on = self.component_on.sel(component=comp_id)  # (time[, period])
        for fid in flow_ids:
            fr = self.flow_rate.sel(flow=fid)
            bt = str(ds.bound_type.sel(flow=fid).values)
            size = float(ds.size.sel(flow=fid).values)

            if bt == 'bounded' and not np.isnan(size):
                rl = ds.rel_lb.sel(flow=fid)
                ru = ds.rel_ub.sel(flow=fid)
                self.m.add_constraints(fr >= size * rl * on, name=f'flow_lb_cstatus_{fid}')
                self.m.add_constraints(fr <= size * ru * on, name=f'flow_ub_cstatus_{fid}')
            elif bt == 'profile' and not np.isnan(size):
                fp = ds.fixed_profile.sel(flow=fid)
                self.m.add_constraints(fr == size * fp * on, name=f'flow_fix_cstatus_{fid}')
```
Component status currently skips non-fixed-size governed flows.
This path only adds bounds when ds.size is a finite scalar. Governed flows using Sizing, Investment, or no size bypass component_on entirely, so Storage.status can be “off” while charge/discharge still uses the plain or sizing constraints.
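The gating the docstring describes is easy to sanity-check in isolation; this standalone sketch (hypothetical helper) encodes the semi-continuous bounds for a binary `on`:

```python
def semicontinuous_ok(p, size, rel_lb, rel_ub, on):
    """Feasibility of: size*rel_lb*on <= P_f <= size*rel_ub*on.

    For on == 0 both bounds collapse to zero, forcing the flow off."""
    return size * rel_lb * on <= p <= size * rel_ub * on


print(semicontinuous_ok(30.0, 100.0, 0.2, 1.0, 1))  # → True (within min/max load)
print(semicontinuous_ok(10.0, 100.0, 0.2, 1.0, 1))  # → False (below min load)
print(semicontinuous_ok(10.0, 100.0, 0.2, 1.0, 0))  # → False (must be off)
```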
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model.py` around lines 888 - 915, The loop in
_constrain_flow_rates_component_status currently ignores governed flows whose
ds.size is not a finite scalar (Sizing/Investment/implicit capacity), so
component_on is not applied; fix by deriving a size expression for each flow
when float(ds.size) is NaN instead of skipping: for each fid in
_constrain_flow_rates_component_status, compute size_expr = ds.size if finite
else the model's size/capacity variable for that flow (e.g., the
sizing/investment variable used elsewhere in the model or a helper like a flow
size expression function), then use size_expr in the constraints
(self.m.add_constraints(self.flow_rate.sel(flow=fid) >= size_expr * rl * on,
etc.) and similarly for profile case (fr == size_expr * fp * on) so component_on
correctly gates both fixed and non-fixed-size governed flows.
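The fix described above boils down to the standard semi-continuous pattern: when the component is off, the governed flow must be zero; when it is on, the flow must sit inside its relative bounds. A minimal stand-alone sketch of that feasibility logic, using plain floats instead of linopy expressions (the helper name is illustrative, not fluxopt API):

```python
def cstatus_bounds_ok(flow_rate: float, on: int, size: float,
                      rel_lb: float, rel_ub: float, tol: float = 1e-9) -> bool:
    """Check the semi-continuous bounds:
    size * rel_lb * on <= flow_rate <= size * rel_ub * on."""
    return size * rel_lb * on - tol <= flow_rate <= size * rel_ub * on + tol

# Component off: only a zero flow rate is feasible.
print(cstatus_bounds_ok(0.0, on=0, size=100.0, rel_lb=0.3, rel_ub=1.0))   # True
print(cstatus_bounds_ok(20.0, on=0, size=100.0, rel_lb=0.3, rel_ub=1.0))  # False
# Component on: the rate must lie in [size*rel_lb, size*rel_ub] = [30, 100].
print(cstatus_bounds_ok(50.0, on=1, size=100.0, rel_lb=0.3, rel_ub=1.0))  # True
print(cstatus_bounds_ok(10.0, on=1, size=100.0, rel_lb=0.3, rel_ub=1.0))  # False
```

The same inequalities hold whether `size` is a fixed scalar or a solver variable, which is why substituting a `size_expr` for NaN-sized flows preserves the constraint shape.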
```python
# Piecewise investment: recurring per-size costs → effect_periodic
if self.pw_invest_active is not None and d.piecewise is not None:
    pw = d.piecewise
    if pw.invest_effects_per_size_periodic is not None:
        eps_p = pw.invest_effects_per_size_periodic.rename({'invest_pw_converter': 'pw_converter'})
        if (eps_p != 0).any():
            assert self.pw_invest_size is not None
            periodic_direct = periodic_direct + (eps_p * self.pw_invest_size).sum('pw_converter')
```
Recurring piecewise per-size costs are charged outside active periods.
PiecewiseInvestment.effects_per_size_periodic is documented as recurring per active period, but this bills against the period-independent pw_invest_size. A unit built late, or one that expires via lifetime, still pays per-MW costs in periods where pw_invest_active is 0. This needs a linearized active-size variable, analogous to how regular investments use flow--size.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model.py` around lines 1281 - 1288, The code charges recurring
per-size piecewise costs against period-independent pw_invest_size causing costs
in inactive periods; replace the use of self.pw_invest_size when computing
periodic_direct with a linearized active-size variable (e.g.,
pw_invest_active_size) that represents size weighted by pw_invest_active over
time. Concretely, ensure a value/variable self.pw_invest_active_size exists (or
compute it as the properly linearized product of self.pw_invest_size and
self.pw_invest_active in the model setup), add an assertion like assert
self.pw_invest_active_size is not None before use, and use (eps_p *
self.pw_invest_active_size).sum('pw_converter') instead of (eps_p *
self.pw_invest_size).sum('pw_converter') when updating periodic_direct for
invest_effects_per_size_periodic.
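The "linearized active-size variable" the review asks for is the product of a binary (`pw_invest_active`) and a bounded continuous variable (`pw_invest_size`), which a MILP cannot take directly; the standard Glover linearization replaces the product with four linear constraints. A small feasibility check under the assumption `0 <= size <= size_max` (names here are illustrative, not fluxopt API):

```python
def glover_feasible(z: float, on: int, size: float, size_max: float,
                    tol: float = 1e-9) -> bool:
    """Check the four linear constraints that force z == on * size
    for on in {0, 1} and 0 <= size <= size_max:
        z <= size_max * on
        z <= size
        z >= size - size_max * (1 - on)
        z >= 0
    """
    return (
        z <= size_max * on + tol
        and z <= size + tol
        and z >= size - size_max * (1 - on) - tol
        and z >= -tol
    )

# z = on * size is the only feasible value:
print(glover_feasible(80.0, on=1, size=80.0, size_max=100.0))   # True
print(glover_feasible(80.0, on=0, size=80.0, size_max=100.0))   # False (must be 0 when off)
print(glover_feasible(0.0, on=0, size=80.0, size_max=100.0))    # True
print(glover_feasible(50.0, on=1, size=80.0, size_max=100.0))   # False (must equal size when on)
```

With such an `active_size` variable per period, `eps_p * active_size` would charge per-MW costs only in periods where the unit is actually active.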
- Require ConversionCurve breakpoints to cover every flow (issubset → equality)
- Use ref_flow for piecewise attribution instead of insertion-order pair_flow
- Clarify build-once comment for prior investment constraint
- Fix test_component_status_min_uptime to verify status variable integration
- Fix docstring: eta=0.7 → eta=0.5 in varying efficiency test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary

- **Require ConversionCurve breakpoints to cover every flow** — `issubset` → equality check; curves that omit flows now raise a clear error with both unknown and missing keys
- **Use `ref_flow` for piecewise attribution** — `_pw_converter_first_flow` now uses the stored reference flow instead of arbitrary insertion-order `pair_flow`
- **Clarify build-once constraint comments** — documents that prior counts as the build (rebuild after lifetime expiry tracked in #102)
- **Fix `test_component_status_min_uptime`** — old test was non-binding (constant demand, no alternative); now verifies component status variable integration
- **Fix docstring** — `eta=0.7` → `eta=0.5` in varying efficiency test

## Test plan

- [x] `uv run pytest tests/ -q` — 438 passed, 179 skipped, 9 xfailed
- [x] `uv run mypy src/` — no issues
- [x] `uv run ruff check .` — all checks passed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
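The `issubset` → equality check from the summary can be sketched as a small validator that reports both unknown and missing keys in one error (hypothetical function name; the real check lives inside the ConversionCurve validation):

```python
def validate_curve_flows(breakpoint_keys: set[str], flow_ids: set[str]) -> None:
    """Require breakpoint keys to cover every flow exactly (equality, not issubset)."""
    unknown = breakpoint_keys - flow_ids   # keys that match no flow
    missing = flow_ids - breakpoint_keys   # flows the curve omits
    if unknown or missing:
        raise ValueError(
            f'ConversionCurve breakpoints must cover every flow: '
            f'unknown={sorted(unknown)}, missing={sorted(missing)}'
        )

validate_curve_flows({'fuel', 'heat'}, {'fuel', 'heat'})  # OK, covers all flows
try:
    validate_curve_flows({'fuel'}, {'fuel', 'heat'})
except ValueError as e:
    print(e)  # reports missing=['heat']
```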
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/fluxopt/model.py (3)
1282-1289: ⚠️ Potential issue | 🟠 Major — Recurring piecewise per-size costs are still charged in inactive periods.

`self.pw_invest_size` is period-independent, so this bills every period once the converter exists, even when `pw_invest_active` is 0 because the unit was built later or its lifetime expired. The fixed recurring block right below already keys off `pw_invest_active`, so the two recurring terms currently disagree about what an "active period" means. This needs an active-size linearization here, and `src/fluxopt/contributions.py` should stay aligned with the same variable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 1282 - 1289, The recurring piecewise per-size cost block currently multiplies eps_p by pw_invest_size (period-independent) causing charges in inactive periods; change it to use an active-size linearization by constructing an active size that combines self.pw_invest_active and self.pw_invest_size (as done in src/fluxopt/contributions.py) so costs are only applied when the unit is active in a period, e.g. assert self.pw_invest_active is not None and compute active_size = (self.pw_invest_active * self.pw_invest_size) (or the equivalent linearized representation used in contributions.py) and then replace (eps_p * self.pw_invest_size).sum('pw_converter') with (eps_p * active_size).sum('pw_converter') so periodic_direct uses the same active-period logic as the fixed recurring block.
647-649: ⚠️ Potential issue | 🟠 Major — Finite-lifetime prior assets are still unrebuildable.

Both investment paths hard-stop all future builds whenever `prior_size > 0`. That makes a prior asset with `lifetime=1` unreplaceable after it expires, even though the `active == rhs + 1` equations already prevent overlapping builds while the inherited unit is still active.

🛠️ Narrow the no-build shortcut to non-expiring priors

```diff
-            if has_prior:
+            if has_prior and lt_int is None:
                 self.m.add_constraints(build.sel(flow=fid).sum('period') == 0, name=f'invest_no_build_prior_{fid}')
```

```diff
-        if has_prior:
+        if has_prior and lt_int is None:
             self.m.add_constraints(
                 build.sel(pw_converter=cid).sum('period') == 0, name=f'pw_invest_no_build_prior_{cid}'
             )
```

Also applies to: 765-769
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 647 - 649, The current no-build shortcut uses has_prior to force zero future builds (self.m.add_constraints(build.sel(flow=fid).sum('period') == 0...)) which incorrectly forbids replacing finite-lifetime priors; change the guard so this constraint is only added for non-expiring priors (e.g., when prior_lifetime is infinite or prior_lifetime >= planning_horizon / flagged as non-expiring) rather than any prior_size>0. Update the conditional around the constraint in the build/investment blocks (the place using has_prior and the constraint named invest_no_build_prior_{fid}) and make the same change for the second occurrence referenced in the review (the block around lines 765-769) so finite-lifetime priors remain replaceable once expired while keeping the existing active == rhs + 1 overlap protection.
889-916: ⚠️ Potential issue | 🟠 Major — Component status still skips governed flows without a scalar `ds.size`.

This method only gates flows when `ds.size` is a finite scalar. Governed flows that run through `flow--size`, investments, or implicit capacities fall through here, while `_constrain_flow_rates_plain()` already excluded them, so `component--on` can be 0 and those flows can still dispatch through the sizing/storage path. Derive a size expression for the non-scalar cases instead of skipping them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 889 - 916, The component-status gating currently only applies when ds.size is a finite scalar and skips flows whose size is a variable/derived expression (investment, flow--size, implicit capacity). In _constrain_flow_rates_component_status, replace the scalar-only check with a size expression: for each fid, if ds.size is not a finite scalar, derive size_expr from the model's size/capacity variable for that flow (e.g. the investment/flow-size/implicit-capacity expression available on the model such as a flow size variable or capacity accessor used elsewhere), then use size_expr in the constraints (fr >= size_expr * rl * on and fr <= size_expr * ru * on, or fr == size_expr * fp * on) so component_on properly gates governed flows even when size is non-scalar; keep the existing names flow_lb_cstatus_{fid}, flow_ub_cstatus_{fid}, flow_fix_cstatus_{fid}.

tests/math_port/test_components.py (1)
58-99: ⚠️ Potential issue | 🟠 Major — `test_component_status_min_uptime` is still non-binding.

Constant demand plus only a waste sink forces the boiler on for all six timesteps even if `min_uptime` is broken, so this won't catch regressions in the duration tracking. Make shutdown feasible with intermittent demand and/or a backup heat source, then assert the contiguous on-block length instead of total on-hours.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/math_port/test_components.py` around lines 58 - 99, The test test_component_status_min_uptime is non-binding because constant demand forces the Boiler on; change the scenario to allow the Boiler to shut down so min_uptime is exercised: make the demand intermittent (e.g., a profile with zeros in some timesteps) and add a feasible backup heat source or sink (e.g., a second Converter or a Port that can supply Heat with higher cost) so the optimizer can choose to turn the Boiler off. Keep the existing Converter('Boiler') and Status(min_uptime=3) but replace the total_on assertion with a check that computes the longest contiguous run of result.solution['component--on'].sel(component='Boiler').values and assert that the longest on-block length is >= 3 (and that startup/shutdown variables exist and show at least one transition).
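The longest-contiguous-run assertion suggested above can be computed with a short helper (a sketch, not part of the test suite):

```python
import numpy as np

def longest_on_block(on: np.ndarray) -> int:
    """Length of the longest contiguous run of 1s in a binary status series."""
    best = run = 0
    for v in on:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

on = np.array([0, 1, 1, 1, 0, 1])
print(longest_on_block(on))  # 3
```

In the reworked test, this would be applied to `result.solution['component--on'].sel(component='Boiler').values` with an assertion like `longest_on_block(on) >= 3`.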
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fluxopt/components.py`:
- Around line 49-52: Update the Piecewise doc/validation mismatch by tightening
Piecewise.__post_init__: in addition to rejecting Sizing/Investment wrappers,
also detect and reject any flow with a scalar Flow.size (i.e., if getattr(flow,
"size", None) is not None) and raise a clear ValueError referencing the
curve-level sizing requirement; keep the existing checks for Sizing and
Investment classes. This ensures the Piecewise class enforces the docstring rule
that per-flow sizes are forbidden.
---
Duplicate comments:
In `@src/fluxopt/model.py`:
- Around line 1282-1289: The recurring piecewise per-size cost block currently
multiplies eps_p by pw_invest_size (period-independent) causing charges in
inactive periods; change it to use an active-size linearization by constructing
an active size that combines self.pw_invest_active and self.pw_invest_size (as
done in src/fluxopt/contributions.py) so costs are only applied when the unit is
active in a period, e.g. assert self.pw_invest_active is not None and compute
active_size = (self.pw_invest_active * self.pw_invest_size) (or the equivalent
linearized representation used in contributions.py) and then replace (eps_p *
self.pw_invest_size).sum('pw_converter') with (eps_p *
active_size).sum('pw_converter') so periodic_direct uses the same active-period
logic as the fixed recurring block.
- Around line 647-649: The current no-build shortcut uses has_prior to force
zero future builds (self.m.add_constraints(build.sel(flow=fid).sum('period') ==
0...)) which incorrectly forbids replacing finite-lifetime priors; change the
guard so this constraint is only added for non-expiring priors (e.g., when
prior_lifetime is infinite or prior_lifetime >= planning_horizon / flagged as
non-expiring) rather than any prior_size>0. Update the conditional around the
constraint in the build/investment blocks (the place using has_prior and the
constraint named invest_no_build_prior_{fid}) and make the same change for the
second occurrence referenced in the review (the block around lines 765-769) so
finite-lifetime priors remain replaceable once expired while keeping the
existing active == rhs + 1 overlap protection.
- Around line 889-916: The component-status gating currently only applies when
ds.size is a finite scalar and skips flows whose size is a variable/derived
expression (investment, flow--size, implicit capacity). In
_constrain_flow_rates_component_status, replace the scalar-only check with a
size expression: for each fid, if ds.size is not a finite scalar, derive
size_expr from the model's size/capacity variable for that flow (e.g. the
investment/flow-size/implicit-capacity expression available on the model such as
a flow size variable or capacity accessor used elsewhere), then use size_expr in
the constraints (fr >= size_expr * rl * on and fr <= size_expr * ru * on, or fr
== size_expr * fp * on) so component_on properly gates governed flows even when
size is non-scalar; keep the existing names flow_lb_cstatus_{fid},
flow_ub_cstatus_{fid}, flow_fix_cstatus_{fid}.
In `@tests/math_port/test_components.py`:
- Around line 58-99: The test test_component_status_min_uptime is non-binding
because constant demand forces the Boiler on; change the scenario to allow the
Boiler to shut down so min_uptime is exercised: make the demand intermittent
(e.g., a profile with zeros in some timesteps) and add a feasible backup heat
source or sink (e.g., a second Converter or a Port that can supply Heat with
higher cost) so the optimizer can choose to turn the Boiler off. Keep the
existing Converter('Boiler') and Status(min_uptime=3) but replace the total_on
assertion with a check that computes the longest contiguous run of
result.solution['component--on'].sel(component='Boiler').values and assert that
the longest on-block length is >= 3 (and that startup/shutdown variables exist
and show at least one transition).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5d4e9b2c-cbab-4bc7-aa38-755a1b0d594c
📒 Files selected for processing (7)

- src/fluxopt/components.py
- src/fluxopt/contributions.py
- src/fluxopt/model.py
- src/fluxopt/stats.py
- tests/math_port/test_components.py
- tests/math_port/test_multi_period.py
- tests/math_port/test_piecewise.py
> **Piecewise**: set ``conversion`` to a :class:`ConversionCurve`.
> Breakpoint keys must match flow ``short_id`` values. Individual flows
> must not carry ``size`` or ``status`` (the curve governs sizing/status
> at the component level).
Docstring overstates the size restriction for piecewise flows.
The new public doc says individual flows must not carry size, but __post_init__ only rejects Sizing/Investment-style wrappers and the added piecewise tests still use fixed Flow.size values. Please either tighten the validation to forbid scalar sizes too, or document that fixed per-flow bounds are still supported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/components.py` around lines 49 - 52, Update the Piecewise
doc/validation mismatch by tightening Piecewise.__post_init__: in addition to
rejecting Sizing/Investment wrappers, also detect and reject any flow with a
scalar Flow.size (i.e., if getattr(flow, "size", None) is not None) and raise a
clear ValueError referencing the curve-level sizing requirement; keep the
existing checks for Sizing and Investment classes. This ensures the Piecewise
class enforces the docstring rule that per-flow sizes are forbidden.
## Summary

- **Rename** `Effect.contribution_from` → `Effect.cross_periodic` and `Effect.contribution_from_per_hour` → `Effect.cross_temporal`
- **Add** `Effect.cross_once` for one-time investment cross-effects (e.g., embodied CO2 → carbon cost)
- **Add** `cf_once` Leontief pass in both solver (`model.py`) and attribution (`contributions.py`)
- **Update** all tests and documentation

Closes #105.

**BREAKING CHANGE**: `contribution_from` and `contribution_from_per_hour` removed. The old `contribution_from` set both periodic and temporal matrices; the new API requires explicit per-domain fields.

## Test plan

- [x] `uv run pytest tests/ -q` — 442 passed, 179 skipped, 11 xfailed
- [x] `uv run mypy src/` — no issues
- [x] `uv run ruff check .` — all checks passed
- [x] New test: `test_cross_once_prices_embodied_emissions` — verifies Investment CO2 priced into cost via `cross_once`
- [x] Existing cross-effect tests updated and passing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

## Summary by CodeRabbit

- **New Features**
  - Restructured cross-effect API with domain-specific fields: periodic, temporal, and one-time contributions are now separated for clearer effect modeling and relationships.
- **Documentation**
  - Updated comprehensive guides and mathematical notation to reflect the new cross-effect field organization and per-domain semantics.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (5)
src/fluxopt/model_data.py (1)
1596-1604: ⚠️ Potential issue | 🟠 Major — Validate `PiecewiseInvestment.prior_size` against implicit curve size.

If `prior_size < max_bp`, the model can dispatch with the full curve envelope while carrying/paying a smaller prior capacity.

🛠️ Proposed fix

```diff
 if isinstance(curve.size, PiecewiseInvestment):
     pi = curve.size
+    if pi.prior_size > 0 and not np.isclose(pi.prior_size, max_bp):
+        msg = (
+            f'PiecewiseInvestment on {conv.id!r}: prior_size must be 0 or '
+            f'equal to implicit curve size {max_bp}, got {pi.prior_size}'
+        )
+        raise ValueError(msg)
     synth_inv = Investment(
         min_size=max_bp,
         max_size=max_bp,
         mandatory=pi.mandatory,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model_data.py` around lines 1596 - 1604, When constructing the synthetic Investment for a PiecewiseInvestment (detect via isinstance(curve.size, PiecewiseInvestment)), ensure PiecewiseInvestment.prior_size is validated against max_bp to prevent an undersized prior capacity allowing full-envelope dispatch: either clamp prior_size to be at least max_bp (set synth_inv.prior_size = max(max_bp, pi.prior_size)) or raise a clear validation error if pi.prior_size < max_bp; update the synth_inv creation (the Investment instantiation where prior_size=pi.prior_size) to enforce this check so the model cannot carry/pay a smaller prior than max_bp.

src/fluxopt/contributions.py (1)
118-124: ⚠️ Potential issue | 🟠 Major — Handle missing `ref_flow` before string conversion.

Missing `ref_flow` values become truthy strings (e.g., `"nan"`), so fallback to `pair_flow` is skipped and costs can be silently un-attributed.

🛠️ Proposed fix

```diff
 def _pw_converter_first_flow(data: ModelData, conv_id: str) -> str | None:
     """Find the reference flow id for a piecewise converter."""
     pw = data.piecewise
     assert pw is not None
-    ref = pw.ref_flow.sel(pw_converter=conv_id)
-    ref_val = str(ref.values)
-    if ref_val:
-        return ref_val
+    ref_val = pw.ref_flow.sel(pw_converter=conv_id).values.item()
+    if isinstance(ref_val, str) and ref_val:
+        return ref_val
+    if ref_val is not None and not (isinstance(ref_val, float) and np.isnan(ref_val)):
+        return str(ref_val)
     # Fallback to first pair_flow if ref_flow is missing
     mask = pw.pair_converter.values == conv_id
     return str(pw.pair_flow.values[np.where(mask)[0][0]]) if mask.any() else None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/contributions.py` around lines 118 - 124, ref_flow values are being converted to strings (ref_val = str(ref.values)) which turns missing values like NaN into truthy strings and prevents falling back to pair_flow; update the logic in the pw.ref_flow.sel(pw_converter=conv_id) handling to detect missing values before stringifying (e.g., use pandas.isna / numpy.isnan on ref.values or check ref.isnull()) and only return str(ref.values) when the value is present, otherwise proceed to the existing mask-based fallback using pw.pair_converter and pw.pair_flow.

src/fluxopt/model.py (3)
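The NaN-vs-string pitfall behind this finding is easy to reproduce outside xarray: `str(float('nan'))` is the truthy string `'nan'`, while `pandas.isna` detects the missing value. A minimal sketch of the corrected selection logic (hypothetical helper, not the fluxopt function):

```python
import pandas as pd

def pick_ref(ref_val, pair_flows):
    """NaN-safe reference-flow selection with fallback to the first pair flow."""
    # pd.isna handles float NaN, None, and pandas NA; plain strings pass through.
    if not pd.isna(ref_val) and str(ref_val):
        return str(ref_val)
    return pair_flows[0] if pair_flows else None

print(pick_ref('Boiler(gas)', ['Boiler(heat)']))  # 'Boiler(gas)'
print(pick_ref(float('nan'), ['Boiler(heat)']))   # 'Boiler(heat)' — fallback fires
```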
765-769: ⚠️ Potential issue | 🟠 Major — Prior capacity still blocks rebuilds after finite lifetime.

For `has_prior` converters, build is forced to zero in all periods, so expired prior assets cannot be replaced.

🛠️ Proposed fix

```diff
-        if has_prior:
+        if has_prior and lt_int is None:
             self.m.add_constraints(
                 build.sel(pw_converter=cid).sum('period') == 0, name=f'pw_invest_no_build_prior_{cid}'
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 765 - 769, The current constraint forces build.sel(pw_converter=cid).sum('period') == 0 for any converter with has_prior, which blocks replacement after the prior asset expires; change the constraint to only forbid builds in periods while the prior still exists (e.g., periods <= the prior asset lifetime) instead of all periods. Locate the code that adds the constraint (the block referencing has_prior and build.sel(pw_converter=cid)) and replace the unconditional sum constraint with a constraint that zeroes build only for the set/slice of periods covered by the prior (use the model's prior lifetime parameter/array—e.g., prior_lifetime/prior_years or the converter lifetime variable—when selecting periods), allowing normal build variables for periods after expiration. Ensure you reference the same symbols build, cid, and has_prior when implementing the period-limited constraint.
1282-1289: ⚠️ Potential issue | 🟠 Major — Recurring piecewise per-size costs are charged in inactive periods.

`invest_effects_per_size_periodic` is multiplied by `self.pw_invest_size` (period-independent), so periods with `pw_invest_active == 0` can still be billed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 1282 - 1289, The code charges recurring piecewise per-size costs for all periods because invest_effects_per_size_periodic (eps_p) is multiplied by the size vector pw_invest_size without masking inactive periods; update the calculation in the block handling pw.invest_effects_per_size_periodic so you multiply eps_p * self.pw_invest_size by the period activity mask self.pw_invest_active (broadcasting/aligned over the time/period dimension) before summing into periodic_direct (e.g. use (eps_p * self.pw_invest_size * self.pw_invest_active).sum('pw_converter') or equivalent alignment) to ensure costs are only applied when pw_invest_active is nonzero.
399-404: ⚠️ Potential issue | 🟠 Major — Component status is not enforcing off-state for non-scalar-sized governed flows.

Governed flows with `Sizing`/`Investment` (`ds.size` is NaN) still get ungated sizing constraints and skip `component_on` gating, so a component can be "off" while those flows dispatch.

🛠️ Proposed fix

```diff
 def _constrain_flow_rates_sizing(self) -> None:
-        status_ids = self._status_flow_ids()
+        status_ids = self._status_flow_ids()
+        cstatus_ids = self._component_status_flow_ids()
         ...
-        invest_ids = [fid for fid in all_invest_ids if fid not in status_ids]
+        invest_ids = [fid for fid in all_invest_ids if fid not in status_ids and fid not in cstatus_ids]
```

```diff
 def _constrain_flow_rates_component_status(self) -> None:
-            size = float(ds.size.sel(flow=fid).values)
+            size_raw = float(ds.size.sel(flow=fid).values)
+            if np.isnan(size_raw):
+                if self.flow_size is None or fid not in set(self.flow_size.coords['flow'].values):
+                    continue
+                size_expr = self.flow_size.sel(flow=fid)
+            else:
+                size_expr = size_raw
         ...
-                self.m.add_constraints(fr >= size * rl * on, name=f'flow_lb_cstatus_{fid}')
-                self.m.add_constraints(fr <= size * ru * on, name=f'flow_ub_cstatus_{fid}')
+                self.m.add_constraints(fr >= size_expr * rl * on, name=f'flow_lb_cstatus_{fid}')
+                self.m.add_constraints(fr <= size_expr * ru * on, name=f'flow_ub_cstatus_{fid}')
         ...
-                self.m.add_constraints(fr == size * fp * on, name=f'flow_fix_cstatus_{fid}')
+                self.m.add_constraints(fr == size_expr * fp * on, name=f'flow_fix_cstatus_{fid}')
```

Also applies to: 899-916
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fluxopt/model.py` around lines 399 - 404, The current selection of invest_ids includes flows with NaN sizes (Sizing/Investment) which bypass component_on gating and get ungated sizing constraints; update the invest_ids computation (after using self.flow_size and status_ids) to filter out flows whose size is NaN/undefined so only scalar-sized flows are included. Use the existing symbols (self.flow_size, self.data.flows / ds, and _status_flow_ids) to detect and exclude NaN-sized flows (e.g., check the size value for each fid and skip if NaN) so component_on gating and sizing constraints only apply to scalar-sized governed flows.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fluxopt/model_data.py`:
- Around line 406-507: The build method of _ComponentStatusArrays currently
appends s.min_uptime/s.max_uptime and s.min_downtime/s.max_downtime without
validation; add the same duration checks as in _StatusArrays: for each item in
build (use comp_id and s), if any of min_uptime/min_downtime or
max_uptime/max_downtime is not None ensure they are non-negative and that max >=
min (raise ValueError if violated, including comp_id and which field); perform
these checks before appending values to min_ups, max_ups, min_downs, max_downs
so invalid component status durations are rejected early.
---
Duplicate comments:
In `@src/fluxopt/contributions.py`:
- Around line 118-124: ref_flow values are being converted to strings (ref_val =
str(ref.values)) which turns missing values like NaN into truthy strings and
prevents falling back to pair_flow; update the logic in the
pw.ref_flow.sel(pw_converter=conv_id) handling to detect missing values before
stringifying (e.g., use pandas.isna / numpy.isnan on ref.values or check
ref.isnull()) and only return str(ref.values) when the value is present,
otherwise proceed to the existing mask-based fallback using pw.pair_converter
and pw.pair_flow.
In `@src/fluxopt/model_data.py`:
- Around line 1596-1604: When constructing the synthetic Investment for a
PiecewiseInvestment (detect via isinstance(curve.size, PiecewiseInvestment)),
ensure PiecewiseInvestment.prior_size is validated against max_bp to prevent an
undersized prior capacity allowing full-envelope dispatch: either clamp
prior_size to be at least max_bp (set synth_inv.prior_size = max(max_bp,
pi.prior_size)) or raise a clear validation error if pi.prior_size < max_bp;
update the synth_inv creation (the Investment instantiation where
prior_size=pi.prior_size) to enforce this check so the model cannot carry/pay a
smaller prior than max_bp.
In `@src/fluxopt/model.py`:
- Around line 765-769: The current constraint forces
build.sel(pw_converter=cid).sum('period') == 0 for any converter with has_prior,
which blocks replacement after the prior asset expires; change the constraint to
only forbid builds in periods while the prior still exists (e.g., periods <= the
prior asset lifetime) instead of all periods. Locate the code that adds the
constraint (the block referencing has_prior and build.sel(pw_converter=cid)) and
replace the unconditional sum constraint with a constraint that zeroes build
only for the set/slice of periods covered by the prior (use the model's prior
lifetime parameter/array—e.g., prior_lifetime/prior_years or the converter
lifetime variable—when selecting periods), allowing normal build variables for
periods after expiration. Ensure you reference the same symbols build, cid, and
has_prior when implementing the period-limited constraint.
- Around line 1282-1289: The code charges recurring piecewise per-size costs for
all periods because invest_effects_per_size_periodic (eps_p) is multiplied by
the size vector pw_invest_size without masking inactive periods; update the
calculation in the block handling pw.invest_effects_per_size_periodic so you
multiply eps_p * self.pw_invest_size by the period activity mask
self.pw_invest_active (broadcasting/aligned over the time/period dimension)
before summing into periodic_direct (e.g. use (eps_p * self.pw_invest_size *
self.pw_invest_active).sum('pw_converter') or equivalent alignment) to ensure
costs are only applied when pw_invest_active is nonzero.
- Around line 399-404: The current selection of invest_ids includes flows with
NaN sizes (Sizing/Investment) which bypass component_on gating and get ungated
sizing constraints; update the invest_ids computation (after using
self.flow_size and status_ids) to filter out flows whose size is NaN/undefined
so only scalar-sized flows are included. Use the existing symbols
(self.flow_size, self.data.flows / ds, and _status_flow_ids) to detect and
exclude NaN-sized flows (e.g., check the size value for each fid and skip if
NaN) so component_on gating and sizing constraints only apply to scalar-sized
governed flows.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6854e43b-16b2-43de-8a2a-fa5e956ef727
📒 Files selected for processing (12)

- docs/guide/effects.md
- docs/math/effects.md
- docs/math/notation.md
- src/fluxopt/contributions.py
- src/fluxopt/elements.py
- src/fluxopt/model.py
- src/fluxopt/model_data.py
- tests/math/test_contributions.py
- tests/math/test_effects.py
- tests/math_port/test_effects.py
- tests/math_port/test_multi_period.py
- tests/test_io.py
```python
@dataclass
class _ComponentStatusArrays:
    """Component-level status arrays — parallel to _StatusArrays but for components.

    Each component (storage or converter) with a Status creates one entry.
    The ``governed_flows`` mapping records which flow ids each component governs.
    """

    min_uptime: xr.DataArray | None = None  # (cstatus_component,)
    max_uptime: xr.DataArray | None = None
    min_downtime: xr.DataArray | None = None
    max_downtime: xr.DataArray | None = None
    initial: xr.DataArray | None = None  # (cstatus_component,) — NaN = free
    effects_running: xr.DataArray | None = None  # (cstatus_component, effect, time, period?)
    effects_startup: xr.DataArray | None = None  # (cstatus_component, effect, time, period?)
    previous_uptime: xr.DataArray | None = None
    previous_downtime: xr.DataArray | None = None
    governed_flows: xr.DataArray | None = None  # (cstatus_component, cstatus_flow_idx) — str flow ids

    @classmethod
    def build(
        cls,
        items: list[tuple[str, Status, list[str]]],
        effect_ids: list[str],
        time: TimeIndex,
        dim: str = 'cstatus_component',
        period: pd.Index | None = None,
    ) -> _ComponentStatusArrays:
        """Build component status arrays.

        Args:
            items: Triples of (component_id, Status, governed_flow_ids).
            effect_ids: Known effect ids.
            time: Time index.
            dim: Dimension name.
            period: Period index.
        """
        if not items:
            return cls()

        effect_set = set(effect_ids)
        tmpl = _effect_template({'effect': effect_ids, 'time': time}, period)

        ids: list[str] = []
        min_ups: list[float] = []
        max_ups: list[float] = []
        min_downs: list[float] = []
        max_downs: list[float] = []
        initials: list[float] = []
        er_slices: list[xr.DataArray] = []
        es_slices: list[xr.DataArray] = []
        gov_lists: list[list[str]] = []

        for comp_id, s, flow_ids in items:
            ids.append(comp_id)
            min_ups.append(s.min_uptime if s.min_uptime is not None else np.nan)
            max_ups.append(s.max_uptime if s.max_uptime is not None else np.nan)
            min_downs.append(s.min_downtime if s.min_downtime is not None else np.nan)
            max_downs.append(s.max_downtime if s.max_downtime is not None else np.nan)
            initials.append(np.nan)  # component-level has no prior_rates
            gov_lists.append(flow_ids)

            er = tmpl.zeros()
            for ek, ev in s.effects_per_running_hour.items():
                if ek not in effect_set:
                    raise ValueError(f'Unknown effect {ek!r} in Status.effects_per_running_hour on {comp_id!r}')
                er.loc[ek] = as_dataarray(ev, tmpl.as_da_coords)
            er_slices.append(er)

            es = tmpl.zeros()
            for ek, ev in s.effects_per_startup.items():
                if ek not in effect_set:
                    raise ValueError(f'Unknown effect {ek!r} in Status.effects_per_startup on {comp_id!r}')
                es.loc[ek] = as_dataarray(ev, tmpl.as_da_coords)
            es_slices.append(es)

        coords = {dim: ids}
        idx = pd.Index(ids, name=dim)

        # Build governed_flows as 2D string array (component, flow_idx) — pad with ''
        max_flows = max(len(fl) for fl in gov_lists) if gov_lists else 0
        gov_data = np.full((len(ids), max_flows), '', dtype=object)
        for i, fl in enumerate(gov_lists):
            for j, fid in enumerate(fl):
                gov_data[i, j] = fid
        gov_da = xr.DataArray(
            gov_data,
            dims=[dim, 'cstatus_flow_idx'],
            coords={dim: ids, 'cstatus_flow_idx': list(range(max_flows))},
        )

        return cls(
            min_uptime=xr.DataArray(np.array(min_ups), dims=[dim], coords=coords),
            max_uptime=xr.DataArray(np.array(max_ups), dims=[dim], coords=coords),
            min_downtime=xr.DataArray(np.array(min_downs), dims=[dim], coords=coords),
            max_downtime=xr.DataArray(np.array(max_downs), dims=[dim], coords=coords),
            initial=xr.DataArray(np.array(initials), dims=[dim], coords=coords),
            effects_running=fast_concat(er_slices, idx),
            effects_startup=fast_concat(es_slices, idx),
            governed_flows=gov_da,
        )
```
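The ''-padded 2-D layout used for `governed_flows` can be reproduced in isolation. This is a standalone sketch with numpy only (the helper name `pad_governed_flows` is made up for illustration; fluxopt wraps the resulting array in an `xr.DataArray` with the `cstatus_component` and `cstatus_flow_idx` dims):

```python
import numpy as np

def pad_governed_flows(gov_lists: list[list[str]]) -> np.ndarray:
    """Pad ragged per-component flow-id lists into a dense (component, flow_idx) array."""
    max_flows = max((len(fl) for fl in gov_lists), default=0)
    gov_data = np.full((len(gov_lists), max_flows), '', dtype=object)
    for i, fl in enumerate(gov_lists):
        # Rows shorter than max_flows keep the '' padding on the right.
        gov_data[i, :len(fl)] = fl
    return gov_data

padded = pad_governed_flows([['heat_out'], ['charge', 'discharge']])
# Row 0 carries one flow id plus padding; row 1 carries both flow ids.
```

The empty-string sentinel keeps the array rectangular, which is what makes it serializable to netCDF alongside the other status arrays.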
|
_ComponentStatusArrays is missing duration validation
Unlike _StatusArrays, component status accepts negative durations and max < min pairs without checks, which can create invalid status constraints.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/fluxopt/model_data.py` around lines 406 - 507, The build method of
_ComponentStatusArrays currently appends s.min_uptime/s.max_uptime and
s.min_downtime/s.max_downtime without validation; add the same duration checks
as in _StatusArrays: for each item in build (use comp_id and s), if any of
min_uptime/min_downtime or max_uptime/max_downtime is not None ensure they are
non-negative and that max >= min (raise ValueError if violated, including
comp_id and which field); perform these checks before appending values to
min_ups, max_ups, min_downs, max_downs so invalid component status durations are
rejected early.
Summary

- `ConversionCurve` for piecewise-linear conversion using linopy's `piecewise()` API (Piecewise conversion (part-load curves) #25)
- Component-level `Status` on `Storage` and `ConversionCurve` for on/off behavior at component scope (Component-level status on Converter/Port #23)

Test plan
- `uv run pytest tests/ -q` — 429 passed, 188 skipped, 9 xfailed
- `uv run mypy src/` — no issues
- `uv run ruff check .` — all checks passed

🤖 Generated with Claude Code
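For readers skimming the PR, a rough standalone mirror of the `ConversionCurve` shape it introduces (field names taken from the PR description — breakpoints, mandatory, availability; the real dataclass in `src/fluxopt/elements.py` may differ in types and validation):

```python
from dataclasses import dataclass

@dataclass
class ConversionCurveSketch:
    """Hypothetical, simplified stand-in for fluxopt's ConversionCurve."""
    breakpoints: dict[str, list[float]]  # flow id -> value at each breakpoint
    mandatory: bool = False              # whether the unit must operate on the curve
    availability: float = 1.0

    def __post_init__(self) -> None:
        # Piecewise segments are only well-defined if every flow
        # supplies a value at every breakpoint.
        lengths = {len(v) for v in self.breakpoints.values()}
        if len(lengths) != 1:
            raise ValueError('all flows must define the same number of breakpoints')

curve = ConversionCurveSketch(
    breakpoints={'fuel': [0.0, 50.0, 100.0], 'power': [0.0, 20.0, 45.0]},
)
```

The equal-length invariant is what lets the builder stack the breakpoints into a single array with a shared breakpoint dimension for linopy's piecewise constraints.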
Summary by CodeRabbit
New Features
Enhancements
Tests
Documentation