
Reduce memory usage on large N-1 analysis #922

Open
nicow-elia opened this issue Dec 17, 2024 · 1 comment

Comments

@nicow-elia

Describe the current behavior

Running a large N-1 analysis takes a comparatively large amount of memory. I used the IEEE pegase9241 dataset as an example to test the speed and memory usage of a large N-1 analysis, and I found that while pandapower can perform the 9241-bus analysis with less than 5 GB of memory, powsybl filled the RAM (300 GB) and crashed with an OutOfMemoryError.

Code to replicate for pypowsybl:

import pypowsybl

# load the IEEE 9241-bus PEGASE case
net = pypowsybl.network.load("case9241pegase.mat")

# monitor every branch and add one N-1 contingency per branch
analysis = pypowsybl.security.create_analysis()
analysis.add_monitored_elements(branch_ids=net.get_branches().index)
analysis.add_single_element_contingencies(elements_ids=net.get_branches().index)
res = analysis.run_dc(net)

Exception raised by powsybl:

Traceback (most recent call last):
  File "test.py", line 9, in <module>
    res = analysis.run_dc(net)
  File "/home/ro2585/.cache/pypoetry/virtualenvs/tennet-dc-solver-N9c8NLm1-py3.10/lib/python3.10/site-packages/pypowsybl/security/impl/security.py", line 78, in run_dc
    _pypowsybl.run_security_analysis(self._handle, network._handle, p, provider, True,
pypowsybl._pypowsybl.PyPowsyblError: java.lang.OutOfMemoryError: Garbage-collected heap size exceeded. Consider increasing the maximum Java heap size, for example with '-Xmx'

To make it possible to run with powsybl at all, I have to split the outages into smaller lists and concatenate my results at the end, which becomes a delicate balance of how small to make each list so that the resulting computation just fits into memory. I don't see why this computation should need this much memory: with some 14k monitored elements and 14k outaged elements, that is about 200M values to store, and even storing p1, q1, i1, p2, q2, i2 per pair (which are pretty redundant in DC) comes to roughly 200M × 6 × 8 bytes ≈ 10 GB.
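
For reference, my chunked workaround looks roughly like the sketch below. The chunk size of 1000 and the use of pandas.concat over the per-chunk branch_results dataframes are my own choices for illustration, not anything prescribed by pypowsybl:

import pandas
import pypowsybl

net = pypowsybl.network.load("case9241pegase.mat")
branch_ids = list(net.get_branches().index)

CHUNK_SIZE = 1000  # tuning this is the delicate balance mentioned above
chunks = []
for start in range(0, len(branch_ids), CHUNK_SIZE):
    analysis = pypowsybl.security.create_analysis()
    analysis.add_monitored_elements(branch_ids=branch_ids)
    analysis.add_single_element_contingencies(elements_ids=branch_ids[start:start + CHUNK_SIZE])
    # only CHUNK_SIZE contingencies are solved (and held in memory) per run
    chunks.append(analysis.run_dc(net).branch_results)

res = pandas.concat(chunks)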

Describe the expected behavior

When I run it with pandapower, it just works and even stays below 1 GB of total memory usage:

import pandapower
import pandapower.networks
from tqdm import tqdm

net = pandapower.networks.case9241pegase()

res = []
# take each line out of service in turn, solve the DC power flow,
# store the branch flows, and restore the line afterwards
for line_idx in tqdm(net.line.index):
    was_in_service = net.line.loc[line_idx, "in_service"]
    net.line.loc[line_idx, "in_service"] = False
    try:
        pandapower.rundcpp(net)
        res.append({
            "line": net.res_line.p_from_mw,
            "trafo": net.res_trafo.p_hv_mw
        })
    finally:
        net.line.loc[line_idx, "in_service"] = was_in_service

# same N-1 sweep for the transformers
for trafo_idx in tqdm(net.trafo.index):
    was_in_service = net.trafo.loc[trafo_idx, "in_service"]
    net.trafo.loc[trafo_idx, "in_service"] = False
    try:
        pandapower.rundcpp(net)
        res.append({
            "line": net.res_line.p_from_mw,
            "trafo": net.res_trafo.p_hv_mw
        })
    finally:
        net.trafo.loc[trafo_idx, "in_service"] = was_in_service

Describe the motivation

No response

Extra Information

test.zip

@geofjamg
Member

This is because the DC security analysis solves all contingencies at once against a single factorisation, so the memory consumed is proportional to the number of contingencies. We are aware of this issue, and we have an evolution in our backlog to internally split the computation into chunks so that it can run with a fixed amount of memory.
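
For readers unfamiliar with the trade-off: the point of a single factorisation is that the DC matrix is decomposed once and then solved against one right-hand side per contingency, and doing all solves at once keeps the full dense solution block in memory. A minimal scipy sketch of the chunked idea, with purely illustrative sizes and not powsybl's actual internals:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# toy stand-in for the DC susceptance matrix (sizes are illustrative)
n, k = 2000, 5000                       # buses, contingency right-hand sides
B = sp.random(n, n, density=0.002, format="csc") + n * sp.eye(n, format="csc")
lu = splu(B)                            # factorise once, reuse for every solve
rhs = np.random.rand(n, k)              # one column per contingency

# all at once: the dense n x k solution lives in memory in one piece
# x_all = lu.solve(rhs)

# chunked: only `chunk` solution columns are alive at any time,
# so peak memory stays fixed regardless of the number of contingencies
chunk = 500
for start in range(0, k, chunk):
    x = lu.solve(rhs[:, start:start + chunk])
    # ... extract the monitored flows for this chunk, then drop x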
