Improved History recording efficiency - possibly no more exponential run-time increase #275
Comments
Hi Tasnif, thanks a lot for this contribution. Please add it to the PR, it looks great! Just to make sure, is this transparent for the user (it does not change the I/O API)? G
It only changes the History class properties, the `History.record` method, and the `EulerSolver.solve` method. I've used the `to_archive` and `from_archive` methods to save and load datasets and it works fine. Nothing else was changed.
Great :) Please include that in the PR then!
Perfect. Will add over the weekend :)
I've encountered an issue with this implementation. For some reason, when using …
The issue has been fixed. It was the …
I have a fork of the up-to-date build implementing these changes, if there is still interest. I can create a pull request if you would like to see the changes and suggest improvements. Best,
Hi @sniffo, yes, do open a PR please
Hi,
Some people might be interested, so I am posting here.
I noticed that increasing the length of a simulation run with the `EulerSolver` caused the total run time to grow much faster than linearly. (I don't have test data, as I did not save the run times, but I noticed the trend while tracking them; I will gather some when I have time.) After some profiling and after improving the efficiency of a few geometry and gradient methods, the behavior persisted, but I then found that the `History.record` method was to blame.

The method calls `pd.concat` at every time step to append the new datasets onto the increasingly large dataframes in `History.datasets`. This makes recording slower and slower over the course of the simulation, because every call copies the entire accumulated dataframe in memory, so the total cost grows quadratically with the number of steps. A small illustration of the difference between this pattern and a single final concatenation is sketched below.
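As a hypothetical micro-benchmark (not tyssue code; the function names and data sizes are invented), the two strategies can be compared like this:

```python
import time

import numpy as np
import pandas as pd


def grow_with_concat(n_steps, frame):
    """Append `frame` to the accumulated result with pd.concat at every step."""
    acc = frame.copy()
    for _ in range(n_steps):
        # Each call copies the entire accumulator, so total work is O(n^2).
        acc = pd.concat([acc, frame], ignore_index=True)
    return acc


def collect_then_concat(n_steps, frame):
    """Store each step's frame in a dict and concatenate once at the end."""
    store = {t: frame for t in range(n_steps)}
    # A single concat over all frames does the copying once: O(n) overall.
    return pd.concat(store.values(), ignore_index=True)


frame = pd.DataFrame(np.random.random((100, 4)), columns=list("xyzw"))
for fn in (grow_with_concat, collect_then_concat):
    tic = time.perf_counter()
    fn(2000, frame)
    print(f"{fn.__name__}: {time.perf_counter() - tic:.2f} s")
```

The first version's cost grows with the square of `n_steps`, while the second's grows roughly linearly, so the gap widens as simulations get longer.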
I've changed it so that `History.datasets` is now a dictionary of dictionaries. `History.datasets` now contains one dictionary per element (`vert`, `edge`, `face`, and `cell` if necessary), and each element dictionary holds all the recorded datasets as key-value pairs, with the keys being the time points and the values being the datasets at those time points.
I then introduced a `History` instance method, `History.update_datasets()`, which is run within the `EulerSolver.solve()` method at the final time point and concatenates all of the datasets in those dictionaries, restoring `History.datasets` to the form it had in the previous implementation. This significantly reduced the recording time, and the run time no longer blows up with the number of time steps. I have not done extensive testing yet, but all other `History` methods should still work as intended; a rough sketch of the whole scheme follows below.
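To tie the two pieces together, here is a minimal runnable sketch of the scheme, assuming a stripped-down `History` class: the `datasets`, `record`, and `update_datasets` names follow the description above, but the constructor and method signatures are illustrative, not the actual tyssue API.

```python
import pandas as pd


class History:
    """Minimal sketch of the recording scheme.

    Only the pieces discussed here (datasets, record, update_datasets)
    are modeled; the real tyssue class carries more state.
    """

    def __init__(self, elements=("vert", "edge", "face")):
        # One inner dict per element type; keys are time points,
        # values are the datasets recorded at those time points.
        self.datasets = {elem: {} for elem in elements}

    def record(self, element_datasets, time_point):
        """Store this time point's datasets; no concatenation happens here."""
        for elem, df in element_datasets.items():
            self.datasets[elem][time_point] = df.copy()

    def update_datasets(self):
        """Concatenate everything once, at the final time point of solve().

        Restores the flat per-element dataframes (tagged with a `time`
        column) that History.datasets held before this change.
        """
        for elem, frames in self.datasets.items():
            if not frames:
                continue
            self.datasets[elem] = pd.concat(
                (df.assign(time=t) for t, df in frames.items()),
                ignore_index=True,
            )


# How a solver loop would use it (the record/update split is the point):
history = History()
snapshot = pd.DataFrame({"x": [0.0, 1.0], "y": [0.0, 1.0]})
for t in (0.0, 0.1, 0.2):
    history.record({"vert": snapshot}, t)  # cheap: just a dict insert
history.update_datasets()  # one-off concatenation at the end
print(history.datasets["vert"])
```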
@glyg please let me know if you would like me to add these changes to my PR, and any specific tests you'd like to see. I will update this issue with run-time tests for different numbers of time steps.

Thanks,
Tasnif