Possible avenues for speeding up route optimisation #280

Open
davidwilby opened this issue May 9, 2024 · 0 comments

Following a discussion with @Ulvetanna & @hjabbot about various potential ways to speed up route optimisation, I'm making some notes here to capture our conversation.

Initially, I've tried profiling a simple call to optimise_routes using cProfile (and visualising with snakeviz), which helps to highlight the slowest parts of a run.
(btw I learned all about these from this nice, concise profiling and optimisation course: https://rse.shef.ac.uk/pando-python/)
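
For reference, a minimal sketch of that profiling workflow; the `optimise_routes` function below is just a placeholder standing in for the real entry point:

```python
# Minimal profiling workflow: wrap the call of interest in cProfile and
# inspect the output with snakeviz. optimise_routes here is a toy placeholder
# for the real entry point.
import cProfile
import pstats


def optimise_routes():
    """Placeholder for the real route-optimisation call."""
    total = 0.0
    for i in range(1, 100_000):
        total += i ** 0.5
    return total


cProfile.run("optimise_routes()", filename="optimise_routes.prof")

# Text summary of the 20 most expensive calls by cumulative time
pstats.Stats("optimise_routes.prof").sort_stats("cumulative").print_stats(20)

# Interactive view (run from a shell):
#   snakeviz optimise_routes.prof
```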

What are the slow bits?

As expected, compute_smoothed_routes and compute_routes are the slowest operations, with evaluating _F, _dist_around_globe and the calls to Newton optimisation being the biggest chunks of the smoothing.

[snakeviz visualisation of the profiling results described above]

Optimisation strategies

Using optimised alternatives

For some of the relatively standard equations/algorithms, such as Newton-Raphson optimisation and Dijkstra, there are numerous alternatives to a custom implementation (provided they give the same results). One example is SciPy's Newton optimisation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html), which I haven't tested but am led to believe should be faster than a custom implementation. There may well be suitable optimised Dijkstra solvers callable from Python as well.
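
As a rough sketch of what swapping in SciPy's Newton solver looks like; the function `f` below is a toy stand-in for _F, not the real smoothing equation:

```python
# Sketch of replacing a hand-rolled Newton-Raphson loop with SciPy's
# implementation. f and fprime are toy stand-ins for the real function
# _F and its derivative.
from scipy.optimize import newton


def f(x):
    return x**3 - 2 * x - 5


def fprime(x):
    return 3 * x**2 - 2


# Supplying the analytic derivative gives Newton-Raphson; omitting it
# makes SciPy fall back to the secant method.
root = newton(f, x0=2.0, fprime=fprime, tol=1e-8, maxiter=50)
print(root)  # ~2.0945514815
```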

Parallelisation

There are likely some places in the code where parallelisation may be a suitable approach, particularly for embarrassingly parallel problems where, for example, no iteration of a for-loop depends on any of the others.
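
As an illustration, a per-route loop could be farmed out with something like joblib; the function and variable names here are hypothetical, and whether the real loop is truly independent would need checking:

```python
# Sketch of parallelising an embarrassingly parallel loop with joblib,
# assuming each route can be computed independently of the others.
# optimise_single_route and route_requests are made-up names.
from joblib import Parallel, delayed


def optimise_single_route(request):
    """Stand-in for per-route work that shares no state with other routes."""
    start, end = request
    return {"start": start, "end": end, "cost": abs(end - start)}


route_requests = [(0, 10), (3, 7), (2, 15)]

# n_jobs=-1 uses all available cores; each call runs in its own worker process.
results = Parallel(n_jobs=-1)(
    delayed(optimise_single_route)(req) for req in route_requests
)
print(results)
```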

Parallelisation of Dijkstra's algorithm is apparently also possible, though implementing it may be more complicated and efforts may be better placed elsewhere. There may be a nice drop-in replacement available, but I haven't found one yet.
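
One optimised (though not parallel) solver that is already callable from Python is SciPy's compiled Dijkstra over a sparse adjacency matrix. Whether our mesh maps cleanly onto that representation is untested, so treat this as a sketch:

```python
# Sketch of SciPy's compiled Dijkstra on a small sparse graph.
# Whether the environmental mesh converts cleanly to an adjacency
# matrix like this is an open question.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Edge weights between 4 nodes (0 = no direct edge)
graph = csr_matrix(np.array([
    [0, 1, 4, 0],
    [0, 0, 2, 6],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]))

# Shortest distances from node 0, plus predecessors to reconstruct paths
dist, pred = dijkstra(graph, directed=True, indices=0, return_predecessors=True)
print(dist)  # [0. 1. 3. 6.]
print(pred)  # [-9999, 0, 1, 2]
```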

Just in time compiling

Some relatively simple functions take up a reasonable chunk of compute time purely because of the number of times they are called, e.g. _F (called about 0.5 million times for each of the lat and long cases in my simple test). These are suitable for just in time (JIT) compilation using packages like Numba, which can be done with relatively minor changes to the code. In my initial tests I wasn't able to get JIT working, but pre-compiling showed some promising results, roughly halving the time taken by compute_smoothed_routes.
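
To show the scale of change involved, here's a sketch of JIT-compiling a small numerical helper with Numba; the haversine function below is a toy example, not the actual _F or _dist_around_globe:

```python
# Sketch of JIT-compiling a small, frequently-called numerical function
# with Numba. great_circle_distance is illustrative only.
import numpy as np
from numba import njit


@njit(cache=True)
def great_circle_distance(lon1, lat1, lon2, lat2):
    """Haversine distance in kilometres between two points given in degrees."""
    r = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))


# The first call pays the compilation cost; subsequent calls run as machine code.
print(great_circle_distance(-68.0, -67.5, -60.0, -62.0))
```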

Using faster data structures/methods

Some ways of achieving the same thing in Python are faster than others; for example, when constructing a list, repeatedly calling .append() is slower than using a list comprehension because of how memory is allocated. See: https://rse.shef.ac.uk/pando-python/optimisation-data-structures-algorithms.html
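
A quick way to check this sort of thing is timeit (exact numbers will vary by machine):

```python
# Compare building a list with .append() versus a list comprehension.
import timeit


def with_append(n=100_000):
    out = []
    for i in range(n):
        out.append(i * i)
    return out


def with_comprehension(n=100_000):
    return [i * i for i in range(n)]


print("append:       ", timeit.timeit(with_append, number=100))
print("comprehension:", timeit.timeit(with_comprehension, number=100))
```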

Pre-calculation

Instead of speeding up the code itself, we could also pre-calculate anything that lends itself to it, for instance the most likely routes.

Safety

Before following any of these avenues, a robust set of regression tests is a good strategy to ensure that changes to the code don't modify the results.
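
Something along these lines, with purely hypothetical file names, keys and tolerances, could pin the optimised output against a stored reference:

```python
# Sketch of a regression test comparing newly generated routes against a
# stored reference output. File layout, keys and tolerance are hypothetical.
import json

import pytest


def load_routes(path):
    """Stand-in for however routes are serialised in practice."""
    with open(path) as f:
        return json.load(f)


@pytest.mark.parametrize("case", ["example_case"])
def test_routes_match_reference(case):
    reference = load_routes(f"tests/reference/{case}.json")
    candidate = load_routes(f"tests/output/{case}.json")

    assert len(candidate["routes"]) == len(reference["routes"])
    for new, old in zip(candidate["routes"], reference["routes"]):
        assert new["traveltime"] == pytest.approx(old["traveltime"], rel=1e-6)
```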
