
[Bug]: Excessive memory usage when campaign has lots of maps #4786

Open

kwvanderlinde opened this issue May 21, 2024 · 8 comments

@kwvanderlinde
Collaborator

Describe the Bug

I have a campaign that is ~200 MiB with just shy of 40 maps. If I switch to each of the maps in turn, I end up with a heap ~4.1 GiB in size. The number also stays consistently high: if I'm lucky it drops back down to ~3.5 GiB, but never further.

To Reproduce

  1. Open up your favourite campaign.
  2. Switch to each map in turn.
  3. Watch the memory usage steadily increase and stay high.

Expected Behaviour

Memory for zones not recently visited should be reclaimed at some point in time.

Screenshots

No response

MapTool Info

1.13.2

Desktop

Linux Mint 21.3

Additional Context

No response

@kwvanderlinde
Collaborator Author

Just so it's clear, I'm not running out of memory or anything. But I see this sort of pattern as a red flag from a user perspective, especially since we proudly show off our memory usage in the status bar.


I've taken a quick look with a memory profiler, and two big things showed up.

The first is my very own BufferedImagePool. One is created for each zone, and it holds strong references to the pooled BufferedImages. So it consumes memory proportional to the number of maps ever rendered and to their resolution, and unless certain invalidation conditions occur, none of that memory is ever reclaimed.

The second thing that shows up is ImageManager. It holds soft references to any images previously loaded. Even when we flush our hard references, we keep the soft references around as a cache. As long as memory is plentiful these will never be collected; only under actual memory pressure should the JVM reclaim them.

The BufferedImagePool really needs to be fixed since it can lead to OOM. For optics, I think the ImageManager should drop very old references, though a user could also "solve" that by adding an -Xmx limit to their configuration.
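One way to let pooled images be reclaimed is to hold them through SoftReference rather than strong references. A minimal sketch of the idea (the SoftPool name and Supplier-based API are hypothetical, not MapTool's actual BufferedImagePool interface):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch: a pool whose entries are held via SoftReference,
// so the GC is free to reclaim them under memory pressure. MapTool's
// actual BufferedImagePool API differs; this only illustrates the idea.
final class SoftPool<T> {
    private final Map<String, SoftReference<T>> entries = new HashMap<>();

    T get(String key, Supplier<T> factory) {
        SoftReference<T> ref = entries.get(key);
        T value = (ref == null) ? null : ref.get();
        if (value == null) {
            // Never pooled, or already collected: recreate and re-pool it.
            value = factory.get();
            entries.put(key, new SoftReference<>(value));
        }
        return value;
    }
}
```

The trade-off is that a collected entry has to be re-rendered on its next access, so this suits buffers that are cheap to rebuild relative to the memory they pin.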

@FullBleed

FullBleed commented May 21, 2024

Maybe unrelated... but when writing a lot of separate entries to a table, the memory explodes to very high levels and is never given back. @Azhrei is working on some table improvements that may help with that initial gobbling of memory during such actions, but the memory not being "released" afterwards seems suspect.

@kwvanderlinde
Collaborator Author

> Maybe unrelated... but when writing a lot (of separate entries) to table the memory explodes to very high levels and never gives the memory back. @Azhrei is working on some table improvements that may help with that initial gobbling of the memory during such actions, but the actual behaviour of the memory not being "released" after doing that seems suspect.

Interesting to hear that the memory is not given back, based on my quick glance at that issue. Two questions:

  1. Do your tables have images in the entries?
  2. Have you tried clicking the memory status bar? Maybe double clicking, I can't remember. That should clean up any memory we're not explicitly holding onto.

If you've done (2) and memory usage is still high, that's definitely something we should look into.

@emmebi
Collaborator

emmebi commented May 22, 2024

@cwisniew
Member

@kwvanderlinde I assume you have noticed the asset manager is holding on to assets too long in its cache?

If so, maybe we should move to using a proper caching library rather than trying to roll our own. Good options I know of are:

  - Ehcache - https://www.ehcache.org/
  - Caffeine - https://github.com/ben-manes/caffeine/wiki

Although I am certainly not wedded to either of those two if you have another option in mind.

@kwvanderlinde
Collaborator Author

> @kwvanderlinde I assume you have noticed the asset manager is holding on to assets too long in its cache?

... yes ... but it's also just kind of strange what gets loaded when, and when we decide to work from in-memory vs file. Also I measured the contribution from the asset manager originally thinking it would be the main contributor, but in my case it only accounted for ~300 MiB of memory. Still worth fixing of course.

> If so maybe we should move to using a proper caching library rather than try roll our own, good options I know of are Ehcache - ehcache.org caffeine - ben-manes/caffeine/wiki
>
> Although I am certainly not wedded to either one of those two if you have another option in mind

I don't have anything in mind as I'm not very familiar with the Java libraries landscape. Both of those look pretty good, with the key thing in my mind being their expiry support.
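For a sense of what those libraries give you out of the box: bounded, recency-based eviction, on top of which they layer time-based expiry. The core eviction behaviour can be approximated in the JDK alone with an access-ordered LinkedHashMap; this sketch is a stand-in, not the Caffeine or Ehcache API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Stdlib approximation of a size-bounded LRU cache. Libraries like
// Caffeine add time-based expiry (expireAfterAccess/expireAfterWrite),
// weigher-based sizing, statistics, and thread safety on top of this.
final class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true: get() refreshes recency
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); evicts the least recently used entry
        // once the bound is exceeded.
        return size() > maxEntries;
    }
}
```

The eviction policy is the part that matters here: old zone assets would age out on their own instead of pinning memory forever.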

@cwisniew
Member

> @kwvanderlinde I assume you have noticed the asset manager is holding on to assets too long in its cache?

> ... yes ... but it's also just kind of strange what gets loaded when, and when we decide to work from in-memory vs file. Also I measured the contribution from the asset manager originally thinking it would be the main contributor, but in my case it only accounted for ~300 MiB of memory. Still worth fixing of course.

That's ~300 MiB of references to other objects :)

@FullBleed

> Maybe unrelated... but when writing a lot (of separate entries) to table the memory explodes to very high levels and never gives the memory back. @Azhrei is working on some table improvements that may help with that initial gobbling of the memory during such actions, but the actual behaviour of the memory not being "released" after doing that seems suspect.

> Interesting to hear that the memory is not given back given my quick glance at that issue. Two questions:
>
>   1. Do your tables have images in the entries?
>   2. Have you tried clicking the memory status bar? Maybe double clicking, I can't remember. That should clean up any memory we're not explicitly holding onto.
>
> If you've done (2) and memory usage is still high, that's definitely something we should look into.

No images in the table.

And I have tried double clicking the memory status bar. It drops a bit, but holds onto most of it. I don't have the numbers right in front of me, but if the memory usage was up around 48 GB after writing 3000+ entries, it might drop down to something like 28 GB.
