Description
Version and Platform (required):
- Binary Ninja Version: 4.2.6304-dev (65f49c87)
- OS: macOS
- OS Version: 15.0
- CPU Architecture: M1
Bug Description:
I am finding that when analysing a dyld shared cache (DSC), memory consumption becomes excessive relatively quickly. This is the case both headless and when using the UI. Simply loading multiple images (2, maybe 3), one after the other, results in tens of GBs of RAM being consumed by Binary Ninja. I've even had macOS run out of RAM because BN was using over 50 GB.
If the database is saved, Binary Ninja is fully closed, and the database is re-opened, RAM usage returns to more expected levels (a couple of GBs, depending on how many DSC images are loaded).
Steps To Reproduce:
1. Open a copy of a DSC in Binary Ninja
2. Load an image and wait for analysis to complete (may not be required, but things are more stable this way)
3. Repeat step 2 multiple times (load at least 2 images)
4. Observe high RAM usage
5. Save the database
6. Close and re-open Binary Ninja
7. Re-open the database
8. Observe significantly lower RAM usage
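For the "observe RAM usage" steps, the resident set size can be spot-checked from a terminal instead of Activity Monitor. The process-name pattern below is an assumption and may need adjusting for your install:

```shell
# Print the resident set size of the Binary Ninja process in MB.
# The pgrep pattern is a guess; adjust it to match your install.
pid=$(pgrep -n -f "[Bb]inary ?[Nn]inja" || echo $$)  # fall back to this shell if BN isn't running
rss_kb=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "RSS: $((rss_kb / 1024)) MB"
```

Running this before and after each image load makes the growth easy to log over time.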
Binary:
Extract the DSC from any iOS 18+ IPSW. Not sure if it's an iOS 18-related issue; those are just the ones I've been testing on.
Activity
WeiN76LQh commented on Nov 30, 2024
Using this highly modified fork of the DSC plugin, memory usage by the plugin is at acceptable levels. However, there's still substantial memory usage from Binary Ninja, roughly equating to ~0.5 GB of RAM per image loaded from the DSC. Sampling the memory usage, it's almost entirely in the Binary Ninja core. This is the case when loading a Binary Ninja database of a DSC with tens of images loaded.
Essentially there is a part of Binary Ninja that does not scale well due to excessive RAM usage, which is problematic for container formats such as DSC. I can't diagnose what's causing it because I don't have access to the source or a symbolicated copy of the Binary Ninja core library. However, it's pretty easy to reproduce: use the aforementioned fork of the DSC plugin, load a number of libraries and let analysis finish, then sample memory usage using the OS's built-in tools. In my case on macOS I used malloc_history and heap, which seemed sufficient for the job.
psifertex commented on May 21, 2025
We've made some changes to the type system that should improve memory utilization. Do you mind re-trying and seeing what the results are in dev >= 7473?
WeiN76LQh commented on May 22, 2025
I'm doing some larger tests, but so far with 20 libraries loaded I'm seeing about 10% less memory usage between 5.1.7472-dev (10ef9421) and 5.1.7484-dev (83816ded).
bdash commented on May 22, 2025
You may want to wait for a newer build and test again. Some of the shared cache analysis is bypassed until 397b569 makes it into a build.
WeiN76LQh commented on May 22, 2025
I'll try again once that change makes it into a build but I'm not expecting it to make much of a difference.
@psifertex was 10% roughly the expected improvement?
fuzyll commented on May 22, 2025
That sounds pretty accurate. There's another bug we're tracking internally (Vector35/binaryninja#1041) where we need to refactor the Triage View a bit. To alleviate some additional memory pressure and get some additional performance gain, you might also try disabling the Triage View.
WeiN76LQh commented on May 23, 2025
5.1.7489-dev (24098367) with Triage View disabled saw a further 10% reduction in memory usage.
The first number is the amount of RAM (in GB) in use by Binary Ninja after auto analysis has completed. The second is RAM usage after saving the database, restarting Binary Ninja, and re-opening the database. Values are taken from Activity Monitor.
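For anyone reproducing these measurements from a headless script, peak RSS can also be read with only the Python standard library. This is a generic sketch: the workload below is a stand-in, since the actual DSC image-loading call varies between the stock plugin and the fork discussed above, and note that `ru_maxrss` units differ between macOS and Linux.

```python
import resource
import sys

def peak_rss_mb() -> float:
    """Peak resident set size of this process in MB.

    ru_maxrss is reported in bytes on macOS but kilobytes on Linux.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / (1024 * 1024) if sys.platform == "darwin" else rss / 1024.0

# Sample before and after a memory-heavy operation. The bytearray is a
# placeholder for the real workload (e.g. loading a DSC image).
before = peak_rss_mb()
workload = bytearray(50 * 1024 * 1024)  # stand-in allocation, ~50 MB
after = peak_rss_mb()
print(f"peak RSS grew by about {after - before:.0f} MB")
```

Because `ru_maxrss` is a high-water mark, it only ever grows, so per-image deltas are best taken by sampling after each load rather than comparing against a later baseline.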