
Holy RAM, batman! (Out of memory errors and excessively high memory usage) #3347

Open
bclothier opened this issue Sep 1, 2017 · 65 comments
Labels
critical Marks a bug as a must-fix, showstopper issue technical-debt This makes development harder or is leftover from a PullRequest. Needs to be addressed at some point.
Milestone

Comments

@bclothier
Contributor

After observing random "System resources exceeded" or "Out of memory" errors in my user code that didn't quite make sense, I ran some quick fixes. After applying the changes, I went to start the application and immediately got "System resources exceeded" despite having NOT run any VBA code.

Suspicious, I restarted Access, then opened it again. The working set sat at 8 MB.
Opening the VBE rockets the working set to 175 MB.
Running a parse takes it up to 300 MB!
Opening the inspections window and refreshing it climbs the working set to 475 MB!

I might be going out on a limb, but it sure looks to me like something's hemorrhaging memory.

@retailcoder retailcoder added critical Marks a bug as a must-fix, showstopper issue quality-control technical-debt This makes development harder or is leftover from a PullRequest. Needs to be addressed at some point. labels Sep 2, 2017
@retailcoder
Member

Smells like a memory leak indeed. However, note that much of the performance comes from caching; the trade-off is between memory consumption and performance. I'm not saying there's no room to consume less memory for the same performance, but RD isn't, and never was, intended to be lightweight at all.

@bclothier
Contributor Author

@retailcoder I'm totally down with RD never ever being lightweight. I wouldn't expect it, really.

However, I think it's worth pointing out that I can load 2,282,482 records in an Access datasheet and use only 25-30 MB of working set. I can also do other operations, such as grouping, much faster. You can try it yourself with the FactSale table from this package: http://powerpivotsdr.codeplex.com/releases/view/46355 Note: technically, that's a lie - Access merely makes it look like it loaded that many records; it actually maintains a scrolling window of ~100 records, filling in the rest in the background and partially in response to user actions. It only needs to load the 2 million keys, then traverse the chain to read the whole data.

I know this is an apples-to-oranges comparison, since I'm comparing a C++ application against a C# application, but I'll also point out that Access is fundamentally disk-based, which is an order of magnitude slower than RAM, and we still see good performance nonetheless.

I am not well-qualified to offer criticism, but it does not appear to me that we are using a database; we're primarily using in-memory collections. That makes me think many of the operations we're doing will always be expensive, because there's no index to reduce the search space. RD might benefit more from using SQLite or something similar than from a caching service that lives entirely in memory, given the large number of objects RD needs to track and that comparatively only a small subset gets invalidated after any given event. Having tables and indexes would let you quickly prune and re-populate those subsets, likely even though you'd be saving to a much slower subsystem.

Just a thought.

@retailcoder
Member

it isn't a database and has no index to reduce the search space.

Most of our lookups are hash matches in dictionary keys, an essentially O(1) operation that completes pretty much instantaneously regardless of how many thousands of keys we're looking at... And it's all in RAM indeed. An empty project with standard library references is working with ~50-60K declarations, and we cache a number of aggregates too. Each declaration is an object that contains information about its name, type, accessibility, references in user code, etc.; this metadata is fundamental for Rubberduck to understand the code.
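The hash-lookup point can be illustrated with a minimal sketch (Python standing in for the C# dictionaries; all names here are invented for illustration, not Rubberduck's actual code):

```python
# Sketch: declarations cached in a dict keyed by identifier, so a lookup
# costs O(1) on average regardless of how many thousands are cached.
declarations = {}  # identifier -> metadata about the declaration

def add_declaration(name, decl_type, accessibility):
    declarations[name] = {
        "type": decl_type,
        "accessibility": accessibility,
        "references": [],
    }

def find_declaration(name):
    # Hash match: one bucket probe, independent of collection size.
    return declarations.get(name)

add_declaration("MsgBox", "Function", "Public")
add_declaration("Err", "Object", "Public")
assert find_declaration("MsgBox")["type"] == "Function"
```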

To be honest it never even occurred to me that we could use a database and literally query the code. That's a very, very interesting idea...

@bclothier
Contributor Author

In that case, if you do use a database, it should ideally support hash indexes so as to keep lookups O(1). I can't recall offhand whether SQLite has hash indexes. I wrongly(?) assumed you were also doing querying, because in the inspection results window we group results and display only the first 100. That's an example of a query where a B-tree index can cut down the search space, because you don't want to loop over every single member of the collection; you only want a scrolling window of data.

From the reading I did, whenever you invalidate a module, you have to delete all references from that module and then rebuild them. That's another example where a B-tree index would win out over a hash index: just delete all references with module = "x", done in one range scan over the index. Besides, it's much easier to optimize a database than code (at least that's the case for me).
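Both ideas can be sketched with SQLite in Python (the schema and names below are hypothetical, not Rubberduck's actual model): a B-tree index on `module` makes the invalidation delete a range operation, and `LIMIT` gives the scrolling window:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE refs (module TEXT, line INTEGER, identifier TEXT)")
# B-tree index: equality and range scans on `module` touch only the
# matching subset instead of every row.
db.execute("CREATE INDEX ix_refs_module ON refs (module)")

db.executemany(
    "INSERT INTO refs VALUES (?, ?, ?)",
    [("Module1", 10, "foo"), ("Module1", 12, "bar"), ("Module2", 3, "foo")],
)

# Invalidate one module: delete only its references, seeking via the index.
db.execute("DELETE FROM refs WHERE module = ?", ("Module1",))

# "Scrolling window" of results, as in the inspection results grid:
page = db.execute(
    "SELECT module, line, identifier FROM refs ORDER BY module, line LIMIT 100"
).fetchall()
assert page == [("Module2", 3, "foo")]
```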

@retailcoder
Member

Oh, I was merely talking about the DeclarationFinder, which is where most queries begin; inspections use it to get all user declarations, or all ByRef parameters, or line labels, or variables, whatever the inspections inspect. Using a database backend would be a complete game changer... I'm giving this some serious thought.

@Vogel612
Member

Vogel612 commented Sep 2, 2017

FWIW @bclothier, SQLite does not maintain a hash index. It does maintain some indexes, but IMO it's not suited to the mass of frequently updated information Rubberduck needs. SQLite is simply not intended for the load of more or less completely refreshing a whole database every time RD reparses.

@bclothier
Contributor Author

Just to give an apples-to-pears comparison... I fired up Visual Studio 2015 and loaded the Rubberduck solution (what else?).

Initial loading took about ~245 MB of working set.

Building it bumps it up to ~280 MB.

Running code analysis on it adds some more up to ~315-320 MB.

This is keeping in mind that Visual Studio is much more complicated and likely has to handle far more references and declarations to support code analysis (which might not be as sophisticated as ReSharper's). That said, it's safe to say it doesn't need more than a 100 MB increase just to parse or run inspections.

Whether Visual Studio is doing this all in-memory or not, I don't know.

That said, I question whether you really need to "completely refresh the whole database whenever RD reparses." Given that I can only type so fast, and can only change one module at a time (which remains true even if I'm using the VBIDE to automate my changes), you're always going to be invalidating a subset. Even if I clicked Parse, it would still be more logical to parse only what changed unless I explicitly force a full parse (akin to the Rebuild command in Visual Studio). So tossing the whole database should, IMO, be rare and explicit.

And if you're always going to work in subsets, you aren't going to get good performance from in-memory collections that only support hash indexing; you need a range seek so you can enumerate just that subset and be done.
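The distinction can be sketched with Python's `bisect` over sorted keys (a stand-in for a B-tree; all names are illustrative):

```python
import bisect

# A hash map answers "give me key X" in O(1), but enumerating a subset
# ("all keys for module M") needs a sorted structure you can range-seek.
keys = sorted([
    ("ModuleA", "foo"), ("ModuleA", "bar"),
    ("ModuleB", "baz"), ("ModuleC", "qux"),
])

def module_subset(module):
    # Binary search to the first key of the module, then take the run:
    # O(log n + k) instead of scanning the whole collection.
    lo = bisect.bisect_left(keys, (module,))
    # "\x00" appended makes an exclusive upper bound just past `module`.
    hi = bisect.bisect_left(keys, (module + "\x00",))
    return keys[lo:hi]

assert module_subset("ModuleA") == [("ModuleA", "bar"), ("ModuleA", "foo")]
```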

Maybe the answer is an in-memory collection that supports B-tree indices. Maybe it's really just a boring old memory leak. I don't know. But based on what I've observed, I really don't think we need that much RAM.

@retailcoder
Member

Visual Studio is also at version 15 or so (we work with v6.0, essentially), with Roslyn being the single most efficient compiler/analyzer out there, written by an army of brains. We are not Microsoft or JetBrains, and we're not hundreds of contributors; we're a handful, doing this part-time when possible. It is a thoroughly unfair comparison. VS and Roslyn are integrated; they can parse a single modified line of code. Rubberduck does what it can with the shitty VBIDE API and parses with ANTLR, and because it's not integrated, the smallest granular unit we can parse is an entire module; anything less and we can kiss token positions good-bye. So yes, a subset, but nowhere near as granular as a single modified line of code.

That said, I don't think we would have to ditch and recreate the whole database every time. A DB could be a very interesting solution for project metadata, per-project and per-user settings, and for storing the thousands of COM declarations, which definitely make up the bulk of the RAM we're eating up.

@bclothier
Contributor Author

@retailcoder I apologize if I've overstepped. You're absolutely right that VS has far more resources behind it than RD does, so what the RD people have achieved is nothing short of phenomenal. I clearly didn't think about ANTLR vs. Roslyn. I did want to get a sense of what is possible, hence the comparison. That said, I'll stop putting on my big britches now.

@retailcoder
Member

@bclothier I didn't mean to come off rude or even annoyed - at the end of the day, RD is eating up a ton of RAM, and likely leaking some; memory leaks are a critical issue, and embedding an actual database isn't a completely crazy idea at all. I think I'd go with SQL Server Express, though. We've got to think about how to deploy RD to the website too (the online parser on the inspections page is performing terribly at the moment).

Thanks for your ideas! 😄

@retailcoder
Member

(screenshot attached)

@Vogel612
Member

#3405 could be related? We should reevaluate this after a fix for that is merged

@Vogel612 Vogel612 added this to the 2.1.x Cycle milestone Nov 3, 2017
@bclothier
Contributor Author

Just to follow up with a newer version (admittedly a dev build, 2.1.6542.12529):

Access loaded (34 MB)
VBE loaded (253 MB)
After parsing (450 MB)
Loading Code Inspector (460 MB)
After a few fixes (Ignore Once) (523 MB) + keeps increasing with each subsequent fix

I think there's a clear memory leak with the Fix. By the time I got to "Loading Code Inspector", the CI toolwindow was much more responsive than when I originally reported the issue, but it does get slower and slower each time I fix an inspection result.

@bclothier
Contributor Author

An addendum from the chat, where Mat asked whether it's the quickfix or the reparse: I noticed that the same project had receded from its original peak of 641 MB to about 500 MB. Each parse adds 20 MB, and doing them in rapid succession can keep adding more, but after long enough they do get released, which is likely the result of the GC's delayed cleanup.

@bclothier
Contributor Author

bclothier commented Nov 30, 2017

As per a suggestion in the chat, here's the test again with code explorer totally unwired from the startup process...

Access loaded (35 MB)
VBE loaded (230 MB)
After parsing (445 MB)

This seems to suggest it's not the UI, but the inspections themselves or the parsing that's expensive.

Addendum: as a comparison, loading the same project with Rubberduck disabled (other add-ins still enabled), the memory climbs from 35 MB to ~115 MB, suggesting that pre-parse, RD adds ~100 MB just to load.

@daFreeMan
Contributor

While this is banging around in the background, a thought occurred to me. If a change is made to use a DB in the background, many things could be stored in it permanently. There's no need to reparse and rebuild all the objects in EXCEL.EXE until/unless you change versions of Office. Many of the DLLs, OLBs, TLBs, etc. that are referenced in a project don't change very often (if ever).

A metadata tag could be kept indicating which version is referenced in a project, and if that version is already in the DB, it never needs to be parsed. I would presume that should speed up parse times considerably. It might increase RD's ship size to include a pre-populated DB, but that, IMHO, is a minor concern; alternatively, RD could build up the library of pre-parsed references itself and never touch them again (until a version flag is updated).
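The version-tag idea above can be sketched as a cache keyed by library and version (all names here are hypothetical, for illustration only):

```python
# Cache parsed library metadata keyed by (library, version);
# reparse only on a cache miss.
parsed_libraries = {}  # (name, version) -> parsed declarations

def parse_type_library(name, version):
    # Stand-in for the expensive COM type-library parse.
    return {"name": name, "version": version, "declarations": ["..."]}

def get_library(name, version):
    key = (name, version)
    if key not in parsed_libraries:
        # Only parse when this exact version hasn't been seen before.
        parsed_libraries[key] = parse_type_library(name, version)
    return parsed_libraries[key]

a = get_library("EXCEL.EXE", "16.0")
b = get_library("EXCEL.EXE", "16.0")   # cache hit: no second parse
assert a is b
c = get_library("EXCEL.EXE", "17.0")   # version bump: reparse
assert c is not a
```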

@bclothier
Contributor Author

Keep in mind that while we certainly could store the metadata, this means trading performance for footprint. As already discussed in the first few posts of this issue, reading data off a hard drive (usually the slowest subsystem of a computer) is glacial compared to reading the same data from RAM. At the very least, we would be able to use it to load the data quickly without going through the motions of parsing the built-in library references. However, the data will still end up in memory if we want good performance.

For the discussion to be meaningful, we need a concrete description of the changes required to make this better.

@FuSoftware

FuSoftware commented Mar 23, 2018

Since I love idiotic suggestions (but ones that could solve problems), couldn't we use some sort of in-memory DB like SQLite? I don't know how it would perform against a regular hash table, but it could provide a means to better organize the data and prevent leakage, and you don't have the speed limit of a regular HDD.

In case of a rebuild, just rebuild the DB. I don't know whether, for this project, the DB would end up using less RAM, though.

https://www.sqlite.org/inmemorydb.html

Edit: never mind; in the time between opening the page and writing this post, bclothier wrote his.

@FuSoftware

@retailcoder Oh, I somehow missed it! Once I'm back at a proper computer, I'll gladly join the project; it seems quite interesting, and pretty active so far!

@bclothier
Contributor Author

Ha! I love the suggestion. I did suggest a database before, but I'm not sure I explicitly suggested an in-memory database.

To get the best use of it, we would need to index the properties we use most in the inspections and elsewhere. But there's the other thing: if you read earlier, @retailcoder points out that most of our lookups are hash lookups and therefore O(1) operations. Hard to beat that. And by introducing a database, we pay a bit more in index maintenance. This might be a very good thing for the non-user-defined declarations, since they won't change, but it's not so helpful for user-defined objects.

To make it even more concrete, we should look closely at the Declaration and IdentifierReference objects; those are what we create in the tens of thousands, for both user-defined and built-in declarations. If we can find an access pattern that would substantially benefit from a B-tree index, then great! But we need to be able to describe it exactly before we can put it into practice.

@msdiniz

msdiniz commented Sep 20, 2020

Hi @msdiniz,
if I didn't miss any relevant info, you aren't suffering from the "out of memory errors and excessively high memory usage" this issue is about? Then please create a new issue and add log files.

Some tips for speedup from the blog:

  • Disable “run inspections automatically on successful parse”, so that they only run if you explicitly refresh them from the Inspection Results toolwindow;
  • Set inspection severity to “Do not Show” for inspections that could produce thousands upon thousands of results, like “use meaningful names” if you’re into Hungarian Notation for example, or “use of bang operator” if that’s the only way you’re ever accessing recordset fields in Access;

Other general performance tips:

  • Rubberduck parses per-module, so when you leave a module after modifying it, trigger a parse; by the time you're in the other module, have scrolled to where you want to be, and are in that mindset, the modified module will have been processed.
  • Reduce coupling: the more modules are inter-dependent, the more modifying a module requires re-resolving identifier references in the dependent modules.
  • Avoid complex grammar: bang operators, among other code constructs, are somewhat ambiguously defined and ultimately parse in two passes, with the first one failing. The standard member call syntax parses faster, in a single parser pass.

Out of curiosity, what do you need that amount of code for in Word and Outlook?

Hi, @Imh0t3b
I thought it was implicit, sorry. I could not run RD before because of out-of-memory errors and excessively high memory usage; when it didn't crash by itself, I had to kill it.
So I uninstalled all previous versions - I think 6 or 7 different versions over almost 2 years now.
And you are right, I forgot to mention: the first thing I did after loading the VBE was disable "run inspections automatically on successful parse"; I didn't dare trigger automatic parsing. But I'm still using inspection severities that produce thousands of results... I'm still trying to decide which ones I'll use, and of course that has an impact on the times described above.
At least I use almost no bang operators, if any.

As for reducing coupling, see below:

To be curious, what for do you need that amount of code for Word, Outlook?

In fact, I'm a physician :) and amateur programmer. More than a decade ago, I managed to create a patient-manager app for myself and a few others using Outlook Contacts, with a lot of Contact.UserProperties as a "DB" (in the .pst) for an electronic patient record (not a legal one, of course), producing all the needed reports (prescriptions, referrals, etc.), stamping my medical stamp and signature, producing automatic text, diagnoses, bla-bla, and helping me a lot. It was almost all procedural code (!) but it functioned very well. As always, new requirements from the client (me!) continuously pushed the envelope, and I started to split my God UserForms and God modules into smaller and smaller classes, respecting SRP more and more, but multiplying their numbers.
Of course, to do that I had to become a better programmer, with weekly visits to the RD blog and almost daily visits to Stack Overflow.

And I said good news because I suppose I'm able to see RD functioning now thanks to some fine-tuning you RD guys did, since my developer machine setup hasn't changed at all in the last 2 years.

@Imh0t3b
Contributor

Imh0t3b commented Sep 20, 2020

@msdiniz,

I thought that it was implicit

Don't you read the blog? Always be explicit (not only on references)! ;)

Any reason you don't switch to Office x64? That solves all "Out Of Memory" issues. Ever tried MS Access (only as a frontend)? It's the easiest way to create forms bound to data.

I'm able to see RB functioning now because of some fine-tuning that you RB guys did,

Yes they are great!! One cannot thank them enough!! Thank you @retailcoder , thank you @bclothier, thank you @Vogel612, thank you @MDoerner and thank you to all others that "Made Rubberduck" great (not again ;) )

If I had to pay for the knowledge I gained from you, my boss would be broke!

But since they don't want our money, we can pay them back with our time, by contributing solutions to the easier issues! Hacktoberfest is coming soon!

@msdiniz

msdiniz commented Sep 20, 2020

@Imh0t3b,

Don't you read the blog? Always be explicit (not only on references)! ;)

Yeah, sorry. I should have applied what I've learned the hard way about being explicit, even on references!

Any reason you don't switch to Office x64? That solves all "Out Of Memory" issues. Ever tried MS Access (only as a frontend)? It's the easiest way to create forms bound to data.

Well, I'm from a resource-limited country... more than 50% of my machines are still x86 because of hardware and/or license limitations...
And about Access: I'm no great designer... so I'm using Outlook forms and form regions (with some C# automation behind them; many thanks, Sue Mosher and Eric Lippert!) all the time... In the beginning I tried Access as a BE; after I seriously sucked as an FE designer, Outlook UserProperties could supply all the fields I wanted, and I gave up on Access.
Bottom line: I'm currently stuck on the .pst "DB". The way out I envisioned is to migrate all fields to XML, currently under way; but for that I have to capture all the Model's logic and refactor the entire old code base... which is why the RD app and blog (many thanks, Matt's Mug!) are so important to me.

Yes they are great!! One cannot thank them enough!! Thank you @retailcoder , thank you @bclothier, thank you @Vogel612, thank you @MDoerner and thank you to all others that "Made Rubberduck" great (not again ;) )

Thanks to you all for the hard and fruitful (duckful?) work ;)

@AstrocalcVB

With reference to my closed issue above, it's a bit disappointing to realize that my main project is by all means too big for RD, with VB6 apparently running into the 32-bit 2 GB limit. Nevertheless, I'll try to contribute some observations, which may or may not be of value.

When I load VB6 with my project and RD, Windows Task Manager shows a memory footprint of 86.5 MB; after clicking the Parse button, it steadily climbs to about 1250 MB before RD throws the out-of-memory exception. That may not be so useful, as it simply means we hit the ceiling before the task is done.

Other observations:

Opening the Code Explorer doesn't seem to consume much memory, but... if it's docked at the right edge of VB6 and I change its size by dragging the bottom border up and down, it seems to add 1-3 MB of memory each time, although there seems to be an upper limit of about 10 MB added. Leaving the IDE idle for some time seems to release only about half of that memory. Closing the CE doesn't change that.

Without clicking the Parse button, just opening RD's Settings adds about 15 MB of memory consumption, none of which is given back when closing Settings. Opening it again adds approximately another 15 MB, and another 15 MB the next time, etc. Very little, if any, of this is given back after leaving VB6 idle for some time.

I don't know if these are signs of memory leaks or not, but I thought I'd leave my observations anyway.

@bclothier
Contributor Author

Regarding the memory being added and eventually released: that's normal, given how .NET objects are garbage-collected. The ducky may already be done with an object, but it won't actually leave memory until the .NET runtime decides to run its garbage collection in the background.
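The delayed-release behavior can be demonstrated with any tracing collector; here is a sketch using CPython's cycle collector as a stand-in for the .NET GC:

```python
import gc
import weakref

gc.disable()  # keep the cycle collector from running on its own

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a      # reference cycle: refcounts never hit zero
probe = weakref.ref(a)

del a, b                     # unreachable now, but still resident...
assert probe() is not None   # ...until the collector actually runs
gc.collect()
assert probe() is None       # memory returns only after collection
gc.enable()
```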

I am assuming VB6 isn't "large address aware", in which case a typical 32-bit program is only allowed around 2 GB of user address space, even though more memory may be available. Running as an add-in also brings problems, because we run in the host's memory space, over which we may have no control. There is another discussion (#5176) about moving that work out of the host's memory space, but that requires a lot of work to make happen.

@AstrocalcVB

Surprisingly, I have had success! I unloaded 2 other add-ins I had running but wasn't really using, like the VB6 Add-In toolbar (not sure what it does, really), and to my surprise RD was able to complete both parsing and resolving references, as well as running inspections. However, Win10's Task Manager shows a memory footprint of 1275.8 MB for the 'Visual Basic (32 bit)' process; the Resource Monitor shows for VB6.EXE, in KB:
Commit: 1,359,020
Working Set: 1,419,104
Shareable: 112,632
Private: 1,306,472

Now, if I understand this correctly, is "Shareable" what VB6 has left to work with? Hopefully there will be enough resources left to actually do something with RD as well. Unfortunately, I cannot load CS2013 at the same time (I already miss its tabbed UI in VB6), but for now I'll have to alternate depending on the work to be done.

Anyway, this is a "brontosaurus" project, both in size and age, so there are probably optimizations to be made, and maybe RD can help with that. Happy I finally made it, anyway.

@AstrocalcVB

OK, even greater success, thanks to @Imh0t3b's 22 Aug tip above on "patching msaccess.exe for LARGEADDRESSAWARE (maybe vb6.exe too)!" I did just that: I patched vb6.exe as described in that link. I can now load both CodeSmart 2013 and Rubberduck, click RD's Parse button, and VB6 happily eats 1500+ MB while RD gets the job done. Mission accomplished!

@hecon5

hecon5 commented Jan 24, 2022

To add to this discussion, and maybe help others wondering why they're getting memory issues even though they're clearly using 64-bit Office:
I use 64-bit Access (Office), and was getting consistent "out of memory" errors with my file sitting around 1 GB, give or take. Originally thinking that 64-bit is LAA by definition, I didn't try the fix.

But on a whim, I ran the LAA tool on VBE7.DLL, VBEUI.DLL, MSACCESS.EXE, and the RD DLLs (specifically RubberDuck.DLL)

I may run it on the rest of the RD DLLs (possibly the parser one, since that's used constantly), but just the one made a difference.

Some interesting tidbits:

  • The tool reported that all files were already LAA, and no action was taken.
  • I do NOT have Administrator access on this machine. NONE.

HOWEVER

MAGICALLY (seemingly) the memory issues poofed out of existence.

The memory use jumped to ~2.4+GB (with RD loaded; without is closer to ~100MB), but stability and speed increased dramatically, and I haven't had a memory error in 2 days. I'm going to keep monitoring this but the bottom line is that this appears to have done the trick.

@bclothier
Contributor Author

That is totally unexpected. However, I'd also suggest reverting the files to prove that the memory errors reappear, i.e. that this isn't a fluke that just happened to coincide with running the LAA tool.

@hecon5

hecon5 commented Jan 24, 2022

To confirm, you want me to revert the files (even though the tool didn't do anything)? Revert to not LAA, or just reinstall RD?

@bclothier
Contributor Author

Either would work, yes. If it really was the LAA tool (even though it reported doing nothing), then reverting should bring back the memory errors, and re-running it should make them go away again, which would be stronger proof that something's afoot with the LAA thingee.

@hecon5

hecon5 commented Jan 24, 2022

Alright, I'll give this a go and let you know; one moment, digs around in computer for files

@hecon5

hecon5 commented Jan 24, 2022

Alright, I did it. I was not able to change the flag; I got a write error, in fact.

  1. I unloaded and ran the tool to clear the value.
  2. Restarted Access. RD ran slower and parsing took a while; memory in use stayed below the 1 GB mark. I was not able to induce the memory error, but I did notice a performance degradation.
  3. Closed Access, ran the tool to set LAA flag.
  4. Indicated no change (again).
  5. Started Access, RD parses faster.

I cannot explain it. But, it clearly did something. RD info:
Version 2.5.2.5994
OS: Microsoft Windows NT 10.0.19042.0, x64
Host Product: Microsoft Office x64
Host Version: 16.0.14729.20260
Host Executable: MSACCESS.EXE

Immediate Window output

Note: "rddllfile" is a string constant I set to C:\Users\USERNAME\AppData\Local\Rubberduck\Rubberduck.dll to make it easier to use immediate window.

SetLaaFlag rddllfile,DisplayLaaStatusOnly
LAA is enabled.

SetLaaFlag rddllfile,TurnOffLaa
LAA is enabled.
Switching OFF LAA
(Failed error write to file)
SetLaaFlag rddllfile,TurnOnLaa
LAA is enabled.
Doing nothing

@bclothier
Contributor Author

Mind linking to the LAA tool you're using?

@hecon5

hecon5 commented Jan 24, 2022

Sure thing! I ran this from Excel (because you can't run it from the same application whose files you're trying to set), if that helps any.

Direct link to DL: modLargeAddressAware.zip
Page source: The /LARGEADDRESSAWARE (LAA) flag demystified

@bclothier
Contributor Author

Thanks. Reading the source code, it makes even less sense, because the tool really does nothing beyond reading the LAA flag from the file. I had surmised that maybe it was reporting "doing nothing" while actually doing something; that doesn't seem to be the case, so I can't explain why just running the LAA tool has this effect. If it were simply the tool reading the flag, then clearing the value (and failing) should not have made it run slower again.
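For reference, the check such a tool performs boils down to reading one bit of the PE file header: the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in the Characteristics field of the COFF file header. A minimal sketch, verified here against a synthetic header rather than an actual DLL:

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def is_laa(pe_bytes):
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature;
    # Characteristics sits 22 bytes past it (4-byte signature, then
    # Machine, NumberOfSections, TimeDateStamp, PointerToSymbolTable,
    # NumberOfSymbols, SizeOfOptionalHeader).
    pe_offset = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    assert pe_bytes[pe_offset:pe_offset + 4] == b"PE\x00\x00"
    characteristics = struct.unpack_from("<H", pe_bytes, pe_offset + 22)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

# Synthetic header matching the dumpbin output later in this thread
# (Characteristics = 0x2022: executable | LAA | DLL):
header = bytearray(0x80)
struct.pack_into("<I", header, 0x3C, 0x40)          # e_lfanew -> 0x40
header[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", header, 0x40 + 22, 0x2022)   # Characteristics
assert is_laa(bytes(header))
```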

@hecon5

hecon5 commented Jan 24, 2022

I agree, I'm also flummoxed. But in my anecdotal test with a sample size of 1, it seemed to work.

I'll keep an eye on memory use for a bit and see if anything changes; but other than my machine being placebo-satisfied, I've got nothing.

@Tragen

Tragen commented Jan 24, 2022

Try editbin from Visual Studio; I use it to set the flags on my software.
And compare the files before and after editing.

@A9G-Data-Droid
Contributor

I used dumpbin.exe to check the LAA flag on all DLL files in the RD directory, then listed only those without the flag to get a shorter list.

Here is what I get:

EasyHook32.dll is NOT LAA aware
EasyLoad32.dll is NOT LAA aware
ICSharpCode.AvalonEdit.dll is NOT LAA aware
Microsoft.Expression.Interactions.dll is NOT LAA aware
office.dll is NOT LAA aware
stdole.dll is NOT LAA aware
System.Windows.Interactivity.dll is NOT LAA aware

NOTE: All the main RD DLL files appear to have the LAA flag set already.

Here is the full header for the main DLL in question:

Dump of file Rubberduck.dll

PE signature found

File Type: DLL

FILE HEADER VALUES
             14C machine (x86)
               3 number of sections
        8A85333F time date stamp
               0 file pointer to symbol table
               0 number of symbols
              E0 size of optional header
            2022 characteristics
                   Executable
                   Application can handle large (>2GB) addresses
                   DLL

OPTIONAL HEADER VALUES
             10B magic # (PE32)
           48.00 linker version
           17200 size of code
             600 size of initialized data
               0 size of uninitialized data
           190A2 entry point (100190A2)
            2000 base of code
           1A000 base of data
        10000000 image base (10000000 to 1001DFFF)
            2000 section alignment
             200 file alignment
            4.00 operating system version
            0.00 image version
            6.00 subsystem version
               0 Win32 version
           1E000 size of image
             200 size of headers
               0 checksum
               3 subsystem (Windows CUI)
            8560 DLL characteristics
                   High Entropy Virtual Addresses
                   Dynamic base
                   NX compatible
                   No structured exception handler
                   Terminal Server Aware
          100000 size of stack reserve
            1000 size of stack commit
          100000 size of heap reserve
            1000 size of heap commit
               0 loader flags
              10 number of directories
               0 [       0] RVA [size] of Export Directory
           19050 [      4F] RVA [size] of Import Directory
           1A000 [     3B8] RVA [size] of Resource Directory
               0 [       0] RVA [size] of Exception Directory
               0 [       0] RVA [size] of Certificates Directory
           1C000 [       C] RVA [size] of Base Relocation Directory
           19034 [      1C] RVA [size] of Debug Directory
               0 [       0] RVA [size] of Architecture Directory
               0 [       0] RVA [size] of Global Pointer Directory
               0 [       0] RVA [size] of Thread Storage Directory
               0 [       0] RVA [size] of Load Configuration Directory
               0 [       0] RVA [size] of Bound Import Directory
            2000 [       8] RVA [size] of Import Address Table Directory
               0 [       0] RVA [size] of Delay Import Directory
            2008 [      48] RVA [size] of COM Descriptor Directory
               0 [       0] RVA [size] of Reserved Directory


SECTION HEADER #1
   .text name
   170A8 virtual size
    2000 virtual address (10002000 to 100190A7)
   17200 size of raw data
     200 file pointer to raw data (00000200 to 000173FF)
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
60000020 flags
         Code
         Execute Read

  Debug Directories

        Time Type        Size      RVA  Pointer
    -------- ------- -------- -------- --------
    00000000 repro          0 00000000        0

SECTION HEADER #2
   .rsrc name
     3B8 virtual size
   1A000 virtual address (1001A000 to 1001A3B7)
     400 size of raw data
   17400 file pointer to raw data (00017400 to 000177FF)
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
40000040 flags
         Initialized Data
         Read Only

SECTION HEADER #3
  .reloc name
       C virtual size
   1C000 virtual address (1001C000 to 1001C00B)
     200 size of raw data
   17800 file pointer to raw data (00017800 to 000179FF)
       0 file pointer to relocation table
       0 file pointer to line numbers
       0 number of relocations
       0 number of line numbers
42000040 flags
         Initialized Data
         Discardable
         Read Only

  Summary

        2000 .reloc
        2000 .rsrc
       18000 .text

@bclothier
Contributor Author

As an FYI - I did the same thing, and it did not make any difference in performance.

@hecon5

hecon5 commented Jan 24, 2022

I mean, I expect nearly no one else will have the same experience. I personally figured it would be another road to nowhere. But fluke or not... I swear it did do something. I haven't had an out-of-memory error since, and I was getting them left and right.

@A9G-Data-Droid
Contributor

I think the host has to be LAA-aware. All Office apps are, except Access, which is supposed to get LAA this September. If you have patched your MSACCESS.EXE, then it would load all extensions as LAA, ready or not.

@hecon5

hecon5 commented Jan 24, 2022

That's what I thought, except that 64-bit MSACCESS.EXE IS already LAA, and I was getting out-of-memory errors at around 1 GB. It's... flummoxing, to be sure.
