
Is there a method to exclude classes by name? #247

Open
cubecull opened this issue Jan 30, 2023 · 29 comments

@cubecull

My current target uses several third-party libraries that I'm not interested in, but the tooling seems to be spending most of its time crunching that data.

I'm seeing thousands of log lines like:

reasonNOTMergeClasses_Q(0x5d6800, 0xb82b58, 'xercesc_2_6::SAXParser').
Concluding factNOTMergeClasses(0x5d6800, 0xb82b58).
reasonNOTMergeClasses_Q(0x5d6800, 0xb82a50, 'xercesc_2_6::AbstractDOMParser').
Concluding factNOTMergeClasses(0x5d6800, 0xb82a50).
reasonNOTMergeClasses_Q(0x5d6800, 0xb8290c, 'DPVS::Object').
Concluding factNOTMergeClasses(0x5d6800, 0xb8290c).
reasonNOTMergeClasses_Q(0x5d6800, 0xb82b58, 'xercesc_2_6::SAXParser').
Concluding factNOTMergeClasses(0x5d6800, 0xb82b58).
reasonNOTMergeClasses_Q(0x5d6800, 0xb82a50, 'xercesc_2_6::AbstractDOMParser').

Is there any way to exclude xerces* et al. from being analysed at all stages?

@sei-eschwartz
Collaborator

There are a few options. I think the easiest one would be to filter the symbolClass facts for xercesc_2_6 from your facts file.

The other option would be to extract a list of all the addresses that are part of that class (probably by parsing the facts file), and then pass them as arguments to --exclude when re-creating the facts file.
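The first suggestion could be sketched in Python. This is only a sketch: the symbolClass argument layout and the xercesc_2_6 marker are assumptions based on the log lines quoted above, so check them against your actual facts file before relying on it.

```python
import re

# Sketch only: assumes symbolClass facts look roughly like
#   symbolClass(0xb82b58, ..., 'xercesc_2_6::SAXParser', ...).
# The exact argument layout may differ -- check your facts file.
SYMBOL_CLASS = re.compile(r"^symbolClass\((0x[0-9a-fA-F]+),.*xercesc_2_6")

def filter_facts(lines):
    """Drop xercesc symbolClass facts; return (kept_lines, dropped_addresses)."""
    kept, addrs = [], set()
    for line in lines:
        m = SYMBOL_CLASS.match(line)
        if m:
            addrs.add(m.group(1))  # candidate address for --exclude-func
            continue
        kept.append(line)
    return kept, addrs
```

The collected addresses could then be fed back as arguments to the exclude option when re-creating the facts file, per the second suggestion.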

Just to verify: you aren't seeing the same line repeated thousands of times, are you?

@cubecull
Author

cubecull commented Jan 30, 2023

Just to verify: you aren't seeing the same line repeated thousands of times, are you?

Difficult to say; there is a lot of output, but it does seem to be moving forward:

There are 240,850 known facts.
reasoningLoop: pre-reason sanityChecks
Constraint checks succeeded, proceeding to reason forward!
reasoningLoop: reasonForardAsManyTimesAsPossible
reasonForwardAsManyTimesAsPossible
Starting reasonForward.
Processing trigger fact... findint(0x4ca340, 0x5dd8f0)
Processing trigger fact... factClassSizeGTE(0x5dd8f0, 0x20)
reasonForwardAsManyTimesAsPossible complete.
reasoningLoop: post-reason sanityChecks
Constraint checks succeeded, guess accepted!
reasoningLoop: guess
Starting guess. There are currently 11,249 guesses.

The facts and guesses counts do increase over time.

I'll see what I can learn from the facts file and what I may be able to filter in there.

I think the easiest one would be to filter the symbolClass facts for xercesc_2_6 from your facts file.

It may be worth noting that xerces and one other library seem to make up every single symbolClass fact. Is that normal?

Is there any documentation I can read on the --exclude option?

@sei-eschwartz
Collaborator

It may be worth noting that xerces and one other library seem to make up every single symbolClass fact. Is that normal?

It's not abnormal.

Is there any documentation I can read on the --exclude option?

https://github.com/cmu-sei/pharos/blob/master/share/doc/pharos_options.pod

Apparently it's --exclude-func, not --exclude

@cubecull
Author

Ah, those CLI args were what I was hunting for, thanks!

So I've tried removing all the symbolClass facts, and now I'm getting the following error. Not sure if this is because I've removed all those facts, or whether I would have hit this error anyway once an analysis with those facts in place got further along.

Fail-Retracting guessedNOTConstructor(0x5e6b00)...
Fail-Retracting factNOTConstructor(0x5e6b00)...
tryBinarySearch completely failed on [0x5e6b00] and will now backtrack to fix an upstream problem.
guess: We have back-tracked to the call of tryBinarySearch(tryConstructor, tryNOTConstructor, [0x5e6b00, 0x483f70, 0x403b30])
Refusing to backtrack into reasoningLoop to fix an upstream problem because backtrackForUpstream/0 is not set.
This likely indicates that there is a problem with the OO rules.
Please report this failure to the Pharos developers!
 [228] prolog_stack:get_prolog_backtrace(100,[frame(228,clause(<clause>(0x55df5d041480),6),_3385522)|_3385510],[goal_term_depth(100)]) at /usr/local/lib/swipl/library/prolog_stack.pl:137
 [227] throw_with_backtrace(error(system_error(upstreamProblem))) at /usr/local/share/pharos/prolog/oorules/util.pl:185
  [26] solve_internal at /usr/local/share/pharos/prolog/oorules/setup.pl:681
  [25] catch(user:solve_internal,_3385746,user:((_3385814=error(resource_error(private_table_space),_3385828)->complain_table_space(ooscript);_3385878=error(resource_error(stack),_3385892)->complain_stack_size(ooscript);true),throw(_3385924))) at /usr/local/lib/swipl/boot/init.pl:562
  [24] solve(ooscript) at /usr/local/share/pharos/prolog/oorules/setup.pl:617
  [23] psolve_no_halt('<garbage_collected>') at /usr/local/share/pharos/prolog/oorules/report.pl:23
  [22] catch(user:psolve_no_halt(stream(<stream>(0x55df5d06d870))),_3386098,user:(print_message(error,_3386164),(globalHalt->halt(1);true))) at /usr/local/lib/swipl/boot/init.pl:562
  [21] catch_with_backtrace('<garbage_collected>','<garbage_collected>','<garbage_collected>') at /usr/local/lib/swipl/boot/init.pl:629
  [20] run_with_backtrace('<garbage_collected>') at /usr/local/bin/ooprolog:177
  [19] <meta call>
  [18] with_output_to(<stream>(0x55df5d1d91f0),run_with_backtrace(psolve_no_halt(stream(<stream>(0x55df5d06d870))))) <foreign>
  [17] setup_call_catcher_cleanup(user:(var('ltrclnt-results-nosemantics-rose.pl')->open_null_stream(<stream>(0x55df5d1d91f0));open('ltrclnt-results-nosemantics-rose.pl',write,<stream>(0x55df5d1d91f0))),user:with_output_to(<stream>(0x55df5d1d91f0),run_with_backtrace(psolve_no_halt(stream(<stream>(0x55df5d06d870))))),_3386516,user:close(<stream>(0x55df5d1d91f0))) at /usr/local/lib/swipl/boot/init.pl:663
  [15] setup_call_catcher_cleanup(user:open('ltrclnt-facts-nosemantics-rose.pl',read,<stream>(0x55df5d06d870)),user:setup_call_cleanup((var('ltrclnt-results-nosemantics-rose.pl')->open_null_stream(<stream>(0x55df5d1d91f0));open('ltrclnt-results-nosemantics-rose.pl',write,<stream>(0x55df5d1d91f0))),with_output_to(<stream>(0x55df5d1d91f0),run_with_backtrace(psolve_no_halt(stream(<stream>(0x55df5d06d870))))),close(<stream>(0x55df5d1d91f0))),_3386726,user:close(<stream>(0x55df5d06d870))) at /usr/local/lib/swipl/boot/init.pl:663
  [12] run([script('/usr/local/bin/ooprolog'),json(_3387004),ground(_3387024),rtti(true),guess(true),config(_3387084),stacklimit(200000000000),tablespace(200000000000),oorulespath(_3387144),halt(true),load_only(false),help(_3387204),facts('ltrclnt-facts-nosemantics-rose.pl'),results('ltrclnt-results-nosemantics-rose.pl'),loglevel(6)]) at /usr/local/bin/ooprolog:235
   [9] catch(user:main(['/usr/local/bin/ooprolog','--facts','ltrclnt-facts-nosemantics-rose.pl','--results','ltrclnt-results-nosemantics-rose.pl','--log-level=6']),_3387328,user:(print_message(error,_3387458),halt(1))) at /usr/local/lib/swipl/boot/init.pl:562
   [7] catch(user:main,_3387532,'$toplevel':true) at /usr/local/lib/swipl/boot/init.pl:562
   [6] catch_with_backtrace('<garbage_collected>','<garbage_collected>','<garbage_collected>') at /usr/local/lib/swipl/boot/init.pl:

@sei-eschwartz
Collaborator

It's hard to say. If you want to post your facts file and/or executable, I can take a closer look.

@cubecull
Author

Would be good to try and track down why the crash occurs. The files are chonkers (which probably doesn't help): the binary is 11 MB, the serialized file is 530 MB, and the facts file is 25 MB. Is there an email address I can send them to? Appreciate your time.

@sei-eschwartz
Collaborator

Wow, that is quite large. If you can send the exe and facts file to [email protected], that would be great.

@cubecull
Author

Files sent, thanks

@sei-eschwartz
Collaborator

I have downloaded the files, thanks. Can you also provide the command line you used to produce the facts file?

@cubecull
Author

cubecull commented Jan 31, 2023

I've lost the original args due to restarting the container, but I was following the guide for analysing larger files. I think this is what I came up with:

ooanalyzer --serialize=ltrclnt-nosemantics-rose.ser --maximum-memory 64000 --no-semantics --prolog-facts=ltrclnt-facts-nosemantics-rose.pl --threads=30 --per-function-timeout=10000000000000000 ltrclnt.exe

I can't remember the exact value for --per-function-timeout; I think I just spammed zeros.

The serialised file was made with something along these lines:

partition --serialize=ltrclnt.ser --maximum-memory=64000 --no-semantics --partitioner=rose ltrclnt.exe

Then, as we discussed, I went through and manually removed every symbolClass fact entry from the facts file.

@sei-eschwartz
Collaborator

That should be close enough.

@sei-eschwartz
Collaborator

Using the facts file that you provided, I end up with:

Guessing factConstructor(0x5e6b00).
reasonNOTConstructor_G(0x5e6b00, 0xb82b8c).
Contradictory information about constructor: factConstructor(0xb82b8c) but reasonNOTConstructor(0xb82b8c)
Guessing factNOTConstructor(0x5e6b00).
reasonNOTConstructor_F(0xb82b8c, 0x5e6b00).
Contradictory information about constructor: factConstructor(0xb82b8c) but reasonNOTConstructor(0xb82b8c)

.idata:00B82B8C ; protected: __thiscall xercesc_2_6::InputSource::InputSource(class xercesc_2_6::MemoryManager * const)
.idata:00B82B8C extrn ??0InputSource@xercesc_2_6@@iae@QAVMemoryManager@1@@z:dword

So 0xb82b8c is obviously a constructor.

From reasonNOTConstructor_G:

    % Since we don't have visibility into VFTable writes from imported constructors and
    % destructors this rule does not apply to imported methods.
    not(symbolClass(Method, _, _, _)),

Oops. So apparently my suggestion to remove the symbolClass facts was not so good.

@sei-eschwartz
Collaborator

Another thing you could try is adding fail, at this line to disable the NOTMergeClasses_Q rule. You would want to use the original facts file, with the symbolClass facts included.

@cubecull
Author

cubecull commented Feb 2, 2023

I'm going to try the --exclude-func method you mentioned, as having all the Xerces stuff in the facts file seems to massively slow down the analysis; even if I add fail, to the rule, the analysis time (and memory usage) will still be gigantic.

Sounds like the crash I discovered is not a real case and just came from modifying the facts file, and you've given me the requested advice for how to ignore/exclude things, so I'm happy for you to close this issue :) Thanks for your help!

Edit:

Hmm. So I recreated the facts file with many --exclude-func options passed, but I'm still seeing symbolClass definitions (at addresses I excluded) in the new facts file. And when I move on to the ooprolog stage (I'm no longer passing --exclude-func there; should I be?), I can see references in the logs to addresses that were excluded in the earlier stage:

reasonNOTMergeClasses_Q(0x442ef0, 0xb82aa0, 'xercesc_2_6::HandlerBase').
Concluding factNOTMergeClasses(0x442ef0, 0xb82aa0).
reasonNOTMergeClasses_Q(0x442ef0, 0xb82ba8, 'xercesc_2_6::InputSource').
Concluding factNOTMergeClasses(0x442ef0, 0xb82ba8).

@sei-ccohen
Contributor

So this is probably an unintentional "feature" of how --exclude-func is implemented. It excludes functions from the analysis pass that inspects the instructions in the function to determine the properties of the function. It sadly does NOT exclude the function from other parts of the overall system. For example, we should perhaps be able to exclude import descriptors, symbols, and other aspects of the function using the same command line option, but we don't do that currently.

The reason it works the way it does is mostly just historical. The option was added to work around functions that took unreasonably long to analyze, or had other problems in the analysis phase that looks at the instructions. I'd have to look more carefully at the code, but probably only about two-thirds of the facts actually come from that analysis phase.
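Given that the exclusion only affects the instruction-analysis pass, one possible stopgap is to post-filter the generated facts file, dropping any fact that mentions an excluded address. This is purely a sketch, and risky: as the crash earlier in this thread showed, removing facts wholesale can violate assumptions the OO rules make.

```python
import re

ADDR = re.compile(r"0x[0-9a-fA-F]+")

def strip_excluded(lines, excluded_addrs):
    """Drop any fact line that mentions an address in excluded_addrs.

    Purely illustrative: removing facts can break OO-rule assumptions
    (see the upstreamProblem backtrace earlier), so use with caution.
    """
    excluded = {a.lower() for a in excluded_addrs}
    return [
        line for line in lines
        if not any(a.lower() in excluded for a in ADDR.findall(line))
    ]
```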

@cubecull
Author

cubecull commented Feb 2, 2023

OK so my best current option is a gigantic VM and patience? All good, thanks for your time, feel free to close the issue!

@sei-eschwartz
Collaborator

Patience is probably good. I am also trying to run the program on our machine, and it's chugging along pretty well. For a program this large, unfortunately, as long as it is visibly making progress (printing things every few seconds), I would say things are going well.

@sei-eschwartz
Collaborator

I will leave the issue open for now. If you can't successfully run the file, I consider that an issue (and I might need to do some profiling...)

@sei-ccohen
Contributor

Sorry I haven't been following the entire conversation well enough to say for certain. sei-eschwartz may still have other suggestions. I saw that he said your sample was quite large. There is a known problem with factNOTMergeClasses, in the sense that it takes N-squared facts to represent the non-merge facts for N unique classes. We've tried addressing this in a variety of ways, but there's no free lunch (e.g. computing each fact as you need it is slow instead of large). Apparently, you have a very large number of classes, and so a large amount of RAM, many CPUs and lots of patience may be the only choice at this point.
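For a sense of scale, the N-squared behavior described above is easy to quantify: the number of distinct class pairs that non-merge facts must cover grows quadratically. This is only a back-of-envelope illustration, not the exact internal representation:

```python
def not_merge_pairs(n_classes: int) -> int:
    """Distinct unordered class pairs: n * (n - 1) / 2, i.e. O(n^2)."""
    return n_classes * (n_classes - 1) // 2

# A few thousand classes already means millions of potential pair facts.
for n in (100, 1_000, 10_000):
    print(f"{n:>6} classes -> {not_merge_pairs(n):>11,} pairs")
```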

I just saw that eschwartz responded more quickly than I could.

@cubecull
Author

cubecull commented Feb 2, 2023

No problem, I'll leave it running. The only thing I did note was that the logging output (and therefore possibly the analysis?) seemed to be slowing down over time.

[screenshot of the analysis log output]

At this stage the output was scrolling by quite fast, but by the time I got to about 11 hrs of CPU time and 196 GB used, I could read the log lines as they went by, and that's when I gave up.

I'll try again on a different VM with a higher single-core clock speed.

@sei-eschwartz
Collaborator

So, I actually just had to kill the prolog run on our machine because it ran out of memory (~220 GiB!). I consider this an issue that we should look into. Unfortunately, I'm going to be traveling a lot and I'm not sure when I'll be able to look at it in more depth.

@sei-eschwartz
Collaborator

No problem, I'll leave it running. The only thing I did note was that the logging output (and therefore possibly the analysis?) seemed to be slowing down over time.

At this stage the output was scrolling by quite fast, but by the time I got to about 11 hrs of CPU time and 196 GB used, I could read the log lines as they went by, and that's when I gave up.

Yeah, this slowing down is an expected problem, unfortunately. As we make more conclusions, everything gets slower.

But 256 GiB is probably not enough RAM, since that is what our machine has. You could try a machine with more memory if you can find one, but it's clear there is a problem that is causing Prolog to use way too much memory.

@cubecull
Author

cubecull commented Feb 2, 2023

Was considering this one:

[screenshot of a large-memory instance type]

Just a shame the Prolog step isn't multithreaded!

Don't know if it's even possible, and I'm sure it wouldn't be quick to implement, but if there were some way to checkpoint the Prolog step as it goes along, I could run this on AWS Spot instances for a lot cheaper!

@sei-eschwartz
Collaborator

That's pretty expensive. If it were me I'd wait until I can look more at the problem. There's no guarantee that 384 would even be enough...

Checkpointing would be something to suggest to the SWI Prolog maintainer...

@h5kk

h5kk commented Jan 6, 2024

That's pretty expensive. If it were me I'd wait until I can look more at the problem. There's no guarantee that 384 would even be enough...

Checkpointing would be something to suggest to the SWI Prolog maintainer...

Hey @sei-eschwartz, did you ever do some more research on this? Or @cubecull, were you able to get it to work? I think that I am also running out of memory and it's crashing, or the log file could have just been too large (1.5 GB when it crashed). I'm analyzing a .dll that is 7.6 MB with a PDB that is 37 MB.

@sei-eschwartz
Collaborator

This fell off my radar, unfortunately.

@h5kk If you have a PDB, what do you need OOAnalyzer for?

@h5kk

h5kk commented Jan 6, 2024

This fell off my radar, unfortunately.

@h5kk If you have a PDB, what do you need OOAnalyzer for?

I guess I misunderstood what OOAnalyzer was for and thought it could provide value beyond the PDB. I am still a bit new to RE. Thank you for the response.

@sei-eschwartz
Collaborator

If you have a PDB, you are in great shape. Import it into Ghidra / IDA, and don't worry about OOAnalyzer. OOAnalyzer is useful for the more common case when you have an executable without a PDB.

@h5kk

h5kk commented Jan 6, 2024

If you have a PDB, you are in great shape. Import it into Ghidra / IDA, and don't worry about OOAnalyzer. OOAnalyzer is useful for the more common case when you have an executable without a PDB.

Thank you very much.
PS: I sent you an email about something somewhat related. Hope that's okay.
