Verbose performance metrics #578

Open · wants to merge 5 commits into main
Conversation

@SylvainJoube (Contributor) commented May 7, 2024

I've added some performance metrics for the following algorithms: finding, fitting, and ambiguity resolution. I think it's an easy way to get a simple evaluation of the algorithms' performance without having to open ROOT. The code compares the reconstructed tracks with the truth particle data and prints basic metrics (valid/duplicate/fake tracks).
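The diff itself isn't reproduced here, so the PR's exact matching criteria aren't shown. As a rough illustration of the kind of classification described above, here is a minimal C++ sketch of one common scheme: a track with no dominant truth particle among its measurements is a fake, the first track matched to a given particle is valid, and any further track matched to the same particle is a duplicate. All names (`track_info`, `classify`, `min_purity`) are hypothetical and not taken from the PR:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical sketch (names are not from the PR): classify reconstructed
// tracks against truth. Assumes each track already knows which truth
// particle contributed the majority of its measurements (id 0 = none).
struct track_info {
    std::uint64_t majority_particle_id;  // 0 if no particle dominates
    float purity;  // fraction of the track's measurements from that particle
};

struct metrics {
    std::size_t valid = 0, duplicates = 0, fakes = 0;
};

metrics classify(const std::vector<track_info>& tracks,
                 float min_purity = 0.5f) {
    metrics m;
    std::unordered_set<std::uint64_t> matched;  // particles already claimed
    for (const track_info& t : tracks) {
        if (t.majority_particle_id == 0 || t.purity < min_purity) {
            ++m.fakes;       // no truth particle dominates: fake track
        } else if (matched.insert(t.majority_particle_id).second) {
            ++m.valid;       // first track for this particle: valid
        } else {
            ++m.duplicates;  // particle already has a valid track
        }
    }
    return m;
}
```

The PR may well use a different purity cut or measurement-level matching; the sketch only shows the valid/duplicate/fake bookkeeping, in which each truth particle is credited with at most one valid track.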

My code still needs a bit of refactoring (I will clean it up by the end of the day). I will also write documentation if you agree to merge this PR and think it was a good idea. Here is an example of the output I got:

==> Statistics ... 
- read    7830 spacepoints
- read    7830 measurements
- created (cpu)  4794 seeds
- created (cpu)  5296 found tracks
- created (cpu)  5296 fitted tracks
- created (cpu)  1780 ambiguity free tracks

Performance metrics:

===== Performance metrics for finding =====
          Valid: 1786 (34%)
     Duplicates: 2170 (41%)
          Fakes: 1340 (25%)

===== Performance metrics for fitting =====
          Valid: 1786 (34%)
     Duplicates: 2170 (41%)
          Fakes: 1340 (25%)

===== Ambiguity resolution performance metrics =====
--Among the selected tracks:
  Valid quality: 0.000561798 (should be as low as possible)
          Valid: 1766 (99%)
     Duplicates: 0 (0%)
          Fakes: 14 (1%)
--Among the evicted tracks:
          Valid: 20 (1%) (not in selected tracks)
     Duplicates: 2137 (61%)
          Fakes: 1326 (38%)

===== Performance metrics for ambiguity resolution (check v2) =====
          Valid: 1766 (99%)
     Duplicates: 0 (0%)
          Fakes: 14 (1%)
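As a sanity check on the numbers above: for finding and fitting, 1786 + 2170 + 1340 = 5296, which matches the number of found tracks, and the selected-track counts 1766 + 0 + 14 sum to 1780, matching the ambiguity-free track count. Each percentage therefore appears to be taken relative to its own block's total. A minimal sketch of how such a block could be printed (the `print_metrics` helper is hypothetical, not the PR's actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <iomanip>
#include <iostream>

// Hypothetical helper (not the PR's actual code): print one metrics block
// in the format shown above. Percentages are rounded fractions of the
// block's total track count.
void print_metrics(const char* title, std::size_t valid,
                   std::size_t duplicates, std::size_t fakes) {
    const double total = static_cast<double>(valid + duplicates + fakes);
    auto pct = [total](std::size_t n) {
        return total > 0. ? std::lround(100. * n / total) : 0L;
    };
    std::cout << "===== Performance metrics for " << title << " =====\n"
              << std::setw(16) << "Valid:" << ' ' << valid
              << " (" << pct(valid) << "%)\n"
              << std::setw(16) << "Duplicates:" << ' ' << duplicates
              << " (" << pct(duplicates) << "%)\n"
              << std::setw(16) << "Fakes:" << ' ' << fakes
              << " (" << pct(fakes) << "%)\n";
}

int main() {
    // The finding numbers above reproduce the 34% / 41% / 25% split.
    print_metrics("finding", 1786, 2170, 1340);
    return 0;
}
```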

I used the following command, with the added --print-performance flag:

./bin/traccc_seeding_example --input-directory=detray_simulation/toy_detector/n_particles_2000/ --detector-file=toy_detector_geometry.json --material-file=toy_detector_homogeneous_material.json --grid-file=toy_detector_surface_grids.json --input-event=1 --track-candidates-range=3:30 --constraint-step-size-mm=1000 --check-performance --print-performance

@krasznaa (Member) commented May 7, 2024

🤔 Over the next few days I intend to put together a proposal for how I believe all of this performance measurement code should be organized, as I'm really not super happy with how it is set up at the moment.

At that point, in a couple of days, let's discuss here how your development would fit into that "new landscape". 😉

@SylvainJoube (Contributor, Author) commented:

Thanks for your feedback, Attila. Okay, I'll wait! I'm very curious about your proposal; seeing your code is always a nice way for me to discover how things should be organized and implemented! 🦊

See you here in a couple of days then :) I’ll be watching the open PRs too.
