We need another subevent example that demonstrates running ML inference (e.g. with torch) on a GPU. This may be fairly similar to the existing CUDA example. The example should combine values from at least two different objects obtained from the JEvent and use them as inputs to the AI/ML model. The arrow that actually runs inference on the GPU should bundle up to N sets of inputs (grabbing whatever is available on the queue) to optimize throughput. Since this is likely to be the mode used in practice, it should be demonstrated in the example.
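The batching behavior described above — draining up to N sets of inputs per inference call, taking whatever is currently queued — can be sketched independently of JANA2 and torch. The following is a minimal Python illustration of the pattern only; `drain_up_to` and `run_inference` are hypothetical names, and `run_inference` is a stand-in for the real batched GPU model call, not actual torch or JANA2 API.

```python
import queue

def drain_up_to(q, n):
    """Grab up to n inputs, blocking only for the first one.

    Any further items already sitting on the queue are taken without
    waiting, so the batch size adapts to what is available.
    """
    batch = [q.get()]  # block until at least one input is ready
    while len(batch) < n:
        try:
            batch.append(q.get_nowait())  # take what's queued, don't wait
        except queue.Empty:
            break
    return batch

def run_inference(batch):
    """Hypothetical stand-in for one batched GPU forward pass."""
    return [sum(inputs) for inputs in batch]

# Each queue entry combines values from two different objects pulled
# from the event, e.g. (hit_energy, track_momentum).
q = queue.Queue()
for pair in [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]:
    q.put(pair)

batch = drain_up_to(q, n=8)   # grabs all 3 available entries in one batch
outputs = run_inference(batch)
print(outputs)  # [3.0, 7.0, 11.0]
```

The key design point the example should exercise is that the arrow never waits for a full batch of N: it blocks only until one input exists, then opportunistically fills the batch, so throughput improves under load without adding latency when the queue is nearly empty.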
Much more detailed documentation is needed on designing arrow topologies to handle subevents. The existing examples, while brief, are inherently somewhat complex and have virtually no comments. Adding an "Advanced Topics" section to the Tutorial website that describes arrow topologies would also be helpful.