[6.0 post-mortem] Follow-up of #2191
We are adding more and more interactive scenarios to the suite, and each one introduces a new "benchmark" entry in the loadgen mlperf.conf (as with gpt-oss-120b-interactive) instead of being modeled as an optional scenario.
E.g.:
```
deepseek-r1-interactive.Server.target_latency = 0
```
This is inconsistent with the submission checker, where interactive is treated as a scenario rather than a separate benchmark. We should unify the logic.
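One possible direction, sketched here purely as an illustration (the `Interactive` scenario key below is a hypothetical naming, not something mlperf.conf currently supports): keep a single benchmark entry and express the interactive variant as a scenario qualifier, matching how the submission checker already views it.

```
# Current: interactive is baked into the benchmark name
deepseek-r1-interactive.Server.target_latency = 0

# Hypothetical unified form: interactive as a scenario of the base benchmark
deepseek-r1.Interactive.target_latency = 0
```

Whichever direction is chosen, the key point is that mlperf.conf and the submission checker should agree on whether "interactive" is part of the benchmark name or part of the scenario.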
cc: @v-shobhit @pgmpablo157321