Mixing of .in and .interaction samples #263
@austrin @jsannemo @simonlindholm @ghamerly Thoughts?
To clarify, the question is whether there can be a problem where some samples have .in files and others have .interaction files. This is in fact specified in the "Interactive Problems" section, but it took far longer than it should have to find and figure out. The relevant sections could clearly use some clarification. (Or we might just be too tired?)
Some more comments. (See also #265.)

Which files exist: …

For brevity I'll refer to these as …

Restriction: …

What is shown in the statement: For each sample test case (defined by either a .in or a .interaction file), …

We should probably restrict this to require consistency, so that either: …

Custom output validation problems may override the default …. Interactive problems are allowed (but not required) to have a ….

What is available to contestants as download: for default & custom validation, give …

TODO: for interactive problems with …

Fake interactive problems / generated input problems: Just to repeat, it's possible to have problems with on-the-fly generated input, by specifying them as an interactive-type problem but then not providing interaction with ….
Do we have examples of this actually being used in practice? There's never a case where you actually strictly need this, right? I can say I've never felt the need, and it feels like it just risks providing worse UX (e.g. having judges show confusing UI and not being able to show failing test case input/output in the same way as for other non-interactive problems) and adding a lot of bug potential around stdin/stdout fd closures, EOF checking and other termination behavior.
FWIW, https://github.com/zehnsechs/egoi-2024-testdata/tree/main/day1/gardendecorations/data/sample had this: we wanted a sample test case that Kattis would actually run, while also splitting the .interaction file into three parts since it was a multi-run problem. I'm not sure how we're envisioning samples to work for multi-run problems.
multi-pass will use … (Was that the answer you were looking for?)
Ah, thanks, that makes sense.
Yes, we had multiple such 'generated input' problems for BAPC, in particular where we guarantee that the input is random, and hence regenerated on each re-submission. |
Reopening, since there are still some unresolved discussions in #291. I think one thing that also isn't really specified is whether, for custom output validation and interactive problems, we require that for each test case in …
Interesting. Is the problem package available for any of them? I do feel like that kind of setup is inadvisable, and it's better to keep the test data static while still giving a guarantee that it was generated at random. |
See problem …. With static random test data, there is always the risk that some specific solution hits an annoying edge case, which is avoided by regenerating the data each time.
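A minimal sketch of what such on-the-fly generation might look like (illustrative only; the function name and instance shape are assumptions, not part of any problem package): with no fixed seed each run produces a fresh instance, which is the "regenerated on each re-submission" behaviour, while a fixed seed reproduces one instance for debugging.

```python
import random

def generate_case(seed=None):
    """Generate one random test case as an input-file string.

    seed=None draws a fresh instance every call (so each rejudge
    sees different data); a fixed seed makes the case reproducible.
    """
    rng = random.Random(seed)
    n = rng.randint(1, 10)
    values = [rng.randint(1, 100) for _ in range(n)]
    return f"{n}\n" + " ".join(map(str, values)) + "\n"
```

The judge would pipe the generated string to the submission instead of reading a static .in file from the package.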
Would it be reasonable for a problem being interactive to mean that only .interaction samples are allowed (no .in)?
If not, should you ever be allowed to mix .in and .interaction? Has this ever occurred?