Maybe use a self-hosted runner to perform the tests and keep the files there, so as to minimise the number of journal files in the repo. This would, however, make it harder to debug failing tests locally.
Only include the journal files from new PRs that make the tests fail. Not really a fan of this, as it would only cover the current tests and doesn't account for any future test cases.
It’s a difficult one. Generating test events at runtime isn’t feasible for most use-cases, as we want to test the lib against in-game events that actually occurred.
Removing test data risks no longer covering a rare or specific edge-case, and it removes data points for future tests.
What’s the goal of optimising? Do you want to lower the number of test journals in the repo, or improve CI speeds?
An idea might be to write a script that removes very similar journal entries. For example, we don’t need 10k Touchdown entries; I’d almost say we only need one, to test against the TouchdownEvent struct. Whereas with ScanOrganic, the more the merrier. Currently, though, we pump all the events of a journal/play-session into state and then test, so removing events from a journal/play-session would probably break things in state. For example, the exobiology tests need the Touchdown entries to track which planet the commander was on at the time of the ScanOrganic entry.
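That dedupe script could be sketched roughly like this, assuming the journals are JSON-lines files where each entry has an `event` field. The `KEEP_PER_EVENT` cap and the `ALWAYS_KEEP` whitelist are hypothetical knobs, not anything the repo defines:

```python
import json
from collections import Counter
from pathlib import Path

# Hypothetical cap on how many samples of each event type to keep.
KEEP_PER_EVENT = 5

# Events that state tracking depends on (per the exobiology example)
# are never dropped. This set is a hypothetical placeholder.
ALWAYS_KEEP = {"Touchdown"}

def dedupe_journal(path: Path) -> list[str]:
    """Return the journal's lines with near-duplicate entries removed:
    at most KEEP_PER_EVENT lines per event type, except for events in
    ALWAYS_KEEP, which are kept in full."""
    seen: Counter[str] = Counter()
    kept: list[str] = []
    for line in path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        event = json.loads(line).get("event", "")
        seen[event] += 1
        if event in ALWAYS_KEEP or seen[event] <= KEEP_PER_EVENT:
            kept.append(line)
    return kept
```

A per-event cap rather than "keep exactly one" leaves some variety in the payloads, and the whitelist exists precisely because of the state caveat mentioned above.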
Another possible solution is having two test-sets: a wide one and a deep one. Use the wide test-set to test event structs, and the deep one to test state. If the goal is to improve CI times, we could check what a commit changed: if it touches state, run the deep tests; if it’s just structs, run the wide tests. This would not decrease the number of test journals in the repo, though.
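Picking the suite from the diff could be as small as a helper like this; the `src/state/` path prefix is a hypothetical stand-in for wherever state code actually lives in the repo:

```python
def pick_suite(changed_paths: list[str]) -> str:
    """Choose which test-set to run for a commit.

    'deep' pumps whole journals through state; 'wide' only checks
    individual event structs. The path prefix below is hypothetical.
    """
    if any(p.startswith("src/state/") for p in changed_paths):
        return "deep"
    return "wide"
```

In CI, the input could come from something like `git diff --name-only origin/main...HEAD`.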
Well, with the contributions you and @batshalregmi have made to the test suite, we already have around 700 MB of files, which is getting close to the 1 GB that GitHub recommends as a best-practice limit. It's not really an issue yet, but to get ahead of it a bit, it might be worth optimising now so that there is a strategy in place.
A brain dump of some options: