kazk commented on Apr 22, 2018
Unfortunately, I don't think we can do anything about this. The same thing will happen with C.

We use Criterion's hooks to report in Codewars format on certain events, but output from test cases is not handled there (the tests write it directly on their own).

Criterion has logging functions like `criterion_info`, but we can't use them because the test needs to be invoked with `-q` to disable its normal output, and this flag also disables logging.

The new runner added support for `.description` for better test case names, but it doesn't help in this case.

Changed the title: "NASM: stdout output are not synchronized" → "Criterion (C/NASM): stdout output are not synchronized"

nomennescio commented on Jul 3, 2019
This is a big setback for C/NASM katas, as these usually involve pointer manipulation, which can easily lead to non-catchable exceptions raised by the runtime environment (segmentation faults, etc.). For a user who cannot adapt the test code (except perhaps the code in the Sample Tests), this makes it very difficult to hunt down errors when debugging.
Any suggestions on how to improve this situation, given the current test runners?
nomennescio commented on Jul 15, 2019
@kazk would it help to change the option in the Criterion framework? I could help make a change in the framework if it gets accepted, but only if it would be incorporated into Codewars. Or maybe the verbosity flag could be used instead of the `-q` flag?
kazk commented on Jul 15, 2019
The first problem is that the version we currently use doesn't have those macros (#620).
We're stuck on 2.2.x because 2.3.x was significantly slower (existing kata time out) with the `--full-stats` flag, which we need to report passed tests. I'll check whether the latest 2.3.3 changed anything. I'll also experiment with options.
kazk commented on Sep 4, 2019
So I tried the latest Criterion 2.3.3, and the performance issue seems to be fixed. I think the issue I ran into was Snaipe/Criterion#248.

I'll test against existing kata and see; we can update to 2.3.3 if it is fixed. This version has macros like `cr_log_info`. Unfortunately, I tried running tests without `-q` and found that the output from these macros goes to stderr, so it doesn't help with this issue. I don't think there's any option to use stdout for logging, and even if there were, I doubt it would be synchronized, because the Codewars output is produced using hooks and the log output isn't available there.

One last possibility I can think of is configuring the internal test runner. Maybe implementing a custom logger and using that instead of the hooks would work.
I agree it can be super difficult, but I don't think having the output synchronized will help if the program crashes that badly during a test. Criterion does run each test in an isolated process, which protects against this to a certain extent. I don't know how far you can go in languages like C and NASM.
nomennescio commented on Sep 3, 2021
Encountered again a use case where `cr_log_info` (or a similar custom function) would be very helpful to the kata author/translator. The Criterion framework is not bad at all, and quite comfortable, except for its fixed tests. In some situations you want to repeat a test, calling a function with an argument (e.g. testing against a set of input integers) to execute multiple subtests per value. It would be very helpful to be able to group the subtests for one argument, showing the value to the user.