Problems in pascals-triangle test #929

Open
toy opened this issue Nov 14, 2023 · 12 comments
Comments

@toy
Contributor

toy commented Nov 14, 2023

  1. From the description I expected the tests to check the triangle itself; I wouldn't expect the tests to require all rows to be the same length and to end with zeroes. This rules out several solutions I imagined while solving:
    • reserving exactly the needed space for every row (for me this failed on the ten-rows test)
    • an even hackier solution that exploits the fact that rows always start and end with 1: a single array containing {1, 1, 2, 1, 3, 3, 1, 4, 6, 4, 1} plus an array of pointers into it at the right positions should also be a valid solution.
  2. Is there really a need to free in tearDown? Leaked memory is freed anyway when all tests finish (or am I missing something?), but having expectations in tearDown about the structure used may cause a segmentation fault instead of a list of failing tests if free is called on the wrong pointer. I reserved space for all rows in one malloc to save on calls and reduce memory usage, and had trouble understanding why the case with 0 rows was failing and why free was causing a segmentation fault. This was introduced in Update tests_pascals_triangle.c: Fix memory leaks #755, so it's probably best to ping @bobahop.
  3. Why does the zero-rows case expect { { 0 } } as the result? Shouldn't it expect 0 rows, or even accept NULL?
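The single-malloc layout mentioned in point 2 can be sketched roughly as below. This is a minimal illustration; the helper name create_packed_triangle is made up for this sketch and is not the exercise's actual interface.

```c
#include <stdlib.h>

/* Sketch of the "one malloc" layout described above: all rows are stored
 * back to back in one block, and a pointer array at the front indexes the
 * start of each row. */
static unsigned **create_packed_triangle(size_t rows)
{
    size_t cells = rows * (rows + 1) / 2; /* 1 + 2 + ... + rows */
    unsigned **row_ptrs =
        malloc(rows * sizeof *row_ptrs + cells * sizeof **row_ptrs);
    if (row_ptrs == NULL)
        return NULL;

    unsigned *cell = (unsigned *)(row_ptrs + rows);
    for (size_t r = 0; r < rows; r++) {
        row_ptrs[r] = cell;
        cell += r + 1;
        row_ptrs[r][0] = row_ptrs[r][r] = 1;
        for (size_t c = 1; c < r; c++)
            row_ptrs[r][c] = row_ptrs[r - 1][c - 1] + row_ptrs[r - 1][c];
    }
    return row_ptrs; /* a single free(row_ptrs) releases everything */
}
```

Because the rows and the pointer table share one allocation, only a single free is needed; but nothing in the exercise description tells the student which allocation scheme to pick, which is the crux of point 2.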
@ryanplusplus
Member

> 2. Is there really a need to free in `tearDown`? Leaked memory should be freed anyway after all tests finish (or am I missing something?), but having expectation about used structure in `tearDown` may cause a segmentation fault instead of a list of failing tests in case `free` is called on wrong pointer. I went with reserving space for all rows in one `malloc` to save on calls and reduce memory usage and had problems understanding that for me the case with 0 rows was failing and causing `free` to cause segmentation fault. It was introduced in [Update tests_pascals_triangle.c:  Fix memory leaks #755](https://github.com/exercism/c/pull/755) and probably better to ping @bobahop

The memory will be freed when the process exits, but it means we will fail memory checks using ASan, Valgrind, etc. If we want to remove the current restriction, we should require the implementation to provide a function like destroy_triangle() that performs the frees internally, instead of assuming in tearDown that we know how to deallocate.

@ryanplusplus
Member

> The memory will be freed when the process exits, but it means that we will fail memory checks using ASan, Valgrind, etc. If we want to remove the current restriction, we should require the implementation to provide a function like destroy_triangle() that does the frees internally instead of assuming that we know how to dealloc in tearDown.

Sorry, I meant that there's already a free_triangle, and the cleanup in tearDown was added to avoid memory failures when tests failed. I agree with you that this is unnecessarily restrictive.
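For context, a cleanup function of the kind being discussed typically looks like the sketch below. This is hedged: it assumes the common layout where the row-pointer array and each row are allocated separately, and the real free_triangle in the exercise may differ.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of a cleanup function for the row-per-allocation layout.
 * If a solution instead packs everything into a single block (as in the
 * single-malloc approach described in this issue), the per-row free()
 * calls below would be handed interior pointers -- exactly the kind of
 * mismatch that turns a failing test into a segfault. */
static void free_triangle(uint8_t **triangle, size_t rows)
{
    if (triangle == NULL)
        return;
    for (size_t r = 0; r < rows; r++)
        free(triangle[r]);
    free(triangle);
}
```

Calling something like this from tearDown only works if every solution uses exactly this allocation scheme, which is why delegating cleanup to a student-provided function is less restrictive.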

@toy
Contributor Author

toy commented Nov 14, 2023

@ryanplusplus You mean there are memory checks as part of testing the solution? If so, I think it would be really helpful to see their output as part of the test run, or as a check when submitting the solution, to learn about the need to explicitly free memory.

@wolf99
Contributor

wolf99 commented Nov 14, 2023

I think the make memcheck command is already documented in a few places in this repository.
Most recently it was added to exercises/shared/.doc/tests.md (if I remember the path correctly), which should be showing on the website.

@wolf99
Contributor

wolf99 commented Nov 14, 2023

For the zeros, it's a toss-up between those and NULL.
I think 0 is slightly more useful, as it may be better for printing the triangle visually.
Regarding how we represent a triangle in an array (i.e. using either 0 or some other value to create blank space), I think this follows the problem spec, but I haven't checked in some time.
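To make the padding concrete, here is a minimal sketch (an illustration, not the exercise's required interface; the helper name is made up) of building the zero-padded rectangular grid the tests appear to expect. calloc supplies the zero padding for free:

```c
#include <stdlib.h>

/* Builds an N x N grid where row r holds its r + 1 Pascal coefficients
 * followed by zeros.  Reading past the end of a shorter logical row lands
 * on a zero from calloc, which keeps the recurrence branch-free. */
static unsigned *rectangular_triangle(size_t rows)
{
    unsigned *grid = calloc(rows * rows, sizeof *grid);
    if (grid == NULL)
        return NULL;
    for (size_t r = 0; r < rows; r++) {
        grid[r * rows + 0] = 1;
        for (size_t c = 1; c <= r; c++)
            grid[r * rows + c] =
                grid[(r - 1) * rows + c - 1] + grid[(r - 1) * rows + c];
    }
    return grid;
}
```

Row r then contains its r + 1 coefficients followed by rows - r - 1 zeros, which is the layout that makes every row print at the same width.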

@toy
Contributor Author

toy commented Nov 14, 2023

> I think the make memcheck command is already documented in a few places in this repository. Most recently it was added to exercises/shared/.doc/tests.md (if I remember the path correctly) which should be showing on the website.

That documentation is for doing exercises locally. What about the web editor?

@wolf99
Contributor

wolf99 commented Nov 14, 2023

Running other make targets via the web UI will depend on the test runner in https://github.com/exercism/c-test-runner.
I'm not sure whether that would be viable given the expected output-formatting needs.
Perhaps @ErikSchierboom could indicate whether memory-analyser output details could reasonably be part of the output shown in the web UI?

@ErikSchierboom
Member

> if memory analyser output details could reasonably be part of the output shown in the web UI?

It depends. Do we need to run things twice? Will it be slow to run?

@wolf99
Contributor

wolf99 commented Nov 17, 2023

I think we would not need to run things twice.
The make command in the runner's run.sh would change to make memcheck.

Generally speaking (we may need to run tests to prove it later), the difference in execution time should not be large. If there is a big issue with a given submission, I'd guess it would run into the allowed run-time limits on the Docker infrastructure.
It would be good, though, to have a view of metrics such as execution-time stats for the track. Possibly something that could be exposed through logging from the runner to any observability stack that Exercism infra has (e.g. Elastic and Grafana). Does something like that exist already? Possibly we could then do some A/B testing if this became an issue.

Some complexity comes when processing the results to present them to the user.
Using the make memcheck command (without the test runner), here is the result of breaking the freeing of allocated memory in the pascals-triangle exercise:

```
./bin/run-tests -p -e pascals-triangle
Copying practice exercises pascals-triangle
Running tests for pascals_triangle
Compiling memcheck
test_pascals_triangle.c:174:test_no_rows:PASS

-----------------------
1 Tests 0 Failures 0 Ignored
OK

=================================================================
==9727==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 8 byte(s) in 1 object(s) allocated from:
    #0 0x7fe189457a57 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
    #1 0x560ca110e180 in create_triangle pascals_triangle.c:17
    #2 0x560ca110e773 in test_no_rows test_pascals_triangle.c:46
    #3 0x560ca110df17 in UnityDefaultTestRun test-framework/unity.c:1837
    #4 0x560ca110e868 in main test_pascals_triangle.c:174
    #5 0x7fe1890bdd8f  (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)

SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
make: *** [makefile:28: memcheck] Error 1
```

As you can see, the test itself passes (the data returned from the function under test matches the expected data), but the memory analysis fails because the memory that data is stored in is not freed before the program completes execution. Note that the analysis output can become a lot more complex depending on the exercise and the error, and not all of it can be attributed to a single test (just Google "gcc asan" and look at the example output images).

Changing this output into what is presented to the user is done by the process_results.py script.
That script would need some additions to be able to parse the analysis allocation trace, with some testing to check how well that works across more complex exercises and errors. Again, this is possibly something that could be done by exposing some metrics from the test runner in the track, which might require a per-exercise breakdown of metrics on execution result, result type, execution time, etc.
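As a rough illustration of the extra parsing involved (the real process_results.py is Python; this hypothetical helper just shows the shape of the match against the ASan SUMMARY line above):

```c
#include <stdio.h>

/* Hypothetical helper: extracts the leaked byte count from an ASan
 * summary line like
 *   SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
 * Returns the byte count, or -1 if the line is not a leak summary. */
static long parse_asan_summary(const char *line)
{
    long bytes;
    long allocs;
    if (sscanf(line,
               "SUMMARY: AddressSanitizer: %ld byte(s) leaked in %ld allocation(s).",
               &bytes, &allocs) == 2)
        return bytes;
    return -1;
}
```

The per-frame allocation trace (#0, #1, ...) would need similar line-by-line matching, and its exact format varies between sanitizer versions, which is where the testing across exercises and errors comes in.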

There are already past discussions on the test-runner repo in exercism/c-test-runner#10 and exercism/c-test-runner#11. They raise questions about whether this should instead go in the analyzer, whether it is too complex for students with less experience, and whether it is supported on the infrastructure.

@ErikSchierboom
Member

> Possibly something that could be exposed through logging from the runner to any observability stack that Exercism infra has (e.g. Elastic and Grafana), does something like that exist already? Possibly we could then do some A/B testing if this was an issue?

We don't have a direct Grafana or Elastic board, sorry. If you run things locally, what kind of perf hit are you taking?

@wolf99
Contributor

wolf99 commented Nov 17, 2023

Here's a quick test on my local machine using the run-tests.sh within this track repo.
The first run uses make memcheck, the second uses make test:

```
~/repos/c[main]❯ time ./bin/run-tests -e pascals-triangle

real    0m0.341s
user    0m0.040s
sys     0m0.004s

~/repos/c[main]❯ time ./bin/run-tests -e pascals-triangle

real    0m0.051s
user    0m0.003s
sys     0m0.006s
```

The memcheck build is slower, but we're talking milliseconds.

@ErikSchierboom
Member

Yeah, 250ms is probably fine!
