#48in24: Add analyzer feedback for Knapsack #2715
Comments
Hello @sanderploegsma. Happy New Year. I have some time in the next few weeks to contribute. Please let me know if this is still worth doing, or feel free to suggest a different issue. Thanks 🙏
Hi @sougat818, I've assigned this to you so you can work on this one. As mentioned in @sanderploegsma's comment above, the first step is to decide what the analyzer should check and provide feedback for. Do you have thoughts on what the analyzer could check for?
Hey @kahgoh, thanks. I had a look at the analysers for existing exercises.

Essential:

There are existing community solutions using recursion with memoization. I think these should be made into a third approach in the exercise. But if the goal of this exercise is to go towards Dynamic Programming, then we can prevent it. We can move this to a different conversation as well, since this would potentially change the track.

There are a few solutions using Java Streams as well.

Actionable:

Celebratory:
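For reference, the recursion-with-memoization approach mentioned above can be sketched roughly like this. This is only an illustrative sketch; the class and method names are hypothetical and not taken from the exercise stub or any community solution.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of recursion with memoization for Knapsack.
// Names are hypothetical, not the actual exercise stub.
class Knapsack {
    private final int[] weights;
    private final int[] values;
    private final Map<Long, Integer> memo = new HashMap<>();

    Knapsack(int[] weights, int[] values) {
        this.weights = weights;
        this.values = values;
    }

    int maximumValue(int capacity) {
        return best(0, capacity);
    }

    private int best(int index, int remaining) {
        if (index == weights.length || remaining == 0) {
            return 0;
        }
        // Encode the (index, remaining) pair as a single map key.
        long key = (long) index << 32 | remaining;
        Integer cached = memo.get(key);
        if (cached != null) {
            return cached;
        }
        int result = best(index + 1, remaining); // skip this item
        if (weights[index] <= remaining) {       // or take it, if it fits
            result = Math.max(result,
                    values[index] + best(index + 1, remaining - weights[index]));
        }
        memo.put(key, result);
        return result;
    }
}
```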
Thanks for looking into it and the suggestions! I agree with AvoidHardCodedTestCases. I wasn't quite sure how NoBuiltInMethods would work here. Would it check that at least one built-in method is used? I don't think that would be the way to go, as there could be a valid approach to solving this without using a built-in method. However, it may be useful to nudge students towards a built-in method when there is a better way of doing something in Java.
To be honest, I don't think we should be nudging them towards Dynamic Programming. Knapsack is classed as a practice exercise: its goal is to give students a chance to apply their Java knowledge to solve the problem. They don't have to use the Dynamic Programming approach; it is up to them to explore different ways of solving it, and we have the "Dig Deeper" section that focuses more on general approaches. The analyzer tends to check for solutions that try to "cheat" (like hard-coded test cases, or using built-ins in exercises where you're not meant to), or to suggest better or more idiomatic Java practices. For example, the ShouldUseStreamFilterAndCount check in hamming is about how the student does something in Java.
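For context, the filter-and-count stream idiom that the hamming check nudges towards looks something like this. This is a minimal sketch of a Hamming distance solution, not the track's actual example solution.

```java
import java.util.stream.IntStream;

// Minimal sketch of the stream filter-and-count idiom; not the
// track's actual example solution for hamming.
class Hamming {
    static long distance(String left, String right) {
        if (left.length() != right.length()) {
            throw new IllegalArgumentException("strands must be of equal length");
        }
        return IntStream.range(0, left.length())
                .filter(i -> left.charAt(i) != right.charAt(i))
                .count();
    }
}
```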
Hey @kahgoh, thanks for the reply. In other analysers, NoBuiltInMethods is mostly a simple check based on import statements. I am proposing we prevent usage of

and probably even

These abstract away many steps of the computation, which might obscure the foundational algorithmic learning goals for students. However, they are perfectly fine if we are comfortable with students leveraging any Java knowledge they have to solve the problem. My comment about ShouldUseStreamFilterAndCount was about how we could implement the analyser if we wanted to nudge towards Dynamic Programming instead of recursion. However, if approaches are not enforced in general, then please ignore it.
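To illustrate the idea, an import-statement-based check in the spirit of NoBuiltInMethods could be sketched as below. This is a toy sketch, not the real Java analyzer's API, and since which imports (if any) to disallow is exactly what is being discussed here, the disallowed set is left as a parameter rather than a hard-coded list.

```java
import java.util.List;
import java.util.Set;

// Toy sketch of an import-based check, in the spirit of
// NoBuiltInMethods as described above. Not the actual analyzer API.
class ImportCheck {
    // Returns the imported names in `source` that start with any of
    // the disallowed prefixes (the prefix set is up for discussion).
    static List<String> flaggedImports(String source, Set<String> disallowedPrefixes) {
        return source.lines()
                .map(String::trim)
                .filter(line -> line.startsWith("import "))
                .map(line -> line.substring("import ".length()).replace(";", "").trim())
                .filter(name -> disallowedPrefixes.stream().anyMatch(name::startsWith))
                .toList();
    }
}
```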
I think it's fine to let students use any Java knowledge they want here, because there aren't any algorithmic learning goals in this exercise. The track's main goal is to help students become fluent in Java, not algorithms. Other than hard-coded solutions, it seems there currently isn't much else the analyzer could look for in this exercise (that's ok 😉). Did you want to continue seeing if there is something else it could look for in Knapsack? Alternatively, we could just do the
The Knapsack exercise on the Java track is being featured in the #48in24 challenge, which means that we expect an influx of students attempting to solve it during that week.
It would be nice if the exercise contained some more content by that time.
One example of this is to add analyzer feedback for the exercise, to provide automated feedback on submitted solutions.
Adding analyzer feedback is done in three steps:
1. Create a .meta/design.md file for the exercise explaining what type of feedback the analyzer should provide (example for the Leap exercise).