Commit aa8ae1f

Updated Dragana's summary
1 parent 01740ee commit aa8ae1f

File tree

1 file changed: 9 additions, 0 deletions

content/post/tutorials/index.md

Lines changed: 9 additions & 0 deletions
@@ -80,6 +80,15 @@ In this talk, I will present our work on automated grading of functional program
- {{< spoiler text="Read more" >}}

The talk focuses on autograding programming assignments in the context of a large (about 400 students) second-year undergraduate *Software Construction* course held at EPFL. Given the size of such courses nowadays, some automation of assignment verification is necessary, and unit testing is one candidate. However, unit tests sometimes prove inaccurate when cases are missed, and they ignore the source code the student provides, considering only input/output relations. The latter issue leads to impersonal feedback from which the student cannot learn much. The solution discussed in this tutorial proposes to use formal methods (in particular, program equivalence) in addition to unit testing for verifying programming assignments written in a subset of the Scala functional programming language.

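As a hypothetical illustration (ours, not taken from the talk) of what equivalence checking adds over pure input/output testing, the two Scala functions below compute the same result in structurally different ways; an equivalence check against the reference would accept the second as correct, however it is written.

```scala
// Hypothetical reference solution: sum a list by structural recursion.
def sumRef(xs: List[Int]): Int = xs match {
  case Nil     => 0
  case y :: ys => y + sumRef(ys)
}

// Hypothetical student submission: the same function, written
// tail-recursively with an accumulator. It looks quite different,
// yet an equivalence check against sumRef accepts it.
def sumStudent(xs: List[Int]): Int = {
  def loop(rest: List[Int], acc: Int): Int = rest match {
    case Nil     => acc
    case y :: ys => loop(ys, acc + y)
  }
  loop(xs, 0)
}
```
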
Dr. Milovancevic and her colleagues implemented this solution within *Stainless*, a locally developed automated verification system that is used in several programming courses at EPFL and normally works with pre- and post-conditions. That style of verification, however, requires some specification skill from its users. The solution discussed in the talk needs no such knowledge, only the source code the student provides: it works as a push-button approach, which is a remarkable usability improvement.

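For readers unfamiliar with this style of verification, here is a minimal sketch of the pre-/post-condition contracts that Stainless checks; the example itself is ours, not from the talk. `require` states the precondition, `ensuring` the postcondition, and the verifier proves the latter for every input satisfying the former.

```scala
import stainless.lang._

// A minimal contract-style specification: `require` is the
// precondition, `ensuring` the postcondition. The verifier proves the
// postcondition for every input satisfying the precondition. Writing
// such specifications is exactly the skill that the push-button
// equivalence approach no longer demands of students.
def fact(n: BigInt): BigInt = {
  require(n >= 0)            // precondition: non-negative argument
  if (n == 0) BigInt(1)
  else n * fact(n - 1)
} ensuring (res => res >= 1) // postcondition: the factorial is positive
```
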
The proposed solution returns a congratulatory message if the student's code is proved equivalent to the reference solution. When the submission is incorrect, it can instead provide a counterexample: an input on which the student's code is shown to produce the wrong result. The student receives such feedback within minutes and can resubmit their code. Interestingly, once all solutions are in, the autograder generates a set of clusters for the teacher: one cluster for the reference solution and all submissions equivalent to it, and as many further clusters as it finds among the remaining submissions. In this way, alternative correct solutions can be discovered, and incorrect solutions can be recognised as equivalent to other incorrect ones.

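To make this feedback loop concrete, here is a hypothetical incorrect submission for the list-sum assignment sketched above, together with the kind of counterexample an equivalence checker could report (again our illustration, not the talk's).

```scala
// Hypothetical incorrect submission for the list-sum assignment:
// the student mishandles singleton lists.
def sumWrong(xs: List[Int]): Int = xs match {
  case Nil      => 0
  case _ :: Nil => 0 // bug: a singleton list should sum to its element
  case y :: ys  => y + sumWrong(ys)
}

// Comparing sumWrong against the reference sumRef defined earlier, an
// equivalence checker can report a distinguishing input such as
// xs = List(1): sumRef(List(1)) == 1 but sumWrong(List(1)) == 0.
// That input is exactly the counterexample returned to the student.
```
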
This is the beauty of the proposal: code can be written in many different ways, and many of them can be deemed acceptable and verified as such. Better still, if many solutions share the same type of error, the teacher can address it systematically in the following lecture, or even warn students against it the following year.

This proposal has already been integrated into the *Software Construction* course as an experiment running for two years. We thus have the tools to examine the diversity and creativity of programming solutions, and much more innovation can be drawn from the lessons learnt, for instance checking for students who submit identical solutions.

{{< /spoiler >}}
