This repository teaches you how to write exercises for ACCESS. It also serves as a mock for testing ACCESS and related tools.
Additionally, it contains a common test harness for writing Python exercises. See the separate README for how that works.
ACCESS uses files and folders to represent a course. ACCESS can serve any number of courses, and each course is managed through a Git repository such as this one. A course contains assignments, and an assignment contains tasks. Courses, assignments and tasks are configured through `config.toml` files in their respective root directories.
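To illustrate the layout (the directory names below are made up; only the nesting and the `config.toml` files matter), a course repository roughly looks like this:

```
course-root/
├── config.toml              # course configuration
├── assignment-1/
│   ├── config.toml          # assignment configuration
│   ├── task-hello-world/
│   │   └── config.toml      # task configuration
│   └── task-fizzbuzz/
│       └── config.toml
└── assignment-2/
    └── ...
```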
This is crucially important to understand: every course, assignment and task configuration specifies a slug. A slug should be a simple string without spaces. It is used to uniquely identify the entity in ACCESS and appears in the URL when the content is displayed in ACCESS.
Changing the slug of an existing assignment or task means that this assignment/task will be disabled in ACCESS and a new one created in its place! All student submissions for the old one will not be visible in the newly created assignment/task.
As such, slugs should not be changed if at all possible. In principle, a slug could be changed back to its original value and the old data would become visible once again, but this kind of confusion should generally be avoided.
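As a minimal, illustrative excerpt (the surrounding keys are omitted here; see the annotated `config.toml` files listed below for the full picture), a task might declare its slug like this:

```toml
# Illustrative excerpt of a task config.toml.
# The slug uniquely identifies this task in ACCESS and in URLs;
# changing it later effectively replaces the task with a new one.
slug = "hello-world"
```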
Note that the directory names of assignments and tasks play no part in this. This means that if you wish to re-organize assignments and tasks in the Git repository, that's no problem. You can move and rename assignment and task directories with no impact on ACCESS, as long as you update the parent's folder references in `config.toml` and as long as you do not change any slugs.
Study the following three `config.toml` files in this repository for in-depth commentary on what is going on.
- Course `config.toml`
- Assignment `config.toml`
- Task `config.toml`
Tasks in ACCESS must specify at least a `grade_command`, used to grade the student's code in ACCESS. Optional `run_command` and `test_command` entries may be provided so that students can simply run their code or write their own tests.
When ACCESS executes a command, it copies all visible files specified under `[files]` into a Docker container (and also the grading files when running the `grade_command`). ACCESS will also copy any files specified in the course's `[global_files.grading]` table.
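Putting this together, the relevant parts of a task `config.toml` might look roughly like the following sketch. The command strings and the key names under `[files]` are assumptions made for illustration; the annotated task `config.toml` in this repository shows the real schema.

```toml
# Illustrative task config excerpt (commands and [files] keys are assumptions).
run_command   = "python script.py"         # optional: lets students run their code
test_command  = "python tests.py"          # optional: lets students run their own tests
grade_command = "python grading/tests.py"  # required: must produce grade_results.json

[files]
# Visible files are copied into the Docker container for every command;
# grading files are additionally copied when grade_command runs.
visible = ["script.py", "tests.py"]
grading = ["grading/tests.py"]
```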
Grading in ACCESS is done in three simple steps:
- Copy the necessary files into a Docker container
- Execute the `grade_command` specified by the task author and wait for it to finish
- Retrieve the contents of the `grade_results.json` file created by the grading run
Thus, it is entirely up to the task author how to grade solutions. Typically, the task author will write some script that checks the student's solution, usually using some kind of unit testing. The only requirement is that at the end of grading, `grade_results.json` is written to the working directory. The file needs to conform to the following example, indicating how many points the student should get, plus a list of hints:
{"points": 0.5, "hints": [null, "The return value is not 'Hello, World!'"]}
The number of hints should correspond to the number of test cases. Test cases that pass get a `null` hint; tests that fail should provide a message explaining the error. In the example above, two tests were executed: the first test passed while the second failed, hence a hint was provided and 0.5 points (out of 1, presumably) were awarded.
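For illustration only (the actual exercises in this repository use the provided test harness instead), a tiny hand-rolled grading script along these lines could produce such a file. The student file `script.py` and its `greet()` function are made up for this example:

```python
# Illustrative hand-rolled grading script; file and function names are made up.
import json

points = 0.0
hints = []

# Test 1: the student's file can be imported at all.
try:
    import script  # hypothetical student submission, script.py
except Exception:
    script = None
    hints.append("Your file could not be imported; does it run on its own without errors?")
else:
    hints.append(None)
    points += 0.5

# Test 2: greet() returns the expected string.
try:
    passed = script is not None and script.greet() == "Hello, World!"
except Exception:
    passed = False
if passed:
    hints.append(None)
    points += 0.5
else:
    hints.append("The return value is not 'Hello, World!'")

# ACCESS reads the outcome from grade_results.json in the working directory.
with open("grade_results.json", "w") as f:
    json.dump({"points": points, "hints": hints}, f)
```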
At the moment, ACCESS will only show the first hint provided, but this may become configurable in the future. For this reason, it's important that the hints in `grade_results.json` are sorted from highest to lowest priority. In other words, the student will not find it very useful to receive an obscure error message caused by a test that checks a very specific edge case of the requirements. Rather, the student should receive the most general hint first.
Grading in ACCESS is not quite the same as regular unit testing for the following reasons:
- The student's submission could be literally anything, so one cannot expect that even basic things like importing or parsing the solution will succeed. Thus, it is important to catch all such errors and provide appropriate hints, rather than just crashing. For Python, the provided test harness (see the separate README) takes care of many of these problems.
- It might make sense to write much more basic tests than one would in normal programming. For example, rather than just checking whether a function returns the correct number, it might make sense to first check whether it returns a number at all, and give the student a corresponding hint if it does not. A sketch of this mindset follows below.
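As a hedged sketch of that mindset (written as a plain helper function rather than with the actual test harness, and with a made-up `average()` exercise):

```python
# Illustrative defensive check; the exercise, function name and hints are invented.
def check_average(student_module):
    """Return (passed, hint) for a hypothetical average() exercise."""
    try:
        result = student_module.average([1, 2, 3])
    except AttributeError:
        return False, "Your submission does not define a function called average()."
    except Exception as exc:
        return False, f"Calling average([1, 2, 3]) raised an error: {exc!r}"
    # Check the basics before the exact value, so the hint stays as general as possible.
    if not isinstance(result, (int, float)):
        return False, "average() should return a number, but it returned something else."
    if result != 2:
        return False, "average([1, 2, 3]) did not return the expected value."
    return True, None
```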
Information on courses, assignments and tasks can be specified in multiple languages. English is required (for now).
`access-cli` can validate courses, assignments, and tasks. See its `README.md` for more information.
To validate this course or any of its assignments and tasks using `access-cli`, on Linux or Mac, run:

```
access-cli -As "cp -R solution/* task/"
```

On Windows, run:

```
access-cli -As "xcopy solution\* task\ /E /I /Y"
```
Add `-v` for verbose output. The `-s` flag tells `access-cli` how to "solve" a task. For this repository, it means copying the sample solution over to the task directory.