Welcome to the course website for DPP! All material and general information will be provided here. Announcements, assignment handin, and the discussion forum remain on Absalon. While this website is a Git repository, you are not required or expected to use Git to interact with it, but feel free to do so if it is convenient for you.
DPP is structured around five weeks with lectures and lab sessions on Monday and Wednesday, followed by a final project to be presented orally at the exam. Throughout the course, you will hand in four weekly assignments. The assignments count for 40% of the grade, while the exam counts for 60%.
The teachers are Cosmin Oancea and Troels Henriksen.
All lectures and lab sessions will be delivered in English. The assignments and projects will be posted in English, and while you can choose to hand in solutions in either English or Danish, English is preferred.
There is no mandated textbook for the course - you will be assigned reading material from papers and such.
We have also begun work on a set of DPP course notes (PDF), although they are currently quite embryonic, and it is not certain we will have time to add much content this year. Please let us know if there is anything in particular you would like to see.
- Monday 13:00 - 15:00 in Lille UP-1 at DIKU
- Wednesday 10:00 - 12:00 in Lille UP-1 at DIKU
- Monday 15:00 - 17:00 in 3-0-25 at DIKU
- Wednesday 13:00 - 15:00 in Lille UP-1 at DIKU
This course schedule is tentative and will be updated as we go along. The schedule for a given week will be correct by the time that week begins.
The lab sessions are aimed at providing help with the weekly assignments and the group project. Do not assume you can solve these without showing up to the lab sessions.
- Theme: Intro, deterministic parallelism, data parallelism, Futhark.
  - Material: Parallel Programming in Futhark (particularly Practical Matters)
- Theme: Vector programming with ISPC
- Material:
  - The Complexity of Parallel Computations (section 4.1.2)
  - Aaron Hsu's PhD dissertation (sections 3.2 and 3.3, but the lecture slides should be enough)
- Demo Code
- Facultative Material:
  - Demo Code
- Facultative Material: same as above
- Facultative Material: several papers
- Facultative Material: same as above
- Material:
  - Demo Code
  - You will need the `islpy` Python module (`$ pip install islpy`); see the sanity check below the schedule.
  - Several facultative exercises mentioned in the last section of the slides
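As a quick sanity check that `islpy` is installed correctly, something like the following should work (the set expression here is just an arbitrary example, not taken from the exercises):

```
$ pip install islpy
$ python -c "import islpy as isl; print(isl.Set('{ [i] : 0 <= i < 10 }'))"
```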
The weekly assignments are mandatory, must be solved individually, and make up 40% of your final grade. Submission is on Absalon.
The assignment text and handouts will be linked in the schedule above.
Your TA, Anders, will be grading and providing feedback on your weekly assignments. You will receive feedback within a week of the handin deadline, and one resubmission attempt is granted for each weekly assignment, which may be used to solve tasks missing from the original handin and/or to improve on the existing handin (but note that feedback may be sparse for resubmissions).
As a rule of thumb, the resubmission deadline is two weeks from the original handin deadline, but it is negotiable.
Extensions may be granted on weekly assignment (re-)submission deadlines -- please ask Anders if, for any reason, personal or otherwise, you need an extension (no need to involve Cosmin or Troels unless you wish to complain about Anders' decision).
The final project, along with the exam as a whole, contributes 60% of your grade, and is done in groups of 1-3 people (although working alone is strongly discouraged). We have a tentative list of project suggestions, but you are free to suggest your own (but please talk with us first). Since the time to work on the project is rather limited, and there is no possibility of resubmission, you should ask for help early and often if you are having trouble making progress. The project should be handed in via Absalon by TBA. Send an email if you have trouble meeting this deadline.
Most of the projects are about writing some parallel program, along with a report describing the main points and challenges of the problem. The exam format is a group presentation followed by individual questions about both your project and anything else in the curriculum. Each group prepares a common presentation with slides, and each member of the group presents non-overlapping parts of the presentation for about 10 min (or less). Then each member of the group will answer individual questions for about 10 min.
Your TA is Anders Holst ([email protected], Discord: sortraev).
Anders will be grading your weekly assignments and patrolling the online discussion forum(s) (with help from Troels).
We provide a Discord channel for discussions (invite: discord.gg/2wPBbAYT9G; please contact Anders should the link expire), and advise that you join as soon as possible.
We encourage active discussion of course subjects with fellow students, so long as you refrain from directly discussing or sharing solutions to weekly assignments and the exam/group project. Should you have questions pertaining to your particular solution, please ask them in a private message to Anders (your TA), who may refer you to Troels.
Please note that while we prefer Discord for communication, you are free to use the Absalon discussion forum and private messaging system, and that no announcement shall be posted to Discord which has not already been posted to Absalon.
You may find it useful to make use of DIKU's GPU machines in your work. We recommend using the so-called Hendrix cluster. If you are enrolled in the course, you should already have access. Otherwise contact Troels at [email protected]. For how to access Hendrix, follow the first link in this paragraph.
Consider using sshfs to mount the remote file system on your local machine:
$ mkdir remote
$ sshfs hendrix:/ remote
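When you are done, you can unmount it again; the command below assumes a Linux system with FUSE (on macOS, `umount remote` should work instead):

```
$ fusermount -u remote
```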
The DIKU systems have a conventional HPC modules setup, meaning you can make additional software available with the `module` command. You may need to do this inside SLURM jobs.
$ module load cuda
$ module load futhark
$ module load ispc
(Although there is no reason to use Hendrix for ISPC - it will run fine on your machine.)
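If you want to run something as a batch job, a minimal sketch of a SLURM script might look like the following. The job name, time limit, GPU request, and program name below are assumptions for illustration; consult the Hendrix documentation for the correct partition and resource options.

```
#!/bin/sh
# Minimal SLURM batch script sketch. The job name, time limit, and GPU
# request are assumptions; check the Hendrix documentation for the
# correct partition and resource options.
#SBATCH --job-name=dpp-example
#SBATCH --time=00:10:00
#SBATCH --gres=gpu:1

# Load the needed modules inside the job as well.
module load cuda
module load futhark

# Hypothetical program; replace with whatever you actually want to run.
futhark bench --backend=cuda myprog.fut
```

Submit it with `sbatch job.sh` and check its status with `squeue -u $USER`.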
The Futhark machines (`hendrixfut01fl`, `hendrixfut02fl`, `hendrixfut03fl`) have been temporarily detached from the Hendrix cluster. If you do not have accounts on them and would like to work on them, please contact the teachers during class or e-mail them.
The available machines are equipped with top-end GPUs (Nvidia A100 GPUs on `hendrixfut01fl` and `hendrixfut03fl`, and an AMD MI100 GPU on `hendrixfut02fl`) as well as two AMD EPYC 7352 24-core CPUs (96 hardware threads in total). Please note that on `hendrixfut02fl` you cannot use the CUDA backend of Futhark (since it has an AMD GPU), but you may use the OpenCL backend.
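For example, compiling a hypothetical Futhark program `prog.fut` on the two kinds of machines:

```
# On hendrixfut01fl and hendrixfut03fl (NVIDIA A100), the CUDA backend works:
$ futhark cuda prog.fut
# On hendrixfut02fl (AMD MI100), use the OpenCL backend instead:
$ futhark opencl prog.fut
```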
To access them you need to be connected to the VPN; once connected, you can ssh directly to them with your KU-id and associated password.
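For example, with a hypothetical KU-id `abc123` (the hostname may need a domain suffix depending on your VPN and DNS setup):

```
$ ssh abc123@hendrixfut01fl
```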
You are not expected to read/watch the following unless otherwise noted, but they contain useful and interesting background information.
- The Futhark User's Guide, in particular Futhark Compared to Other Functional Languages
- The story of `ispc` (you can skip the stuff about office politics, although it might ultimately be the most valuable part of the story)
- Scientific Benchmarking of Parallel Computing Systems (we benchmark much simpler systems and don't expect anywhere near this much detail, but it's useful to have thought about it)
NVIDIA has donated GPUs used for teaching DPP for several years.
Thanks to Filippa Biil for the hedgehogs.