Here are the lectures, exercises, and additional course materials corresponding to the spring semester 2019 course at ETH Zurich, 227-0966-00L: Quantitative Big Imaging.
The lectures have been prepared and given by Kevin Mader and associated guest lecturers. Please note that the lecture slides and PDF do not contain source code; it is only available in the handout file. Some of the lectures will be recorded and placed on YouTube on the QBI Playlist. The lectures are meant to be followed in chronological order, and each lecture has a corresponding hands-on exercise. The entire lecture set is available as a single PDF file in the releases section.
- Ability to compare qualitative and quantitative methods and name situations where each would be appropriate
- Awareness of the standard process of image processing, the steps involved and the normal order in which they take place
- Ability to create and evaluate quantitative metrics to compare the success of different approaches/processes/workflows
- Appreciation of automation and which steps it is most appropriate for
- The relationship between automation and reproducibility for analysis
- Awareness of the function enhancement serves and the most commonly used methods
- Knowledge of limitations and new problems created when using/overusing these techniques
- Awareness of different types of segmentation approaches and strengths of each
- Understanding of when to use automatic methods and when they might fail
- Knowledge of which types of metrics are easily calculated for shapes in 2D and 3D
- Ability to describe a physical measurement problem in terms of shape metrics
- Awareness of common metrics and how they are computed for arbitrary shapes
- Awareness of common statistical techniques for hypothesis testing
- Ability to design basic experiments to test a hypothesis
- Ability to analyze and critique poorly designed imaging experiments
- Familiarity with vocabulary, tools, and main concepts of big data
- Awareness of the differences between normal and big data approaches
- Ability to explain MapReduce and apply it to a simple problem
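As a minimal illustration of the MapReduce objective above, here is the classic word-count example sketched in plain Python (the documents and counts are made up for illustration; a real big-data setting would distribute the map and reduce phases across workers):

```python
from functools import reduce

# Hypothetical toy corpus for the classic MapReduce word-count example
docs = ["big imaging data", "big data"]

# Map phase: each document emits (word, 1) pairs
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Reduce phase: sum the counts per key
def reducer(acc, pair):
    word, count = pair
    acc[word] = acc.get(word, 0) + count
    return acc

counts = reduce(reducer, mapped, {})
print(counts)  # {'big': 2, 'imaging': 1, 'data': 2}
```

The same two-phase structure (a stateless per-record map followed by a per-key reduce) is what frameworks like Dask or Spark parallelize for you.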
The course is designed with both advanced undergraduate and graduate level students in mind. Ideally students will have some familiarity with basic manipulation and programming in languages like Python (Matlab or R are also reasonable starting points). Much of the material is available as visual workflows in a tool called KNIME, although these are less up to date than the Python material. Interested students who are worried about their skill level in this regard are encouraged to contact Kevin Mader directly ([email protected]).
- Students with very diverse academic backgrounds have done well in the course (Informatics to Art History to Agriculture).
- Successful students typically spent a few hours a week working on the exercises to really understand the material.
- More advanced students who are already very familiar with Python, C++, or Java are also encouraged to take the course and will have the opportunity to develop more of their own tools or explore topics like machine learning in more detail.
For communication, discussions, and questions, we will be trying out Slack this year. You can sign up under the following link. It isn't mandatory, but it seems to be an effective way to collaborate (see How scientists use Slack).
- Part 1: Slides (static) Lecture Handout
- Part 2: Slides (static) Lecture Handout
- Part 1: Slides (static) Handout
- Part 2: Slides (static) Handout
- KNIME Exercises
- C. Elegans Dataset on Kaggle R Notebook or Python Notebook
- Lung Segmentation: [Rule-based Image Processing](https://www.kaggle.com/kmader/dsb-lung-segmentation-algorithm/notebook) and Simple Neural Network
- High Content Screening Slides - Michael Prummer / Nexus / Roche
- High Content Screening with C. Elegans
- The goal is to identify which metrics accurately indicate living or dead worms and to build a simple predictive model
- High Content Screening using Dask/Big Data
- Kaggle Overview
- Shape Analysis
- Processing in R
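As a sketch of the kind of simple predictive model the C. elegans screening exercise aims at (the scores, class separation, and threshold rule here are entirely synthetic and hypothetical; the real exercise uses shape metrics measured from the images, e.g. eccentricity of curled vs. rod-like worms):

```python
import numpy as np

# Hypothetical synthetic shape scores: living worms tend to curl
# (low straightness score), dead worms stiffen into rods (high score)
rng = np.random.default_rng(42)
alive_scores = rng.normal(0.3, 0.05, 50)
dead_scores = rng.normal(0.8, 0.05, 50)
scores = np.concatenate([alive_scores, dead_scores])
labels = np.array([1] * 50 + [0] * 50)  # 1 = alive, 0 = dead

# A very simple predictive model: threshold halfway between class means
threshold = (alive_scores.mean() + dead_scores.mean()) / 2
predictions = (scores < threshold).astype(int)
accuracy = (predictions == labels).mean()
print(f"accuracy: {accuracy:.2f}")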
The exercises are based on the lectures and take place in the same room after the lecture completes. They are designed to offer a tiered level of understanding based on the background of the student. We will (for most lectures) take advantage of an open-source tool called KNIME (www.knime.org), with example workflows here (https://www.knime.org/example-workflows). The basic exercises will require adding blocks in a workflow and adjusting parameters, while more advanced students will be able to write their own snippets, blocks, or plugins to accomplish more complex tasks. The exercises from two years ago (available here) are done entirely in ImageJ and Matlab for students who would prefer to stay in those environments (not recommended).
- Windows: https://www.dropbox.com/s/75hx7fdpnpzrh5u/knime_rsna_2018.zip?dl=0
- Mac: https://www.dropbox.com/s/3tdssp67daadzix/knime_rsna_mac.zip?dl=0
- (After you extract it move the KNIME.app into the /Applications/ folder)
If you use Colab, Kaggle, or mybinder, you won't need Python on your own machine, but if you want to set it up the same way the class has, you can follow the instructions shown in the video here and below.
- Install Anaconda Python https://www.anaconda.com/distribution/#download-section
- Download the course from github as a zip file
- Extract the zip file
- Open a terminal (or command prompt on windows)
- Go to the binder folder inside the course directory (something like Downloads/Quantitative-Big-Imaging-2019-master/binder)
- Install the environment: `conda env create -f environment.yml`
- Activate the environment: `conda activate qbi2019` (or `activate qbi2019`)
- Go up one directory to the root of the course: `cd ..`
- Start Jupyter: `jupyter notebook`
The exercises will be supported by Amogha Pandeshwar and Kevin Mader. There will be office hours in ETZ H75 on Thursdays between 14:00 and 15:00, or by appointment.
The exercises will be available on Kaggle as 'Datasets' and we will be using mybinder as stated above.
- Create an issue (on the group site that everyone can see and respond to, requires a Github account), issues from last year
- Provide anonymous feedback on the course here
- Or send direct email (slightly less anonymous feedback) to Kevin
The final examination (as originally stated in the course material) will be a 30-minute oral exam covering the material of the course and its applications to real systems. Students who present a project will have the option to use it for some of the questions about real systems (provided they have sent their slides to Kevin after the presentation and bring a printed copy to the exam, including several image slices if not already in the slides). The exam will cover all the lecture material from Image Enhancement to Scaling Up (the guest lecture will not be covered). Several example questions (not exhaustive) have been collected which might be helpful for preparation.
- Overview of possible projects
- Here you signup for your project with team members and a short title and description
The course, slides, and exercises are primarily done using Python 3.6 and Jupyter Notebook 5.5. The binder/[repo2docker](https://github.com/jupyter/repo2docker)-compatible environment can be found at binder/environment.yml. A full copy of the environment at the time the class was given is available in the wiki file. As many of these packages are frequently updated, we have also made a copy of the docker image produced by repo2docker, uploaded to Docker Hub at https://hub.docker.com/r/kmader/qbi2018/
The packages required for all lectures:
- numpy
- matplotlib
- scipy
- scikit-image
- scikit-learn
- ipyvolume
For the machine learning and big data lectures, a few additional packages are required:
- tensorflow
- pytorch
- opencv
- dask
- dask_ndmeasure
- dask_ndmorph
- dask_ndfilter
For the image registration lecture and medical image data:
- itk
- SimpleITK
- itkwidgets
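A quick way to check that the core environment works, and at the same time a sketch of the standard enhance → segment → measure workflow the lectures cover (the image, blob positions, filter sigma, and threshold below are all made-up illustration values; only numpy and scipy from the list above are assumed):

```python
import numpy as np
from scipy import ndimage

# Hypothetical synthetic image: two bright square blobs on a dark, noisy background
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:55, 40:55] = 1.0
img += np.random.default_rng(0).normal(0, 0.1, img.shape)

# Enhancement: Gaussian smoothing suppresses the noise
smoothed = ndimage.gaussian_filter(img, sigma=2)

# Segmentation: a simple global threshold
binary = smoothed > 0.5

# Shape metrics: label connected components and measure their areas
labeled, n_objects = ndimage.label(binary)
areas = ndimage.sum(binary, labeled, index=range(1, n_objects + 1))
print(n_objects, sorted(areas))
```

scikit-image (`skimage.measure.regionprops`) provides richer per-object metrics than this scipy-only sketch, and is what most of the exercises use.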
- Data Science/Python Introduction Handbook
- ETH Deep Learning Course taught in the Fall Semester, also uses Python but with a much more intensive mathematical grounding and less focus on images.
- EPFL Deep Learning Course taught in the Spring Semester by Francois Fleuret, uses Python and PyTorch, and covers theoretical topics and more advanced research topics with a number of applications and code examples.
- FastAI Deep Learning Course and Part 2 for a very practically focused introduction to Deep Learning using the Python skills developed in QBI.
- Deep Learning for Self-Driving Cars at MIT is open to beginners and designed for those who are new to machine learning, but it can also benefit advanced researchers looking for a practical overview of deep learning methods and their applications.
- Reproducible Research
- Coursera Course
- Course and Tools in R
- Performance Computing Courses
- High Performance Computing for Science and Engineering (HPCSE) I
- Programming Massively Parallel Processors with CUDA
- Introduction to Machine Learning (EPFL)
Javier Montoya / Computer Vision / ScopeM
Presented by Aurelien Lucchi in Data Analytics Lab in D-INFK at ETHZ