Instructors: Vincenzo Fiore, Angela Radulescu
Teaching Assistant: Kaustubh Kulkarni
Time: Mondays 1-4pm
Location: Sinai Center for Computational Psychiatry, 55 W 125th St, Floor 13
Format: This course has a hybrid lecture and lab structure. Lectures will be held in person and will be followed by a live lab portion, during which we will implement some of the key concepts covered in lecture. We encourage in-person attendance, though recordings of each lecture will be made available.
Overview: At the intersection of psychology, neuroscience, and AI, computational models aim to uncover the mechanisms underlying the cognitive processes that drive behavior, and how these processes are altered in neuropsychiatric disorders. In this course, we will discuss some of the goals, foundational ideas, and technical concepts behind computational modeling. We will survey several modeling approaches, including Bayesian inference, reinforcement learning, and neural modeling, and we will get hands-on experience building and fitting models to data from different modalities.
Prerequisites: The course assumes beginner-to-intermediate proficiency in programming tools for data analysis. For each class, coding materials will be provided in MATLAB or Python. In general, materials will take the form of self-contained codebases that students can modify to suit the problem at hand. If you are unsure of the expected coding level, you are encouraged to consult the instructors.
Final project: You can find a final project overview here.
Recommended background:
- MATLAB: Getting Started with MATLAB: Basic commands
- MATLAB Programming Fundamentals
- Python 101 Google Colab notebook
- The Python Tutorial
- Mathesaurus
Readings: Reading for the course (~4 hours / week) will consist of selections from two textbooks, as well as recent literature in computational psychiatry. Textbooks:
- Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
- Ma, W. J., Kording, K. P., & Goldreich, D. (2022). Bayesian models of perception and action.
Schedule:
- Sept. 12th: Intro to reinforcement learning and decision-making (S&B Ch. 1)(S&B Ch. 2)(Addicott et al.)(slides)(recording)(code)(solutions)
- History and recent developments
- Multi-armed bandits
- The explore-exploit trade-off
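The lab for this session centers on the bandit setting from S&B Ch. 2. As a minimal sketch of the explore-exploit trade-off, here is an epsilon-greedy agent on a hypothetical Gaussian bandit (the arm means and parameter values are illustrative, not the course's assignment):

```python
import numpy as np

def run_bandit(true_means, n_steps=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a stationary Gaussian multi-armed bandit."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    Q = np.zeros(n_arms)       # running estimate of each arm's mean reward
    counts = np.zeros(n_arms)  # number of pulls per arm
    for _ in range(n_steps):
        if rng.random() < epsilon:        # explore: pick a random arm
            a = int(rng.integers(n_arms))
        else:                             # exploit: pick the current best estimate
            a = int(np.argmax(Q))
        r = true_means[a] + rng.normal()  # noisy reward
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]    # sample-average update (S&B Ch. 2)
    return Q, counts

Q, counts = run_bandit([0.2, 0.8, 0.5])
```

Raising epsilon spends more trials exploring; lowering it risks locking onto a suboptimal arm early, which is the trade-off the lecture formalizes.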
- Sept. 19th: Formalizing optimal behavior (S&B Ch. 3)(S&B Ch. 4)(Zorowitz et al.)(slides)(recording)(code)(solutions)
- Markov Decision Processes (MDPs)
- Bellman Equations
- Dynamic Programming
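To give a flavor of how these three topics connect: dynamic programming solves an MDP by iterating the Bellman optimality backup until the value function converges. Below is a minimal value-iteration sketch on a made-up three-state MDP (the transition table is a hypothetical toy example, not course material):

```python
import numpy as np

# Toy MDP: P[s][a] is a list of (prob, next_state, reward) tuples.
# State 2 is absorbing; entering it from state 1 yields reward 1.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 2, 1.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-8):
    V = np.zeros(len(P))
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality backup:
            # V(s) = max_a sum_{s'} p(s'|s,a) [r + gamma * V(s')]
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

V = value_iteration(P, gamma)  # converges to V = [0.9, 1.0, 0.0]
```

Each sweep applies the Bellman equation as an update rule; convergence follows because the backup is a contraction in the discounted case.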
- Sept. 26th: Reinforcement learning in the brain (S&B Ch. 6)(Eldar et al.)(slides)(recording)(code)(solutions)
- TD-Learning
- Biological basis of TD-Learning
- Actor-Critic
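The core quantity in this session is the TD prediction error, which reward-prediction-error accounts link to phasic dopamine. A minimal TD(0) sketch on a hypothetical deterministic chain task (illustrative only, not the lab code):

```python
import numpy as np

def td0_chain(n_states=5, n_episodes=500, alpha=0.1, gamma=1.0):
    """TD(0) value prediction on a deterministic left-to-right chain:
    start in state 0, move right each step, reward 1 on reaching the end."""
    V = np.zeros(n_states + 1)  # extra slot for the terminal state (value 0)
    for _ in range(n_episodes):
        for s in range(n_states):
            s_next = s + 1
            r = 1.0 if s_next == n_states else 0.0
            # TD prediction error: the signal often linked to phasic dopamine
            delta = r + gamma * V[s_next] - V[s]
            V[s] += alpha * delta
    return V[:n_states]

V = td0_chain()  # with gamma = 1, all state values converge to 1
```

In an actor-critic architecture the same delta does double duty: the critic uses it to update state values, and the actor uses it to update action preferences.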
- Oct. 3rd (cancelled)
- Oct. 17th: Multiple learning systems (S&B Ch. 8)(Gillan et al.)(slides)(recording)(code)(solutions)
- Model-free control
- Model-based control
- Hybrid approaches
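One common hybrid account (e.g., in two-step-task studies) mixes model-based and model-free action values with a weight, then passes the mixture through a softmax choice rule. A minimal sketch, with made-up values for illustration:

```python
import numpy as np

def hybrid_values(Q_mf, Q_mb, w):
    """Weighted mixture of model-based and model-free action values;
    w in [0, 1] indexes reliance on the model-based system."""
    return w * np.asarray(Q_mb) + (1 - w) * np.asarray(Q_mf)

def softmax_policy(Q, beta):
    """Softmax choice probabilities with inverse temperature beta."""
    z = beta * (Q - np.max(Q))  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical values: the two systems disagree about which action is best.
Q = hybrid_values(Q_mf=[0.2, 0.6], Q_mb=[0.8, 0.1], w=0.5)
p = softmax_policy(Q, beta=3.0)
```

Fitting w per subject is one way such models quantify individual differences in model-based control, a theme in the Gillan et al. reading.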
- Oct. 24th (rescheduled for Oct. 27th): Parameter estimation (M&K&G Section C)(Daw)(slides)(recording I)(recording II)(code)
- Likelihood functions for choice data
- Maximum likelihood estimation
- Model comparison
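In the spirit of the Daw reading, parameter estimation for this session means maximizing the likelihood of a subject's trial-by-trial choices under a learning model. A minimal sketch, assuming a hypothetical Rescorla-Wagner + softmax model and a grid search over the learning rate (illustrative, not the course's fitting code):

```python
import numpy as np

def nll(alpha, beta, choices, rewards):
    """Negative log-likelihood of 2-armed choices under a
    delta-rule (Rescorla-Wagner) model with a softmax policy."""
    Q = np.zeros(2)
    total = 0.0
    for c, r in zip(choices, rewards):
        z = beta * (Q - Q.max())
        p = np.exp(z) / np.exp(z).sum()
        total -= np.log(p[c] + 1e-12)   # likelihood of the observed choice
        Q[c] += alpha * (r - Q[c])      # update only the chosen arm
    return total

# Simulate one synthetic subject with known parameters...
rng = np.random.default_rng(1)
true_alpha, beta = 0.3, 5.0
Q = np.zeros(2); choices, rewards = [], []
for _ in range(500):
    z = beta * (Q - Q.max()); p = np.exp(z) / np.exp(z).sum()
    c = int(rng.choice(2, p=p))
    r = float(rng.random() < (0.8 if c == 0 else 0.2))  # arm 0 pays off more
    choices.append(c); rewards.append(r)
    Q[c] += true_alpha * (r - Q[c])

# ...then estimate the learning rate by minimizing the negative log-likelihood.
grid = np.linspace(0.05, 0.95, 19)
alpha_hat = grid[np.argmin([nll(a, beta, choices, rewards) for a in grid])]
```

In practice one would use a proper optimizer over both parameters and check recovery across many simulated subjects; the grid search here just makes the likelihood-maximization logic explicit.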
- Oct. 31st: Scaling to real-world problems (S&B Ch. 9)(Radulescu et al.)(slides)(recording)(code)
- Representation learning
- Deep RL
- Partially Observable Markov Decision Processes (POMDPs)
- Nov. 7th: Bayesian inference in the brain I (M&K&G Ch. 1)(M&K&G Ch. 2)(slides)(recording)(code)
- Belief updating with discrete evidence
- Probabilistic predictions
- Precision in belief updating
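Belief updating with discrete evidence reduces to repeatedly applying Bayes' rule: posterior ∝ likelihood × prior. A minimal sketch on a hypothetical coin example (fair vs. biased), chosen for illustration only:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior over discrete hypotheses: P(h | d) ∝ P(d | h) P(h)."""
    posterior = np.asarray(prior) * np.asarray(likelihood)
    return posterior / posterior.sum()

# Two hypotheses about a coin: fair (P(heads) = 0.5) vs. biased (P(heads) = 0.8).
p_heads = np.array([0.5, 0.8])
post = np.array([0.5, 0.5])          # flat prior over the two hypotheses
for flip in [1, 1, 0, 1]:            # observed data: 1 = heads, 0 = tails
    lik = p_heads if flip == 1 else 1 - p_heads
    post = bayes_update(post, lik)   # yesterday's posterior is today's prior
```

Three heads in four flips shifts belief toward the biased hypothesis; the same sequential update is the backbone of the belief-updating models discussed in M&K&G.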
- Nov. 21st (make-up): Bayesian inference in the brain II (M&K&G Ch. 5)(M&K&G Ch. 11)(slides)(recording)(code)
- Elements of active inference
- Hierarchical Bayesian networks
- Dec. 5th: Modeling social agents (slides)(recording)
- Basic behavioral game theory
- Complex social interactions
- Dec. 12th: Final presentations (project upload link)
Grading:
This is a P/F course. Passing is based on attendance and participation, completion of the coding exercises, and the final project.