This is the repository of course materials for the 18.335J/6.7310J course at MIT, taught by Dr. Andrew Horning, in Spring 2023.
Lectures: Tuesday/Thursday 11am–12:30pm in room 4-370
Office Hours: Wednesday and Thursday, 12:30–1:30pm, in room 2-238C.
Contact: [email protected]
Topics: Advanced introduction to numerical linear algebra and related numerical methods. Topics include direct and iterative methods for linear systems, eigenvalue decompositions and QR/SVD factorizations, stability and accuracy of numerical algorithms, the IEEE floating-point standard, sparse and structured matrices, and linear algebra software. Other topics may include memory hierarchies and the impact of caches on algorithms, nonlinear optimization, numerical integration, FFTs, and sensitivity analysis. Problem sets will involve use of Julia, a Matlab-like environment (little or no prior experience required; you will learn as you go).
Launch a Julia environment in the cloud:
Prerequisites: Understanding of linear algebra (18.06, 18.700, or equivalents). 18.335 is a graduate-level subject, however, so much more mathematical maturity, ability to deal with abstractions and proofs, and general exposure to mathematics is assumed than for 18.06!
Textbook: The primary textbook for the course is Numerical Linear Algebra by Trefethen and Bau.
Other Reading: Course materials from previous terms can be found in branches of the 18335 git repository. The course notes from much earlier terms of 18.335 can be found on OpenCourseWare. For a review of iterative methods, the online books Templates for the Solution of Linear Systems (Barrett et al.) and Templates for the Solution of Algebraic Eigenvalue Problems are useful surveys.
Grading: 40% problem sets (four psets, due/released every other Friday), 30% take-home mid-term exam (first week of April), 30% final project (one-page proposal due TBD, project due TBD).
- Psets will be submitted electronically via Gradescope (sign up for Gradescope with the entry code on Canvas). Submit a good-quality PDF scan of any handwritten solutions and also a PDF printout of a Julia notebook of your computational solutions.
- Previous midterms: fall 2008 and solutions, fall 2009 (no solutions), fall 2010 and solutions, fall 2011 and solutions, fall 2012 and solutions, fall 2013 and solutions, spring 2015 and solutions, spring 2019 and solutions, spring 2020 and solutions.
TA/grader: TBD
Collaboration policy: Talk to anyone you want to and read anything you want to, with three exceptions: First, you may not refer to homework solutions from the previous terms in which I taught 18.335. Second, make a solid effort to solve a problem on your own before discussing it with classmates or googling. Third, no matter whom you talk to or what you read, write up the solution on your own, without having their answer in front of you.
- You can use psetpartners.mit.edu to help you find classmates to chat with.
Final Projects: The final project will be an 8–15 page paper reviewing some interesting numerical algorithm not covered in the course. See the 18.335 final-projects page for more information, including topics from past semesters.
- Pset 1 is due on Friday, February 24 at 11:59pm.
This course is about Numerical Linear Algebra (NLA) and related numerical methods. But why do we need NLA? How does it fit in to other areas of computational science and engineering (CSE)? Three simple examples demonstrate how NLA problems arise naturally when solving problems drawn from across continuous applied mathematics.
- Solving Poisson's equation: from charge density to electric potential. (Linear systems: LU and Cholesky, iterative methods; see the sketch after this list.)
- Dynamic Mode Decomposition: learning models from data. (Least squares: QR factorization, SVD, low-rank approximation.)
- Charge density of interacting electrons: NLA in nonlinear problems. (Eigenvalue problem: QR algorithm, iterative methods)
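To make the first example above concrete, here is a minimal Julia sketch (my own illustration, not part of the course materials) that discretizes the 1D Poisson equation -u'' = f on (0,1) with zero boundary conditions and solves the resulting symmetric positive-definite linear system with a Cholesky factorization. The grid size and right-hand side are arbitrary choices for demonstration:

```julia
using LinearAlgebra

# Second-order finite-difference discretization of -u''(x) = f(x) on (0,1)
# with u(0) = u(1) = 0; n interior grid points, chosen arbitrarily.
n = 100
h = 1 / (n + 1)
x = h .* (1:n)                                # interior grid points

A = Matrix(SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1))) / h^2
f = sin.(π .* x)                              # sample "charge density"

F = cholesky(Symmetric(A))                    # A = LLᵀ (A is symmetric positive definite)
u = F \ f                                     # discrete potential

# The exact solution of -u'' = sin(πx) is sin(πx)/π², so the error should be O(h²):
println(maximum(abs.(u - sin.(π .* x) ./ π^2)))
```

For large n one would keep A tridiagonal or sparse (or switch to an iterative method) instead of forming a dense matrix; trade-offs like this are exactly what the course examines.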
NLA is often applied in tandem with tools from other fields of mathematics: approximation theory, functional analysis, and statistics, to name a few. We'll focus on NLA, which is a computational workhorse within CSE. The starting point is floating point: how do we represent real numbers on the computer?
Further Reading: L.N. Trefethen, The Definition of Numerical Analysis.
- Floating point arithmetic, exact rounding, and the "fundamental axiom"
- Catastrophic cancellation, overflow, underflow
- Forward and backward stability
- Stability of summation algorithms
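As a quick illustration of these themes (an ad hoc sketch of mine, not taken from the lecture notes), the following Julia snippet shows catastrophic cancellation and compares naive, pairwise, and compensated (Kahan) summation:

```julia
# Catastrophic cancellation: subtracting nearly equal numbers discards
# most of the significant digits that were stored.
x = 1.0 + 1e-12
println(x - 1.0)            # not exactly 1e-12; only ~4-5 digits agree

# Compensated (Kahan) summation keeps a running correction for the
# low-order bits lost in each addition.
function kahan_sum(v)
    s, c = 0.0, 0.0
    for x in v
        y = x - c
        t = s + y
        c = (t - s) - y     # recover what was lost when forming t
        s = t
    end
    return s
end

v = fill(0.1, 10^6)                    # a million copies of 0.1 (sum is 10^5 in exact arithmetic)
println(abs(foldl(+, v) - 1e5))        # naive left-to-right sum: largest error
println(abs(sum(v) - 1e5))             # Julia's built-in pairwise sum: much smaller error
println(abs(kahan_sum(v) - 1e5))       # compensated sum: error near machine precision
```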
Further Reading: L. N. Trefethen, Lectures 13 and 14. Also, see the notebook about floating point.
- Vector and matrix norms
- Jacobian and condition numbers
- Accuracy ⇐ backward stable algorithms + well-conditioned problems
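The rule of thumb in the last bullet can be checked numerically. Here is a rough Julia sketch (illustrative only; the random test matrix and problem size are my own choices) comparing the forward error of `A \ b` with the condition number times a relative residual:

```julia
using LinearAlgebra

n = 200
A = randn(n, n)
x_true = randn(n)
b = A * x_true

x = A \ b                                              # LU with partial pivoting

forward_err = norm(x - x_true) / norm(x_true)          # error we actually care about
backward_err = norm(b - A * x) / (opnorm(A) * norm(x)) # relative residual, a proxy for backward error

println("cond(A)                  = ", cond(A))
println("relative forward error   = ", forward_err)
println("cond(A) * backward error = ", cond(A) * backward_err)
# The forward error is (roughly) bounded by cond(A) times the backward error.
```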
Further Reading: L. N. Trefethen, Lectures 12 and 15.
- Solving Ax = b
- Condition number of A
- Orthogonal/unitary matrices
- The singular value decomposition (SVD)
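As a small hands-on illustration of these topics (a sketch of my own, not an official course demo), the SVD in Julia exposes both the orthogonality of its factors and the 2-norm condition number of A:

```julia
using LinearAlgebra

A = randn(6, 4)
F = svd(A)                                  # thin SVD: A = U * Diagonal(S) * V'
U, S, V = F.U, F.S, F.V

println(norm(U' * U - I))                   # columns of U are orthonormal
println(norm(V' * V - I))                   # columns of V are orthonormal
println(norm(A - U * Diagonal(S) * V'))     # reconstruction error near machine precision
println(maximum(S) / minimum(S) ≈ cond(A))  # κ₂(A) = σ_max / σ_min
```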
Further Reading: L.N. Trefethen, Lectures 4 and 5.