M.I.T., Department of Mathematics
18.335: Numerical Methods of Applied Mathematics -- I, Fall 2004
Where and when: 1-390, MW 3-4:30
This course will consist of two parts.
During the first two thirds of the course we will concentrate on Numerical
Linear Algebra. We will study the solutions of linear systems of
equations, least squares problems, eigenvalue problems and singular value
problems. Techniques for dense, sparse and structured problems will be
covered. Students should come to appreciate many state-of-the-art
techniques and recognize when to consider applying them. We will also
learn basic principles applicable to a variety of numerical problems and
learn how to apply them. These principles include (1) matrix
factorizations, (2) perturbation theory and condition numbers, (3) effect
of roundoff on algorithms, including properties of floating point
arithmetic, (4) analyzing the speed of an algorithm, (5) choosing the
best algorithm for the mathematical structure of your problem, and (6)
engineering numerical software.
In addition to discussing established solution techniques, open problems
will also be presented.
The course assumes familiarity with linear algebra and will involve a
reasonable amount of programming in MATLAB.
During the second part of the course we will concentrate on numerical
methods for solving ordinary differential equations. These methods are
usually introduced in undergraduate numerical analysis courses such as
18.330. Such courses, however, are not prerequisites for 18.335. This
graduate-level exposition will be self-contained and lecture notes will be
provided.
Textbooks: Numerical Linear Algebra, Trefethen and Bau, SIAM, 1997;
(optional) Applied Numerical Linear Algebra, Demmel, SIAM, 1997.
Lecture notes for Numerical ODEs will be provided.
Instructor: Plamen Koev, office: 2-376,
phone: 253-5013, e-mail: plamen (at) math (dot) mit (dot) edu
Office hours: Monday 4:30-6:30
Exam date: November 3, 2004. A practice midterm will be provided.
Grading: about eight homework assignments (80%) and one in-class midterm on
November 3, 2004.
09/08 Introduction, Examples, Matrix-vector and Matrix-Matrix products
09/13 Orthogonal matrices, norms of vectors and matrices
09/15 The singular value decomposition
09/20 QR decomposition
09/22 Givens rotations and Householder reflectors
09/27 Least Squares Problems
09/29 Floating point arithmetic
10/04 Accuracy, Stability and Conditioning
10/06 Backward Stability
10/13 Conditioning of Least Squares Problems
10/18 Gaussian Elimination
10/20 Cholesky Factorization
10/25 Eigenvalue Problems
10/27 Hessenberg reduction
11/01 QR Iteration I
11/03 Midterm
11/08 QR Iteration II -- Simultaneous iteration; Jacobi algorithm
11/10 Bisection and divide-and-conquer algorithms
11/15 SVD algorithms, LR iteration, dqds
11/17 Iterative algorithms, Arnoldi
11/22 Lanczos algorithm
11/24 Conjugate gradients
11/29 ODEs (handout)
12/01 Runge-Kutta Methods
12/06 Solution of stiff ODEs
Demonstration software used in class:
Conjugate gradient preconditioners.
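As a preview of the iterative methods covered late in the term, here is a minimal conjugate gradient sketch (the course itself uses MATLAB, but the same few lines translate directly); the 2-by-2 test system is an illustrative choice, not course material:

```python
# A minimal conjugate gradient sketch (illustrative; the class demos use
# MATLAB).  Solves Ax = b for a small symmetric positive definite A.

def cg(A, b, tol=1e-12, maxit=100):
    """Conjugate gradients for a dense SPD matrix A given as a list of rows."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    p = r[:]                                  # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:                # converged: ||r|| < tol
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]                  # SPD example matrix
b = [1.0, 2.0]
x = cg(A, b)                                  # exact solution: [1/11, 7/11]
```

In exact arithmetic CG on an n-by-n matrix terminates in at most n steps, so this tiny example converges in two iterations.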
- Due: September 22.
- Let A be an orthogonal matrix. Prove that |det(A)|=1. Show that
if B is also orthogonal and det(A)=-det(B) then A+B is singular.
- Trefethen 2.5, 3.2, 3.3
- Prove that
for any x, y in C^n
- Due: September 29
- Count the number of floating point operations required to compute
the QR decomposition of an m-by-n matrix using (a) Householder
reflectors (b) Givens rotations.
- Trefethen 5.4
- If A = R + uv*, where R is an upper triangular matrix and u and
v are (column) vectors, describe an algorithm to compute the QR
decomposition of A in O(n^2) time.
- Given the SVD A = U Sigma V*, compute the SVD of
A(A*A)^{-1}A* in terms of U, Sigma and V.
- Due October 6.
- Trefethen 10.1
- Let B be an n-by-n upper bidiagonal matrix. Describe an algorithm for
computing the condition number of B, measured in the infinity norm, in O(n) time.
- Due October 20.
Solve at least 4 of the problems below.
- Trefethen 11.1
- Trefethen 13.3
- Trefethen 13.4
- Prove that (13.7) in Trefethen is valid for complex arithmetic
(all four arithmetic operations) with
|epsilon| now bounded by a modest multiple of epsilon_machine.
- Prove that in IEEE binary floating point arithmetic sqrt(x^2)
returns x exactly, for any positive floating point number x such that x^2
neither overflows nor underflows.
- Let a and b be positive IEEE binary floating point numbers such that
a < b < 2a.
Prove that fl(b-a)=b-a exactly.
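Before attempting the proofs, both floating-point claims can be spot-checked numerically (a check is of course not a proof). A Python sketch, with illustrative sampling ranges; Fraction(float) is exact, so it detects any rounding committed by the subtraction:

```python
# Numerical spot checks (not proofs) of the two floating point claims above.
import math
import random
from fractions import Fraction

random.seed(0)

# sqrt(x^2) == x for positive x whose square stays in the normal range.
sqrt_ok = all(
    math.sqrt(x * x) == x
    for x in (random.uniform(0.5, 2.0) * 2.0 ** random.randint(-400, 400)
              for _ in range(1000))
)

# fl(b - a) == b - a exactly whenever a < b < 2a.
sub_ok = True
for _ in range(1000):
    a = random.uniform(1.0, 1e6)
    b = random.uniform(a, 2.0 * a)
    # Fraction(float) is exact, so this detects any rounding in b - a.
    sub_ok = sub_ok and (Fraction(b) - Fraction(a) == Fraction(b - a))
```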
- Due October 27.
Solve 4 of the following 5 problems:
- Trefethen 20.1
- Trefethen 21.6
- Trefethen 22.1
- Let A be symmetric and positive definite. Show that
|a_ij|^2 < a_ii a_jj for i != j.
- Let A and A^{-1} be given real n-by-n matrices. Let
B = A + xy^T be a rank-one perturbation of A. Find an
O(n^2) algorithm for computing B^{-1}. Hint:
B^{-1} is a rank-one perturbation of A^{-1}.
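The hint can be checked on a small example. The formula below is the Sherman-Morrison identity, which is one standard way to realize the O(n^2) update; the matrix and vectors are arbitrary illustrative values:

```python
# Small exact-arithmetic check of the rank-one update via Sherman-Morrison:
#   B^{-1} = A^{-1} - (A^{-1} x y^T A^{-1}) / (1 + y^T A^{-1} x),
# which costs O(n^2) once A^{-1} is known.
from fractions import Fraction as F

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

n = 2
A = [[F(2), F(1)], [F(1), F(2)]]                      # example matrix
Ainv = [[F(2, 3), F(-1, 3)], [F(-1, 3), F(2, 3)]]     # its exact inverse
x, y = [F(1), F(2)], [F(3), F(1)]                     # perturbation vectors

# O(n^2) work: two matrix-vector products plus one outer product.
u = matvec(Ainv, x)                                   # A^{-1} x
w = matvec([list(col) for col in zip(*Ainv)], y)      # (y^T A^{-1})^T
denom = 1 + sum(y[i] * u[i] for i in range(n))        # 1 + y^T A^{-1} x
Binv = [[Ainv[i][j] - u[i] * w[j] / denom for j in range(n)] for i in range(n)]

# Verify against B = A + x y^T: the product B * Binv must be the identity.
B = [[A[i][j] + x[i] * y[j] for j in range(n)] for i in range(n)]
prod = [[sum(B[i][k] * Binv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
```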
- Due November 3.
- Let A be skew Hermitian, i.e. A* = -A. Show that
(I-A)^{-1}(I+A) is unitary.
- Trefethen 25.1
- Due November 17.
- Compute the smallest eigenvalue of the 100-by-100 Hilbert matrix H,
with entries H_ij = 1/(i+j-1).
(Hint: The Hilbert matrix is also Cauchy: a Cauchy matrix has entries
C_ij = 1/(x_i + y_j), and its determinant is
det C = [prod_{i<j} (x_j - x_i)(y_j - y_i)] / [prod_{i,j} (x_i + y_j)].
Any submatrix of a Cauchy matrix is also Cauchy. You can use Cramer's rule
to derive accurate formulas for the entries of H^{-1} and then compute
its largest eigenvalue, whose reciprocal is the smallest eigenvalue of H.)
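The Cauchy determinant formula quoted in the hint can be sanity-checked in exact arithmetic on a small Hilbert matrix; n = 4 here is an illustrative size, and the Hilbert matrix is Cauchy with x_i = i, y_j = j - 1:

```python
# Exact-arithmetic check of the Cauchy determinant formula on a small
# Hilbert matrix, against a brute-force Leibniz expansion.
from fractions import Fraction as F
from itertools import permutations

def cauchy_det(x, y):
    """det of C_ij = 1/(x_i + y_j) via the closed-form Cauchy formula."""
    n = len(x)
    num = F(1)
    for j in range(n):
        for i in range(j):
            num *= (x[j] - x[i]) * (y[j] - y[i])
    den = F(1)
    for i in range(n):
        for j in range(n):
            den *= x[i] + y[j]
    return num / den

def leibniz_det(M):
    """Brute-force determinant by the Leibniz expansion (tiny n only)."""
    n = len(M)
    total = F(0)
    for perm in permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        term = F(-1) ** inversions
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

n = 4
x = [F(i) for i in range(1, n + 1)]           # x_i = i
y = [F(j) for j in range(n)]                  # y_j = j - 1 (0-based j here)
H = [[F(1, i + j + 1) for j in range(n)] for i in range(n)]  # H_ij = 1/(i+j-1)
```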
- Trefethen 30.2
- Due November 24. The writeups should be brief (preferably < 1
page) and include (1) the answer, (2) the method you used and why you think
it works, and (3) one paragraph on how you implemented the method. You need to
compute the answer with at least 9 correct decimal digits.
- Compute the smallest eigenvalue of the 100-by-100 matrix
Solution: See the solution to Problem 7.1 above.
- The infinite matrix A has entries A(1,1)=1, A(1,2)=1/2, A(2,1)=1/3,
A(1,3)=1/4, A(2,2)=1/5, A(3,1)=1/6, etc. Compute ||A||_2.
Solution: See this
website, and problem 3 (and its solution) there.
- Due December 1.
- A is a square matrix of size 19,000. All entries of A are zero
except for the primes 2,3,5,7,...,212369 along the main diagonal and the
number 1 in all the positions a_ij with |i-j| = 1, 2, 4, 8, ..., 16384.
Compute the (1,1) entry of A^{-1}.
Solution: See this
website, and problem 7 (and its solution) there.
- The 200-by-200 (diagonally dominant) matrix A has off-diagonal
entries a_ij = -1/i - 1/j and row sums s(i)=sum(A(i,:))=22i. Compute the
smallest-in-magnitude eigenvalue of A. (Hint: The diagonal entries A(i,i)
can then be computed as a sum of positive (why?) numbers, thus to high
relative accuracy. Abandoning the row sums s(i), however, robs you of any
chance of computing the smallest eigenvalue accurately.) You will
encounter no (true) subtractions when running Gaussian elimination,
obtaining the LU decomposition of A to high relative accuracy
componentwise. Using the LU factors to compute the inverse of A one column
at a time will not result in any subtractions either, yielding a positive
inverse computed to high relative accuracy.
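The first step of the hint, rebuilding the diagonal from the row sums using only additions of positive quantities, can be sketched as follows; n and the row sums s(i) here are small illustrative stand-ins for the actual 200-by-200 problem data:

```python
# Sketch: recover the diagonal of A from prescribed row sums with no
# subtractions.  n and s(i) are illustrative stand-ins, not problem data.
from fractions import Fraction as F

n = 6
s = [F(2) ** i for i in range(1, n + 1)]      # stand-in positive row sums

# Off-diagonal entries are a_ij = -1/i - 1/j (1-indexed), so the row-sum
# condition gives a_ii = s_i + sum_{j != i} (1/i + 1/j): every summand is
# positive, hence a_ii is obtained to high relative accuracy.
def diag_entry(i):
    return s[i - 1] + sum(F(1, i) + F(1, j) for j in range(1, n + 1) if j != i)

A = [[diag_entry(i) if i == j else -F(1, i) - F(1, j)
      for j in range(1, n + 1)] for i in range(1, n + 1)]
```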
- Due December 8.
Compute the smallest eigenvalue of the 200-by-200 Pascal matrix.
Details on the Pascal matrix can be obtained online.
Solution: While there are many ways to
solve this problem, one way
would be to compute the largest eigenvalue of the inverse. The inverse
(which has checkerboard sign pattern) can (once again) be computed without
performing any subtractions if one takes the correct approach. Instead of
eliminating the matrix in the typical Gaussian elimination fashion, try to
eliminate it by using only ADJACENT rows and columns. This process is
called Neville elimination. Once you eliminate the first row and first
column, you will see that the Schur complement is also a Pascal matrix of
one size less. In matrix form this elimination can be written as
L * P_n * L^T = P'_{n-1},
where L is a lower bidiagonal matrix with ones on the main diagonal and -1
on the first subdiagonal,
and P'_{n-1} is an n-by-n matrix with zeros in the
first row and column, except for the (1,1) entry (which equals one), and
the matrix P_{n-1} in the lower right hand corner.
You can now observe (no need to prove) that if you have
(P_{n-1})^{-1} you can compute
(P_n)^{-1} using the above equality
without performing any subtractions.
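The structure claimed above can be verified in exact arithmetic for a small n (n = 5 here is illustrative; the assignment concerns the 200-by-200 Pascal matrix):

```python
# Exact-arithmetic check of the Neville elimination identity: L P_n L^T
# has zeros in its first row and column except a 1 in position (1,1),
# with P_{n-1} embedded in the lower-right corner.
from fractions import Fraction as F
from math import comb

def pascal(n):
    """Symmetric Pascal matrix, P[i][j] = C(i+j, i) with 0-based indices."""
    return [[F(comb(i + j, i)) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 5
P = pascal(n)
L = [[F(1) if i == j else F(-1) if i == j + 1 else F(0)
      for j in range(n)] for i in range(n)]          # unit lower bidiagonal
LT = [list(row) for row in zip(*L)]

M = matmul(matmul(L, P), LT)                         # should equal P'_{n-1}
Pm = pascal(n - 1)                                   # expected trailing block
```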