The MM and EM algorithms are hardly the only methods of optimization. Newton’s method is better known and more widely applied. We encountered Newton’s method in Section 5.4; here we focus on the multidimensional version. Despite its defects, Newton’s method is the gold standard for speed of convergence, converging quadratically near a well-behaved optimum, and it forms the basis of many modern optimization algorithms. Its variants seek to retain the fast convergence while taming those defects. The variants all revolve around the core idea of locally approximating the objective function by a strictly convex quadratic function. At each iteration the quadratic approximation is optimized. Safeguards are introduced to keep the iterates from veering toward irrelevant stationary points.
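To make the quadratic-approximation idea concrete, here is a brief sketch in notation of our own choosing (the symbols $f$, $x_n$, and the use of $\nabla$ and $\nabla^2$ for the gradient and Hessian are ours, not necessarily the text's). Expanding a twice-differentiable objective $f$ to second order about the current iterate $x_n$ gives the local model
\[
f(x) \approx f(x_n) + \nabla f(x_n)^\top (x - x_n) + \tfrac{1}{2}\, (x - x_n)^\top \nabla^2 f(x_n)\, (x - x_n).
\]
When the Hessian $\nabla^2 f(x_n)$ is positive definite, this model is a strictly convex quadratic; setting its gradient to zero and solving yields the Newton update
\[
x_{n+1} = x_n - \nabla^2 f(x_n)^{-1} \nabla f(x_n),
\]
the unique minimum point of the model. The variants alluded to above differ chiefly in how they repair a Hessian that fails to be positive definite and in how they shorten the Newton step when the full update overshoots.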