
2021 | Book

Visualizing Linear Models


About this book

This book provides visual and intuitive coverage of the core theory of linear models. Designed to develop fluency with the underlying mathematics and to build a deep understanding of the principles, it is an excellent basis for a one-semester course on statistical theory and linear modeling for intermediate undergraduates or graduate students.

Three chapters gradually develop the essentials of linear model theory, and each is preceded by a review chapter covering a foundational prerequisite topic. This classroom-tested work explores two distinct and complementary types of visualization: the “observations picture” and the “variables picture.” To improve retention, the book is supplemented by a bank of ready-made practice exercises for students, available for digital or print use.

Table of Contents

Frontmatter
Chapter 1. Background: Linear Algebra
Abstract
We will begin our course of study by reviewing the most relevant definitions and concepts of linear algebra. We will also expand on various aspects of orthogonal projection and spectral decomposition that are not necessarily covered in a first linear algebra course. This chapter is almost entirely self-contained, as it builds from the ground up everything that we will need later in the text. The two exceptions are Theorems 1.3 and 1.4, which point to external references rather than being proven in this book or its exercise solutions.
W. D. Brinda
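As a concrete companion to the projection review this abstract describes, here is a minimal numerical sketch of orthogonal projection onto a column space. It is my own illustration, not code from the book, and it assumes NumPy; the matrix A and vector y are invented for the example.

    import numpy as np

    # Invented example: project y onto the column space of A.
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    y = np.array([1.0, 3.0, 2.0])

    # Orthogonal projection matrix: P = A (A^T A)^{-1} A^T.
    P = A @ np.linalg.inv(A.T @ A) @ A.T
    y_proj = P @ y

    # The residual y - y_proj is orthogonal to every column of A.
    print(y_proj)
    print(A.T @ (y - y_proj))  # numerically zero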
Chapter 2. Least-Squares Linear Regression
W. D. Brinda
Chapter 3. Background: Random Vectors
Abstract
So far, we have only worried about approximating or fitting data; we have not asked why the data looks like it does. Moving forward, we’ll use probabilistic modeling to consider possible mechanisms generating the data. In this chapter, we’ll briefly learn how to work with random vectors, the building blocks of probabilistic modeling.
W. D. Brinda
Chapter 4. Linear Models
Abstract
In Chap. 2, we learned how to do least-squares regression when the set of possible prediction vectors comprises a subspace: the least-squares prediction vector is the orthogonal projection of the data onto that subspace. Now we will formulate probabilistic models for which the possible expectation vectors of the response variable comprise a subspace. Each type of model we discuss has a counterpart in our earlier study of regression, and we will reanalyze the least-squares predictions and coefficients in the context of the model. To make it easy for the reader to compare the material from the two chapters, their structures perfectly parallel each other.
W. D. Brinda
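The correspondence this abstract draws on is easy to verify numerically. Below is a brief sketch, my own rather than the book's, checking that the least-squares prediction vector equals the orthogonal projection of the response onto the column space of the design matrix; X, y, and the random seed are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(5), rng.normal(size=5)])  # design matrix
    y = rng.normal(size=5)                                 # response vector

    # Least-squares coefficients and the resulting prediction vector.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta_hat

    # Orthogonal projection of y onto the column space of X.
    P = X @ np.linalg.inv(X.T @ X) @ X.T
    assert np.allclose(y_hat, P @ y)  # the two coincide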
Chapter 5. Background: Normality
Abstract
The family of Normal distributions plays a key role in the theory of probability and statistics. According to the familiar Central Limit Theorem, the distribution of an average of iid random variables (with finite variance) tends toward Normality (Pollard, A user’s guide to measure theoretic probability. Cambridge University Press, New York, 2002, Thm 7.21). In fact, more advanced versions of the theorem do not require the random variables to be iid, as long as they are not too dependent or too disparate in their scales (e.g., Pollard 2002, Thm 8.14). We see this Central Limit phenomenon play out in the real world when we observe “bell-shaped” histograms of measurements in a wide range of contexts. The prevalence of approximate Normality in the world makes Normal distributions a natural part of statistical modeling. Fortunately, the Normal family is also mathematically convenient for analyzing estimation procedures for these models.
W. D. Brinda
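The Central Limit phenomenon mentioned above is simple to see in simulation. The following sketch is my own illustration (not the book's), assuming NumPy; the Exponential distribution and sample sizes are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)

    # Averages of n iid Exponential(1) variables (a skewed, non-Normal law).
    n = 100
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)

    # By the CLT, the standardized averages are approximately N(0, 1):
    # Exponential(1) has mean 1 and variance 1, so sd(mean) = 1/sqrt(n).
    z = (means - 1.0) * np.sqrt(n)
    print(z.mean(), z.std())  # close to 0 and 1, respectively
    # A histogram of z (e.g., with matplotlib) looks bell-shaped.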
Chapter 6. Normal Errors
Abstract
Throughout this chapter, we will continue analyzing the linear model from Chap. 4, but we will assume in particular that $\boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^2 \mathbb{I})$ for an unknown $\sigma^2$. The figures in Chap. 4 already indicated iid Normal errors, although the results we derived in that chapter did not require such strong assumptions about the distribution of the errors. With the new Normality assumption, our earlier results of course remain valid, but we will also be able to do a good deal more in terms of inference.
W. D. Brinda
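As a companion to this abstract, here is a small simulation sketch, my own rather than the author's, of a linear model with iid Normal errors and the standard unbiased estimate of σ² from the residuals; all names and parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 200, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    beta = np.array([2.0, -1.0, 0.5])
    sigma = 1.5

    # Linear model y = X beta + eps with eps ~ N(0, sigma^2 I).
    y = X @ beta + rng.normal(scale=sigma, size=n)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat

    # Unbiased estimate of sigma^2: ||resid||^2 / (n - p).
    sigma2_hat = resid @ resid / (n - p)
    print(sigma2_hat)  # close to sigma**2 = 2.25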
Backmatter
Metadata
Title: Visualizing Linear Models
Author: Dr. W. D. Brinda
Copyright Year: 2021
Electronic ISBN: 978-3-030-64167-2
Print ISBN: 978-3-030-64166-5
DOI: https://doi.org/10.1007/978-3-030-64167-2
