Excerpt
Large or massive data sets are increasingly common and often include measurements on many variables. It is frequently possible to reduce the number of variables considerably while still retaining much of the information in the original data set. Principal component analysis (PCA) is probably the best known and most widely used dimension-reducing technique for doing this. Suppose we have n measurements on a vector x of p random variables, and we wish to reduce the dimension from p to q, where q is typically much smaller than p. PCA does this by finding linear combinations, a1′x, a2′x, …, aq′x, called principal components, that successively have maximum variance for the data, subject to being uncorrelated with the previous ak′x's. Solving this maximization problem, we find that the vectors a1, a2, …, aq are the eigenvectors of the covariance matrix, S, of the data, corresponding to the q largest eigenvalues (see Eigenvalue, Eigenvector and Eigenspace). The eigenvalues give the variances of their respective principal components, and the ratio of the sum of the first q eigenvalues to the sum of the variances of all p original variables represents the proportion of the total variance in the original data set accounted for by the first q principal components. The familiar algebraic form of PCA was first presented by Hotelling (1933), though Pearson (1901) had earlier given a geometric derivation. The apparently simple idea has a number of subtleties, a surprisingly large number of uses, and a vast literature, including at least two comprehensive textbooks (Jackson 1991; Jolliffe 2002). …
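As a concrete illustration of the eigendecomposition described above, the following is a minimal sketch in Python/NumPy; the random data X and the choices n = 200, p = 5, q = 2 are purely hypothetical. It forms the sample covariance matrix S, takes as a1, …, aq the eigenvectors belonging to the q largest eigenvalues, projects the centered data onto them to obtain the principal components, and reports the proportion of total variance retained.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # hypothetical data: n = 200 observations, p = 5 variables
q = 2                          # target dimension

# Sample covariance matrix S (variables as columns)
S = np.cov(X, rowvar=False)

# eigh is appropriate because S is symmetric; it returns eigenvalues in
# ascending order, so reorder them to descending
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Columns of A are the vectors a1, ..., aq; the scores are the
# principal components ak'x evaluated on the centered data
A = eigvecs[:, :q]
scores = (X - X.mean(axis=0)) @ A

# Each eigenvalue is the variance of its component; the ratio of the sum of
# the first q eigenvalues to the sum of all p eigenvalues (the total variance)
# is the proportion of variance accounted for by the first q components
explained = eigvals[:q].sum() / eigvals.sum()
print(f"proportion of total variance retained by {q} components: {explained:.3f}")
```

In practice one would typically use a library routine (e.g. sklearn.decomposition.PCA) rather than this hand-rolled version, but the sketch makes the correspondence between the eigenvalues, the component variances, and the explained-variance ratio explicit.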