Abstract
This chapter discusses least squares regression, one of the most widely used analytics tools for building predictive models. The chapter begins by highlighting the reasons for regression's popularity, including its intuitive linear structure and its ease of implementation. It emphasizes the flexibility of regression, which can be applied to many types of problems, even those that may not initially seem suited to a linear model.
The section on multiple regression explores how to deal with various types of variables encountered in regression applications, such as nominal, ordinal, and continuous variables. It explains how to code nominal and ordinal variables using indicator variables for meaningful inclusion in regression.
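The indicator-variable coding described above can be sketched in a few lines. This is an illustrative example only (the chapter itself works in KNIME); the `Heating` variable and its categories are made up for demonstration.

```python
import pandas as pd

# Hypothetical nominal variable with three categories.
df = pd.DataFrame({"Heating": ["Gas", "Electric", "Gas", "Solar"]})

# Code the nominal variable as 0/1 indicator (dummy) variables.
# drop_first=True avoids the dummy-variable trap: when the model has an
# intercept, one category must serve as the reference level.
dummies = pd.get_dummies(df["Heating"], prefix="Heating", drop_first=True)
print(dummies.columns.tolist())
```

A k-category nominal variable thus enters the regression as k - 1 indicator columns, each coefficient measuring the shift relative to the reference category.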
The chapter then delves into handling nonlinearity in regression, discussing how to detect and address it using polynomial models or variable transformations.
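One way to detect and address nonlinearity, as a sketch: fit both a straight line and a polynomial and compare residuals. The data here are synthetic (an assumption for illustration, not the chapter's housing data).

```python
import numpy as np

# Synthetic data with a genuinely quadratic relationship.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 1.5 * x + 0.8 * x**2 + rng.normal(0, 1.0, 200)

# Least squares fit of a straight line vs. a degree-2 polynomial.
lin_coef = np.polyfit(x, y, deg=1)
quad_coef = np.polyfit(x, y, deg=2)

lin_rss = np.sum((y - np.polyval(lin_coef, x)) ** 2)
quad_rss = np.sum((y - np.polyval(quad_coef, x)) ** 2)
# A large drop in residual sum of squares from the linear to the
# quadratic fit signals nonlinearity the straight line is missing.
```

Transforming a variable (e.g. taking logs) plays the same role when the curvature matches a known functional form.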
Next, the chapter focuses on evaluating the predictive accuracy of regression models using metrics such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE).
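The three accuracy metrics can be computed directly from actual and predicted values; the numbers below are made up for illustration.

```python
import numpy as np

# Illustrative actual vs. predicted values (not from the housing data).
actual = np.array([100.0, 200.0, 150.0, 80.0])
predicted = np.array([110.0, 190.0, 155.0, 100.0])

errors = actual - predicted
mae = np.mean(np.abs(errors))                  # Mean Absolute Error
rmse = np.sqrt(np.mean(errors ** 2))           # Root Mean Square Error
mape = np.mean(np.abs(errors / actual)) * 100  # Mean Abs. Pct. Error, in %
```

RMSE penalizes large errors more heavily than MAE, while MAPE expresses error relative to the magnitude of the actual value.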
To illustrate the concepts discussed, KNIME is used with a subset of the Ames, Iowa, housing price data. The chapter demonstrates ordinary least squares regression, stepwise regression, and LASSO (L1 regularization) for predictive modeling and compares their prediction accuracy on a test set.
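To give a flavor of stepwise selection outside KNIME, here is a minimal forward-stepwise sketch in numpy on synthetic data (an assumption standing in for the Ames housing subset): at each step, add the predictor that most reduces the residual sum of squares.

```python
import numpy as np

# Synthetic data: 8 candidate predictors, only two of which matter.
rng = np.random.default_rng(42)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3]] = [4.0, -2.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

def rss(cols):
    """Residual sum of squares of an OLS fit on the chosen columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

# Forward stepwise: greedily add the predictor that lowers RSS most.
selected, remaining = [], list(range(p))
for _ in range(2):
    best = min(remaining, key=lambda j: rss(selected + [j]))
    selected.append(best)
    remaining.remove(best)

print(sorted(selected))  # columns chosen by forward selection
```

LASSO reaches a similar end (a sparse model) by a different route: it shrinks coefficients via an L1 penalty rather than adding variables one at a time, and the penalty weight is typically chosen by cross-validation.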