
2018 | OriginalPaper | Chapter

10. Analysis of High-Dimensional Regression Models Using Orthogonal Greedy Algorithms

Authors: Hsiang-Ling Hsu, Ching-Kang Ing, Tze Leung Lai

Published in: Handbook of Big Data Analytics

Publisher: Springer International Publishing


Abstract

We begin by reviewing recent results of Ing and Lai (Stat Sin 21:1473–1513, 2011) on the statistical properties of the orthogonal greedy algorithm (OGA) in high-dimensional sparse regression models with independent observations. In particular, when the regression coefficients are absolutely summable, we present the bounds on the conditional mean squared prediction error and the empirical norm of OGA derived by Ing and Lai (Stat Sin 21:1473–1513, 2011). We then explore the performance of OGA under more general sparsity conditions. Finally, we obtain the convergence rate of OGA in high-dimensional time series models and illustrate the advantages of our results over those established for the Lasso by Basu and Michailidis (Ann Stat 43:1535–1567, 2015) and Wu and Wu (Electron J Stat 10:352–379, 2016).
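To make the procedure concrete, the OGA iteration amounts to forward stepwise selection with a full least-squares refit after each inclusion: at every step one picks the regressor most correlated with the current residual and then re-projects the response onto the span of all variables selected so far. Below is a minimal NumPy sketch of this iteration, not the authors' implementation; the function name oga, the assumption that the columns of X are standardized, and the use of a fixed number of steps m (rather than the data-driven stopping rules analyzed in the chapter) are illustrative assumptions.

```python
import numpy as np

def oga(X, y, m):
    """Minimal sketch of the orthogonal greedy algorithm (OGA).

    At each of m steps, select the column of X most correlated with the
    current residual, then recompute the least-squares fit of y on all
    columns selected so far (the "orthogonal" refit). Assumes the columns
    of X are standardized; m is a user-chosen number of iterations.
    """
    n, p = X.shape
    residual = y.copy()
    selected, coef = [], np.array([])
    for _ in range(m):
        scores = np.abs(X.T @ residual)   # correlation with current residual
        scores[selected] = -np.inf        # never reselect a chosen column
        selected.append(int(np.argmax(scores)))
        # Orthogonal step: project y onto the span of the selected columns.
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ coef
    return selected, coef

# Toy usage: a sparse linear model with p >> n.
rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.5 * rng.standard_normal(n)
print(oga(X, y, m=5)[0])  # the selected indices should include 2 and 7
```

The sketch only isolates the greedy selection and orthogonal refit; the chapter's results concern how accurately such iterations approximate the true regression function and how the number of steps should be chosen.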


Literature
Basu S, Michailidis G (2015) Regularized estimation in sparse high-dimensional time series models. Ann Stat 43:1535–1567
Bickel PJ, Ritov Y, Tsybakov AB (2009) Simultaneous analysis of Lasso and Dantzig selector. Ann Stat 37:1705–1732
Candès EJ, Tao T (2007) The Dantzig selector: statistical estimation when p is much larger than n. Ann Stat 35:2313–2351
Cai T, Zhang C-H, Zhou HH (2010) Optimal rates of convergence for covariance matrix estimation. Ann Stat 38:2118–2144
Donoho DL, Elad M, Temlyakov VN (2006) Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans Inform Theory 52:6–18
Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96:1348–1360
Findley DF, Wei C-Z (1993) Moment bounds for deriving time series CLT's and model selection procedures. Stat Sin 3:453–470
Gao F, Ing C-K, Yang Y (2013) Metric entropy and sparse linear approximation of ℓq-hulls for 0 < q ≤ 1. J Approx Theory 166:42–55
Ing C-K, Lai TL (2011) A stepwise regression method and consistent model selection for high-dimensional sparse linear models. Stat Sin 21:1473–1513
Ing C-K, Lai TL (2015) An efficient pathwise variable selection criterion in weakly sparse regression models. Technical Report, Academia Sinica
Ing C-K, Lai TL (2016) Model selection for high-dimensional time series. Technical Report, Academia Sinica
Ing C-K, Wei C-Z (2003) On same-realization prediction in an infinite-order autoregressive process. J Multivar Anal 85:130–155
Negahban SN, Ravikumar P, Wainwright MJ, Yu B (2012) A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Stat Sci 27:538–557
Raskutti G, Wainwright MJ, Yu B (2011) Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Trans Inform Theory 57:6976–6994
Tibshirani R (1996) Regression shrinkage and selection via the Lasso. J R Stat Soc Ser B 58:267–288
Tropp JA (2004) Greed is good: algorithmic results for sparse approximation. IEEE Trans Inform Theory 50:2231–2242
Wang Z, Paterlini S, Gao F, Yang Y (2014) Adaptive minimax regression estimation over sparse hulls. J Mach Learn Res 15:1675–1711
Wei C-Z (1987) Adaptive prediction by least squares predictors in stochastic regression models with applications to time series. Ann Stat 15:1667–1682
Wu WB, Wu YN (2016) Performance bounds for parameter estimates of high-dimensional linear models with correlated errors. Electron J Stat 10:352–379
Zhang C-H, Huang J (2008) The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann Stat 36:1567–1594
Metadata
Title
Analysis of High-Dimensional Regression Models Using Orthogonal Greedy Algorithms
Authors
Hsiang-Ling Hsu
Ching-Kang Ing
Tze Leung Lai
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-18284-1_10
