
2015 | Original Paper | Book Chapter

3. Parameter Estimations

Author: Li Li

Published in: Selected Applications of Convex Optimization

Publisher: Springer Berlin Heidelberg


Abstract

In this chapter, we study parameter estimations from the viewpoint of optimization.
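To make this viewpoint concrete, recall the standard reformulation (a textbook identity, not quoted from this chapter): the maximum-likelihood estimate solves an optimization problem over the parameter space,

\[ \hat{\boldsymbol{\theta }} = \mathop{\mathrm{arg\,max}}\limits _{\boldsymbol{\theta } \in \boldsymbol{\varTheta }}\ \prod _{i=1}^{n}f(\boldsymbol{x}_{i};\boldsymbol{\theta } ) = \mathop{\mathrm{arg\,min}}\limits _{\boldsymbol{\theta } \in \boldsymbol{\varTheta }}\left (-\sum _{i=1}^{n}\log f(\boldsymbol{x}_{i};\boldsymbol{\theta } )\right ), \]

and for many common models (e.g., the mean of a Gaussian with known covariance) the negative log-likelihood is convex in \(\boldsymbol{\theta }\), which places such estimation problems within the scope of convex optimization.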


Footnotes
1
In the frequentist approach, the parameter \(\boldsymbol{\theta }\) is assumed to be a constant, and the MLE method estimates \(\boldsymbol{\theta }\) with a certain confidence. In the Bayesian approach, by contrast, the parameter \(\boldsymbol{\theta }\) is further taken as a random variable having a certain a priori distribution. To distinguish these two approaches, we use the form \(f(\boldsymbol{x};\boldsymbol{\theta } )\) for the frequentist approach (including the MLE method) and the form \(f(\boldsymbol{x}\ \vert \ \boldsymbol{\theta })\) for the Bayesian approach [1] throughout this book. If we assume that \(\boldsymbol{\theta }\) follows a uniform distribution on \(\boldsymbol{\varTheta }\) (i.e., \(f(\boldsymbol{\theta })\) is constant on \(\boldsymbol{\varTheta }\), so that we have no special a priori information about \(\boldsymbol{\theta }\)), the posterior is proportional to the frequentist likelihood function, since \(f(\boldsymbol{\theta })\prod _{i=1}^{n}f(\boldsymbol{x}_{i}\ \vert \ \boldsymbol{\theta }) \propto \prod _{i=1}^{n}f(\boldsymbol{x}_{i};\boldsymbol{\theta } )\). Hence the MLE and Maximum A Posteriori (MAP) estimators coincide under such assumptions; a numerical sketch of this coincidence follows below.
It should be pointed out that some of the literature uses the form \(f(\boldsymbol{x}\ \vert \ \boldsymbol{\theta })\) for the MLE method, too.
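The coincidence noted in footnote 1 is easy to verify numerically. Below is a minimal sketch (not from the chapter; the Gaussian model, the interval \(\boldsymbol{\varTheta } = [-10, 10]\), and all variable names are illustrative assumptions): with a flat prior, the negative log-posterior differs from the negative log-likelihood only by an additive constant, so both objectives have the same minimizer.

    # Minimal sketch: MLE and MAP coincide under a uniform prior.
    # Model and names are hypothetical, for illustration only.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=1.0, size=100)  # synthetic sample, known sigma = 1

    def neg_log_likelihood(theta):
        # Frequentist objective: -sum_i log f(x_i; theta)
        return -norm.logpdf(x, loc=theta, scale=1.0).sum()

    def neg_log_posterior(theta):
        # Bayesian objective with f(theta) = 1/20 on Theta = [-10, 10]:
        # the log-prior is constant, so it cannot move the minimizer.
        return neg_log_likelihood(theta) - np.log(1.0 / 20.0)

    mle = minimize_scalar(neg_log_likelihood, bounds=(-10, 10), method="bounded").x
    map_ = minimize_scalar(neg_log_posterior, bounds=(-10, 10), method="bounded").x
    print(mle, map_)  # both match the sample mean up to solver tolerance

Both calls return the sample mean (the closed-form MLE for a Gaussian mean with known variance), confirming that the MAP estimator reduces to the MLE when \(f(\boldsymbol{\theta })\) is constant on \(\boldsymbol{\varTheta }\).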
 
References
1.
Lee, P.M.: Bayesian Statistics: An Introduction, 4th edn. Wiley, Chichester (2012)
2.
Lehmann, E.L., Casella, G.: Theory of Point Estimation, 2nd edn. Springer, New York (1998)
3.
Casella, G., Berger, R.L.: Statistical Inference, 2nd edn. Duxbury and Thomson Learning, Pacific Grove (2002)
6.
Wu, C.: On the convergence properties of the EM algorithm. Ann. Stat. 11(1), 95–103 (1983)
7.
Ma, J., Xu, L., Jordan, M.: Asymptotic convergence rate of the EM algorithm for Gaussian mixtures. Neural Comput. 12(12), 2881–2907 (2000)
8.
Neal, R.M., Hinton, G.E.: A view of the EM algorithm that justifies incremental, sparse, and other variants. In: Jordan, M.I. (ed.) Learning in Graphical Models, pp. 355–368. MIT, Cambridge (1999)
9.
McLachlan, G., Krishnan, T.: The EM Algorithm and Extensions, 2nd edn. Wiley, Hoboken (2008)
10.
Gupta, M.R., Chen, Y.: Theory and use of the EM algorithm. Found. Trends Signal Process. 4(3), 223–296 (2010)
11.
Hartley, H.: Maximum likelihood estimation from incomplete data. Biometrics 14(2), 174–194 (1958)
12.
Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Methodol. 39(1), 1–38 (1977)
13.
Roweis, S.: EM algorithms for PCA and SPCA. In: Advances in Neural Information Processing Systems, vol. 10, pp. 626–632. Denver (1998)
14.
Tipping, M.E., Bishop, C.M.: Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B 61(3), 611–622 (1999)
15.
Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recognit. Lett. 31(8), 651–666 (2010)
16.
Arthur, D., Vassilvitskii, S.: How slow is the k-means method? In: Proceedings of the 22nd Annual Symposium on Computational Geometry, Sedona, pp. 144–153. ACM (2006)
17.
Bhat, B.R.: Maximum likelihood estimation for positively regular Markov chains. Sankhyā: Indian J. Stat. 22(3–4), 339–344 (1960)
Metadata
Title
Parameter Estimations
Author
Li Li
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-46356-7_3