2023 | Original Paper | Book Chapter

14. Bestimmung von Teststärke, Effektgröße und optimalem Stichprobenumfang

Author: Nicola Döring

Published in: Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften

Publisher: Springer Berlin Heidelberg

Abstract

This chapter conveys the following learning objectives: define statistical power and distinguish post-hoc from a-priori power analyses; know what an effect size is and how it is computed; distinguish different standardized effect-size measures and classify their magnitudes as small, medium, or large effects; explain the concept of the optimal sample size; and know how to determine the optimal sample size during study planning for studies that use different significance tests.
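
The planning workflow the chapter describes (choose a significance test, specify the expected effect size, fix the significance level and the desired power, then solve for the required sample size) can be illustrated with a short a-priori power analysis. The following is a minimal sketch, not taken from the chapter itself; it assumes Python with the statsmodels library, and the effect-size benchmarks d = 0.2 / 0.5 / 0.8 follow Cohen's (1988) conventions for small, medium, and large effects.

```python
# A-priori power analysis for an independent-samples t-test (sketch):
# solve for the per-group sample size given effect size, alpha, and power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Conventional benchmarks (Cohen, 1988): d = 0.2 small, 0.5 medium, 0.8 large.
for d in (0.2, 0.5, 0.8):
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                                       ratio=1.0, alternative='two-sided')
    print(f"d = {d}: about {n_per_group:.0f} participants per group")
```

For a medium effect (d = 0.5) with α = .05 and a target power of .80, this yields roughly 64 participants per group; smaller expected effects require considerably larger samples.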

Metadata
Title
Bestimmung von Teststärke, Effektgröße und optimalem Stichprobenumfang
Author
Nicola Döring
Copyright year
2023
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-64762-2_14