
2016 | OriginalPaper | Chapter

14. Bestimmung von Teststärke, Effektgröße und optimalem Stichprobenumfang

Authors : Nicola Döring, Jürgen Bortz

Published in: Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften

Publisher: Springer Berlin Heidelberg


Abstract

This chapter conveys the following learning objectives: Being able to define statistical power and to distinguish post hoc from a priori power analyses. Knowing what is meant by effect size and how it is calculated. Being able to distinguish different standardized effect size measures and to classify their magnitude as small, medium, or large effects. Being able to explain the concept of the optimal sample size. Knowing how to determine, during study planning, the optimal sample size for studies using different significance tests.
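The two core calculations named in these learning objectives can be sketched briefly. This is not material from the chapter itself, but a minimal illustration: Cohen's d as a standardized mean difference with a pooled standard deviation, and an a priori sample size estimate per group for a two-sided two-sample t-test, using the normal approximation rather than the exact noncentral t distribution (so the result can differ by about one unit from tables such as Cohen's).

```python
import math
from statistics import NormalDist

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) with pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

def n_per_group(d, alpha=0.05, power=0.80):
    """A priori sample size per group, two-sided two-sample t-test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2
    return math.ceil(n)

# Medium effect (d = 0.5), alpha = .05, power = .80:
print(n_per_group(0.5))  # 63 per group (exact t-based tables give 64)
```

For practical planning, exact calculations as implemented in software such as G*Power (Faul et al., 2007) are preferable; the approximation above only illustrates how effect size, alpha level, and power jointly determine the required n.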


Metadata
Title
Bestimmung von Teststärke, Effektgröße und optimalem Stichprobenumfang
Authors
Nicola Döring
Jürgen Bortz
Copyright Year
2016
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-642-41089-5_14