2016 | Original Paper | Book Chapter

16. Metaanalyse

Authors: Nicola Döring, Jürgen Bortz

Published in: Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften

Publisher: Springer Berlin Heidelberg


Abstract

This chapter conveys the following learning objectives:

  • Know the aims and possible applications of meta-analysis.
  • Be able to distinguish meta-analysis from narrative reviews.
  • Be able to critically evaluate published meta-analyses.
  • Be able to explain the steps in conducting a quantitative meta-analysis.
  • Be able to explain possible errors in conducting a meta-analysis, along with appropriate countermeasures.
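The learning objectives above center on the quantitative core of meta-analysis: pooling effect sizes from independent studies into one estimate. As a minimal illustration only (not the authors' own procedure from the chapter), the standard fixed-effect inverse-variance method can be sketched in Python; the effect sizes and variances below are hypothetical values chosen for the example.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect sizes.

    Each study is weighted by the reciprocal of its sampling variance.
    Returns the pooled effect, its standard error, and Cochran's Q
    heterogeneity statistic (large Q suggests the studies do not share
    a single true effect, pointing toward a random-effects model).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))
    return pooled, se, q

# Hypothetical Cohen's d values and sampling variances from three studies
pooled, se, q = fixed_effect_meta([0.30, 0.50, 0.40], [0.04, 0.02, 0.05])
print(pooled, se, q)  # pooled ≈ 0.426, se ≈ 0.103, Q ≈ 0.684
```

In practice one would use dedicated software (e.g., the tools surveyed in the chapter) rather than a hand-rolled routine, but the weighting logic shown here is the common core of the fixed-effect approach.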


Metadata
Copyright year
2016
DOI
https://doi.org/10.1007/978-3-642-41089-5_16