Randomized experiments as the bronze standard

Abstract

In this paper, the strengths and weaknesses of randomized field experiments are discussed. Although it seems to be common knowledge that random assignment balances experimental and control groups on all confounders, at least in expectation, other features of randomized field experiments are somewhat less appreciated. These include the role of random assignment in statistical inference and in representations of the mechanisms by which the treatment has its impact. Randomized experiments also have important limitations, and their results depend on the fidelity with which they are implemented. In the end, randomized field experiments are still the best way to estimate causal effects, but they remain a considerable distance from perfection.
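To make two of the abstract's points concrete, the following is a minimal sketch, not taken from the paper itself; all variable names and numeric values are illustrative assumptions. It simulates random assignment, checks that a confounder is roughly balanced across treated and control groups, and then uses the assignment mechanism itself for inference via a permutation (randomization) test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # total number of units (assumed)

# Hypothetical pre-treatment confounder (e.g., number of prior arrests).
confounder = rng.poisson(lam=2.0, size=n)
true_effect = 1.5  # assumed constant treatment effect

# Random assignment: exactly half the units are treated, chosen at random.
treated = rng.permutation(np.repeat([True, False], n // 2))

# Outcome depends on the confounder, the treatment, and noise.
outcome = 0.8 * confounder + true_effect * treated + rng.normal(size=n)

# 1) Balance: random assignment equalizes confounders only in expectation,
#    so the two group means should be close but need not be identical.
print("confounder mean, treated:", confounder[treated].mean())
print("confounder mean, control:", confounder[~treated].mean())

# 2) Randomization inference: under the sharp null of no effect, relabeling
#    who was treated would not change any outcome, so the observed mean
#    difference is compared with its distribution over re-randomizations.
observed = outcome[treated].mean() - outcome[~treated].mean()
perm = np.empty(5000)
for i in range(perm.size):
    labels = rng.permutation(treated)
    perm[i] = outcome[labels].mean() - outcome[~labels].mean()

p_value = np.mean(np.abs(perm) >= abs(observed))
print(f"estimated effect: {observed:.2f}, permutation p-value: {p_value:.4f}")
```

Because the simulated effect is nonzero, the permutation p-value should come out small here; with no real effect it would be roughly uniform on [0, 1]. The point of the sketch is that both the balance property and the inference rest on the random assignment itself, not on a statistical model of the outcomes.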

Author information

Correspondence to Richard A. Berk.

Cite this article

Berk, R.A. Randomized experiments as the bronze standard. J Exp Criminol 1, 417–433 (2005). https://doi.org/10.1007/s11292-005-3538-2
