Abstract
In this paper, the strengths and weaknesses of randomized field experiments are discussed. Although it seems to be common knowledge that random assignment balances experimental and control groups on all confounders, other features of randomized field experiments are less widely appreciated. These include the role of random assignment in statistical inference and in representations of the mechanisms by which a treatment has its impact. Randomized experiments also have important limitations and depend on the fidelity with which they are implemented. In the end, randomized field experiments remain the best way to estimate causal effects, but they are a considerable distance from perfection.
Cite this article
Berk, R.A. Randomized experiments as the bronze standard. J Exp Criminol 1, 417–433 (2005). https://doi.org/10.1007/s11292-005-3538-2