Experiments—particularly randomized controlled trials (RCTs), in which subjects are randomly assigned to treatment and control conditions—have rarely been applied to improving the design of energy efficiency programs or, more fundamentally, to determining the net savings from such programs. This paper discusses the use of experimentation in the energy efficiency program field, in the hope that explaining how these experiments can be used, and identifying the barriers to their use, will lead to more experimentation. First, a brief overview of experimental methods is presented, describing the advantages and disadvantages of conducting experiments in the context of developing and evaluating energy efficiency programs. The paper then discusses barriers to the use of experimental methods (including cost and equity issues) and suggests ways of overcoming them. Finally, recommendations are made for implementing key social experiments, covering the types of energy efficiency programs and issues that can make use of experimentation and the variables one might use for selecting treatments.
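The RCT logic sketched above—random assignment to treatment and control, with net savings read off as the difference in group outcomes—can be illustrated with a minimal simulation. All names and numbers here (`run_rct`, the baseline of 900 kWh/month, the assumed program effect) are illustrative assumptions, not figures from the paper:

```python
import random
import statistics

def run_rct(n_households=1000, true_effect_kwh=-50.0, seed=42):
    """Simulate a simple RCT for an energy efficiency program.

    Each household has a noisy baseline monthly usage; households
    randomly assigned to treatment receive the program, which shifts
    usage by `true_effect_kwh` on average. Because assignment is
    random, the difference in mean usage between the two groups is an
    unbiased estimate of the program's net savings.
    """
    rng = random.Random(seed)
    usage = []  # (treated, observed usage in kWh/month)
    for _ in range(n_households):
        baseline = rng.gauss(900.0, 100.0)
        treated = rng.random() < 0.5  # random assignment, p = 0.5
        effect = true_effect_kwh if treated else 0.0
        usage.append((treated, baseline + effect + rng.gauss(0.0, 30.0)))

    treat = [u for t, u in usage if t]
    ctrl = [u for t, u in usage if not t]
    # Estimated program effect (negative values indicate savings).
    return statistics.mean(treat) - statistics.mean(ctrl)

estimated_effect = run_rct()
```

With a fixed seed the estimate lands near the assumed -50 kWh effect; in a real evaluation the comparison would be between metered usage of treated and control customers rather than simulated draws.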
Allcott, H. (2011). Social norms and energy conservation. Journal of Public Economics, 95(9–10), 1082–1095.
Ayres, I., Raseman, S., & Shih, A. (2009). Evidence from two large field experiments that peer comparison feedback can reduce residential energy usage. NBER Working Paper No. 15386, Washington, DC: National Bureau of Economic Research.
Blumstein, C. (2010). Program evaluation and incentives for administrators of energy-efficiency programs: can evaluation solve the principal/agent problem? Energy Policy, 38, 6232–6239.
Campbell, D. (1969). Reforms as experiments. American Psychologist, 24, 409–429.
Campbell, D. (1988). The experimenting society. In D. Campbell & S. Overman (Eds.), Methodology and epistemology for social science: selected papers (pp. 290–314). Chicago, IL: University of Chicago Press.
Cappers, P. (2011). Personal communication with Peter Cappers, Lawrence Berkeley National Laboratory, February 25.
Coalition for Evidence-Based Policy (2009). Program evaluation: a variety of rigorous methods can help identify effective interventions. Washington, DC: US General Accounting Office.
Cook, T., & Campbell, D. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally.
Cook, T., & Shadish, W. (1994). Social experiments: some developments over the past fifteen years. Annual Review of Psychology, 45, 545–580.
Cotterill, S., John, P., Liu, H., & Nomura, H. (2009). Mobilizing citizen effort to enhance environmental outcomes: a randomized controlled trial of a door-to-door recycling campaign. Journal of Environmental Management, 91(2), 403–410.
Duflo, E., Glennerster, R., & Kremer, M. (2007). Using randomization in development economics research: a toolkit. London, UK: Centre for Economic Policy Research.
Ehrhardt-Martinez, K., & Laitner, J. (2009). Pursuing energy-efficient behavior in a regulatory environment: motivating policymakers, program administrators, and program implementers. Berkeley: California Institute for Energy and Environment.
Electric Power Research Institute [EPRI]. (2010). Guidelines for designing effective energy information feedback pilots: research protocols. Palo Alto: Electric Power Research Institute.
Faruqui, A., & Sergici, S. (2009). Household response to dynamic pricing of electricity: a survey of the experimental evidence. Washington, DC: The Brattle Group, Edison Electric Institute and the Electric Power Research Institute.
Faruqui, A., & Wood, L. (2008). Quantifying the benefits of dynamic pricing in the mass market. Washington: Edison Electric Institute.
Faruqui, A., Sergici, S., & Sharif, A. (2009). The impact of informational feedback on energy consumption: a survey of the experimental evidence. Washington: The Brattle Group.
Fischer, C. (2008). Feedback on household electricity consumption: a tool for saving energy? Energy Efficiency, 1, 79–104.
Greenberg, D., & Shroder, M. (2004). The digest of social experiments (3rd ed.). Washington: Urban Institute Press.
Haynes, L., Service, O., Goldacre, B., & Torgerson, D. (2012). Test, learn, adapt: developing public policy with randomised controlled trials. London: Cabinet Office Behavioural Insights Team.
Hines, P. (2011). Personal communication with Paul Hines, University of Vermont, May 1.
Joskow, P., & Schmalensee, R. (1986). Incentive regulation for electric utilities. Yale Journal on Regulation, 4, 1–49.
Laitner, S., Ehrhardt-Martinez, K., & Knight, C. (2009). The climate imperative and innovative behavior: encouraging greater advances in the production of energy-efficient technologies and services. Berkeley: California Institute for Energy and Environment.
Lutzenhiser, L. (2009). Behavioral assumptions underlying California residential sector energy efficiency programs. Berkeley: California Institute for Energy and Environment.
Megdal, L., & Bender, S. (2006). Evaluating media campaign effectiveness: others do it, why don't we? In Proceedings of the 2006 summer study on energy efficiency in buildings. Washington: American Council for an Energy-Efficient Economy.
Miguel, E., & Kremer, M. (2004). Worms: identifying impacts on education and health in the presence of treatment externalities. Econometrica, 72(1), 159–217.
Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Stern, P. (2008). Environmentally significant behavior in the home. In A. Lewis (Ed.), The Cambridge handbook of psychology and economic behavior (pp. 363–382). Cambridge: Cambridge University Press.
Sullivan, M. (2009). Using experiments to foster innovation and improve the effectiveness of energy efficiency programs. Berkeley: California Institute for Energy and Environment.
Summit Blue Consulting (2009). Impact evaluation of OPOWER SMUD pilot study. Boulder, CO: Summit Blue Consulting [now called Navigant Consulting].
Tiedemann, K. (2011). Behavioral change strategies that work: a review and analysis of field experiments targeting residential energy use behavior. In K. Ehrhardt-Martinez & S. Laitner (Eds.), People-centered initiatives for increasing energy savings, E-book. Washington: American Council for an Energy-Efficient Economy. Chapter 21.
Todd, A., Stuart, E., Schiller, S., & Goldman, C. (2012). Evaluation, measurement and verification (EM&V) of residential behavior-based energy efficiency programs: issues and recommendations, DOE/EE-0734, State and Local Energy Efficiency Action Network. Washington: US Department of Energy.
Torgerson, D., & Torgerson, C. (2008). Designing randomised trials in health, education and the social sciences. Basingstoke: Palgrave Macmillan.
Vine, E. (2008). Strategies and policies for improving energy efficiency programs: closing the loop between evaluation and implementation. Energy Policy, 36, 3872–3881.
Vine, E., Kushler, M., & York, D. (2007). Energy myth ten—energy efficiency measures are unreliable, unpredictable, and unenforceable. In B. K. Sovacol & M. A. Brown (Eds.), Energy and American society—thirteen myths. New York: Springer.
Vine, E., Hall, N., Keating, K., Kushler, M., & Prahl, R. (2012). Emerging issues in the evaluation of energy-efficiency programs: the US experience. Energy Efficiency, 5, 5–17.
Vine, E., Hall, N., Keating, K., Kushler, M., & Prahl, R. (2013). Emerging evaluation issues: persistence, behavior, rebound and policy. Energy Efficiency, 6, 329–339.
Experimentation and the evaluation of energy efficiency programs
Springer Netherlands