Open Access 2017 | OriginalPaper | Chapter

98. Learning from Evaluation

Author: Olivier Serrat

Published in: Knowledge Solutions

Publisher: Springer Singapore


Abstract

Evaluation serves two main purposes: accountability and learning. Development agencies have tended to prioritize the first, and given responsibility for that to centralized units. But evaluation for learning is the area where observers find the greatest need today and tomorrow.

Redirecting Division-of-Labor Approaches

Because the range of types (not to mention levels) of learning is broad, organizations have, from the early days, followed a division-of-labor approach to ascribing responsibility for learning. Typically, responsibility is vested in a policy (or research) unit to allow managers to focus on decision-making while other organizational constituents generate information and execute plans. Without doubt, this has encouraged compartmentalization of whatever learning is generated. What is more, since organizational constituents operate in different cultures to meet different priorities, each questions the value added by the arrangement.
Give me a fruitful error any time, full of seeds, bursting with its own corrections. You can keep your sterile truth for yourself.
—Vilfredo Pareto

Increasing Value Added from Independent Operations Evaluation

In many development agencies, independent evaluation contributes to decision-making throughout the project cycle and in the agencies as a whole, covering all aspects of sovereign and sovereign-guaranteed operations (public sector operations); nonsovereign operations; and the policies, strategies, practices, and procedures that govern them. The changing scope of evaluations and fast-rising expectations in relation to their use are welcome. However, the broad spectrum of independent evaluation demands that evaluation units strengthen and monitor the results focus of their operations. This means that the relevance and usefulness of evaluation findings to core audiences should be enhanced. Recurrent requests are that evaluation units should improve the timeliness of their evaluations, strengthen the operational bearing of the findings, and increase access to and exchange of the lessons. Minimum steps to increase value added from independent evaluation involve (i) adhering to strategic principles, (ii) sharpening evaluation strategies, (iii) distinguishing recommendation typologies, (iv) making recommendations better, (v) reporting evaluation findings, and (vi) tracking action on recommendations (ADB 2007). Here, performance management tools such as the balanced scorecard system might enable evaluation units to measure nonfinancial and financial results, covering soft but essential areas such as client satisfaction, quality and product cycle times, effectiveness of new product development, and the building of organizational and staff skills.
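To make the last of these steps, tracking action on recommendations, more concrete, the record kept for each recommendation might be sketched as follows. This is a minimal illustration only; the field names, statuses, and sample data are assumptions for the sketch, not a description of any agency's actual tracking system.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Status(Enum):
    """Illustrative action statuses for a tracked recommendation."""
    PENDING = "pending"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"

@dataclass
class Recommendation:
    """One evaluation recommendation and the action taken on it."""
    evaluation: str        # source evaluation report (hypothetical title below)
    text: str              # the recommendation itself
    responsible_unit: str  # unit accountable for follow-up
    due: date              # agreed date for completing the action
    status: Status = Status.PENDING

    def overdue(self, today: date) -> bool:
        # A recommendation is overdue if it is past its due date
        # and the agreed action has not yet been implemented.
        return today > self.due and self.status is not Status.IMPLEMENTED

# Usage: flag overdue recommendations for management attention.
recs = [
    Recommendation("Country Program Evaluation", "Sharpen results focus",
                   "Operations Department", date(2008, 6, 30)),
]
overdue = [r for r in recs if r.overdue(date(2009, 1, 1))]
```

A register of such records, periodically reviewed, is one simple way to keep evaluation findings from being filed and forgotten.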
Even so, the problématique of independent evaluation is still more complex. At the request of shareholders tasked with reporting to political leadership, taxpayers, and citizens, feedback from evaluation studies has often tended to support accountability (and hence provide for control), not serve as an important foundation block of a learning organization. Some now argue for a reinterpretation of the notion of accountability. Others cite the frequent lack of utility and the perverse, unintended consequences of evaluation for accountability: diversion of resources; emphasis on justification rather than improvement; distortion of program activities; incentives to lie, cheat, and distort; and a misplaced accent on control (Bemelmans-Videc et al. 2007).
The tension between the two functions of evaluation also demands that evaluation units distinguish primary audiences more clearly. Barring some overlap, these are audiences for accountability or audiences for learning. Obviously, this has implications for the knowledge products and services that evaluation units should deploy to reach different target groups, including the dissemination tactics associated with each, and underlines the message that one approach cannot be expected to suit all audiences. The key ingredients of the distinct reports that would have to be tailored for each pertain to evidence (by means of persuasive argument and authority), context (by means of audience context specificity and actionable recommendations), and engagement (by means of presentation of evidence-informed opinions, clear language and writing style, and appearance and design). Naturally, several knowledge management tools mentioned earlier would be leveraged to quicken the learning cycle of practice, experience, synthesis and innovation, dissemination, and uptake with one-time, near-term, and continuous efforts.
This is not to say that evaluation units face an either-or situation. Both accountability and learning are important goals for evaluation feedback. One challenge is to make accountability accountable. In essence, evaluation units are placing increased emphasis on results orientation while maintaining traditional checks on use of inputs and compliance with procedures. Lack of clarity on why evaluations for accountability are carried out, and what purpose they are expected to serve, contributes to their frequent lack of utility. Moreover, if evaluations for accountability add only limited value, resources devoted to documenting accountability can have a negative effect, perversely enough. However, evaluation for learning is the area where observers find the greatest need today and tomorrow, and evaluation units should be retooled to meet it. The table below suggests how work programs for evaluation might be reinterpreted to emphasize organizational learning.
Table. Programming work for organizational learning

Organizational level | Strategic driver(a) | Reporting mechanism | Content/focus | Responsibility | Primary user and uses | Timing
Corporate | | | | | |
Policy | | | | | |
Strategy | | | | | |
Operations | | | | | |

(a) The strategic drivers might be (i) developing evaluation capacity, (ii) informing corporate risk assessments by offices and departments, (iii) conducting evaluations in anticipation of known upcoming reviews, (iv) monitoring and evaluating performance, (v) critiquing conventional wisdom about development practice, and (vi) responding to requests from offices and departments.
Source: ADB (2007)
Evaluation capacity development promises much to the learning organization, and should be an activity in which centralized evaluation units have a comparative advantage. Capacity is the ability of people, organizations, and society as a whole to manage their affairs successfully; and capacity to undertake effective monitoring and evaluation is a determining factor of aid effectiveness. Evaluation capacity development is the process of reinforcing or establishing the skills, resources, structures, and commitment to conduct and use monitoring and evaluation over time. Many key decisions must be made when starting to develop evaluation capacity internally in a strategic way.1 Among the most important are
  • Architecture: locating and structuring evaluation functions and their coordination.
  • Strengthening evaluation demand: ensuring that there is an effective and well-managed demand for evaluations.
  • Strengthening evaluation supply: making certain that the skills and competencies are in place, with appropriate organizational support.
  • Institutionalizing evaluations: building evaluation into policy-making systems.
Why development agencies should want to develop in-house, self-evaluation capacity is patently clear. Stronger evaluation capacity will help them
  • Develop as a learning organization.
  • Take ownership of their visions for poverty reduction, if the evaluation vision is aligned with that.
  • Profit more effectively from formal evaluations.
  • Make self-evaluations an important part of their activities.
  • Focus quality improvement efforts.
  • Increase the benefits and decrease the costs associated with their operations.
  • Augment their ability to change programming midstream and adapt in a dynamic, unpredictable environment.
  • Build evaluation equity, if they are then better able to conduct more of their own self-evaluations instead of contracting them out.
  • Shorten the learning cycle.
The figure below poses key questions concerning how an organization may learn from evaluation, combining the two elements of learning by involvement and learning by communication. It provides the context within which to visualize continuing efforts to increase value added from independent evaluation, and underscores the role of internal evaluation capacity development. It also makes a strong case for more research into how development agencies learn how to learn.
The opinions expressed in this chapter are those of the author(s) and do not necessarily reflect the views of the Asian Development Bank, its Board of Directors, or the countries they represent.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 3.0 IGO license (http://creativecommons.org/licenses/by-nc/3.0/igo/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the Asian Development Bank, provide a link to the Creative Commons license and indicate if changes were made.
Any dispute related to the use of the works of the Asian Development Bank that cannot be settled amicably shall be submitted to arbitration pursuant to the UNCITRAL rules. The use of the Asian Development Bank’s name for any purpose other than for attribution, and the use of the Asian Development Bank’s logo, shall be subject to a separate written license agreement between the Asian Development Bank and the user and is not authorized as part of this CC-IGO license. Note that the link provided above includes additional terms and conditions of the license.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. A discussion of the benefits of external evaluation capacity development is in Serrat (2008).
Literature
ADB (2007) Acting on recommendations and learning from lessons in 2007. ADB, Manila
Bemelmans-Videc ML, Lonsdale J, Perrin B (eds) (2007) Making accountability work: dilemmas for evaluation and for audit. Transaction Publishers, New Brunswick, New Jersey
Serrat O (2008) Increasing value added from operations evaluation. ADB, Manila (unpublished)
ADB (2009) Learning for change in ADB. ADB, Manila
Metadata
Title: Learning from Evaluation
Author: Olivier Serrat
Copyright Year: 2017
Publisher: Springer Singapore
DOI: https://doi.org/10.1007/978-981-10-0983-9_98