
Open Access 2017 | Supplement | Book Chapter

6. An Analytical Framework for Evaluating a Diverse Climate Change Portfolio


Abstract

The Climate Change Sub-programme (CCSP) of the United Nations Environment Programme (UNEP) has four components: Adaptation, Mitigation, REDD+ and Science and Outreach. It cuts across all UNEP divisions located in Nairobi and Paris, and relies heavily on partnerships to drive its work and scale up its impact. The CCSP evaluation, conducted by the UNEP Evaluation Office over the period 2013–2014, aimed to assess the relevance and overall performance of the CCSP between 2008 and 2013. The complexity, geographical spread and rather weak results framework of the CCSP, coupled with limited evaluation resources and a shortage of evaluative evidence, required the Evaluation Office to develop an innovative analytical framework and data collection approach for this evaluation. It combined three areas of focus (strategic relevance, sub-programme performance and factors affecting performance), five interlinked units of analysis (UNEP corporate, sub-programme, country, component and project level), a Theory of Change approach and an appropriate combination of data collection tools. This chapter discusses the overall evaluation approach and process, followed by a summary of lessons learned which could be useful for similar future exercises.

6.1 Introduction

The United Nations Environment Programme (UNEP) has been working on climate-related issues for more than 20 years,1 but it has had a formal Climate Change Sub-programme (CCSP) only since the Medium-Term Strategy (MTS) for 2010–2013. According to the MTS 2010–2013,2 UNEP’s CCSP objective is “to strengthen the ability of countries to integrate climate change responses into national development processes”. UNEP is expected to support countries and institutions in meeting the challenges of climate change by promoting ecosystem-based approaches to adaptation, up-scaling the use of and facilitating access to financing for clean and renewable energy and technologies, and capitalizing on the opportunities of reducing emissions from deforestation and forest degradation. UNEP also works to improve awareness and understanding of climate change science for policy decision-making. As such, the UNEP CCSP is organized around four components: Adaptation, Mitigation, REDD+, and Science and Outreach. Each component has its own Expected Accomplishments (direct results expected from UNEP’s interventions), to be achieved through Programme of Work Outputs (the products and services delivered by UNEP).
In UNEP, Sub-programmes cut across the divisional structure of the organization and the CCSP is the most cross-cutting of all sub-programmes in UNEP. For instance, the Division for Technology, Industry and Economics, based in Paris, is accountable for delivering the Mitigation component and the Division of Environmental Policy Implementation, based in Nairobi, manages the majority of projects under the Adaptation and REDD+ components. The Division for Early Warning and Assessments, based in Nairobi, is accountable for the delivery of certain assessments and assessment capacity building under the Science and Outreach component. The structural complexity and geographical spread of the CCSP posed specific challenges for the evaluation, as described below.
The CCSP relies heavily on partnerships to drive its work. These partnerships are important both for global efforts, such as the preparation of annual global reports that help establish norms and track progress in achieving them, and for efforts at the regional and country level. Partners often bring complementary technical skills and provide access to decision-making fora. Since UNEP is a non-resident agency, it must also rely on operating through partners at the country level. Cooperation with government and other local partners is necessary because country projects/pilots serve the double purpose of developing and testing concepts and tools, and of building country ownership and the capacity to use them to promote in-country replication. This, too, posed challenges for the evaluation, in particular in terms of attributing Sub-programme results to UNEP.

6.2 Scope of the Evaluation

In accordance with the UNEP Evaluation Policy, all Sub-programmes are evaluated on a rotating basis every 4 years.3 They are part of a larger evaluation architecture that includes project, sub-programme and UNEP-wide Medium-Term Strategy evaluations. Sub-programme evaluations are conducted by the UNEP Evaluation Office in consultation with the relevant UNEP Divisions. While the Evaluation Office reports to the UNEP Executive Director, its evaluations are conducted independently and evaluation findings are reported without interference. However, the Evaluation Office does not enjoy financial independence, and its limited financial and human resources are sometimes a major challenge.4
The CCSP evaluation aimed to assess the relevance and overall performance of UNEP’s work related to climate change from 2008 to 2013 against standard evaluation criteria (relevance, effectiveness, efficiency, sustainability and impact). The evaluation assessed whether, in the period under review, UNEP was able to strengthen the ability of countries to integrate climate change responses into national development processes, by providing environmental leadership in the international response to climate change and complementing other processes and the work of other institutions. The evaluation was an in-depth, independent exercise conducted by a multidisciplinary team of consultants and Evaluation Office staff, with oversight from the UNEP Evaluation Office. The author was in charge of the overall design, management and quality assurance of the evaluation process, and participated in interviews and country visits.
The evaluation tried to answer the following key questions:
  • Are the Sub-programme objectives and strategy relevant to the global challenges posed by climate change, global, regional and country needs, the international response and UNEP’s evolving mandate and capacity in this area?
  • Has UNEP achieved its objectives in the area of climate change? Have projects been efficiently implemented and produced tangible outputs as expected? Are the required external factors in place so that the CCSP outputs can lead to the expected outcomes and, ultimately, to sustainable, large-scale impact?
  • What are the key factors affecting sub-programme performance, such as portfolio design and structure; human and financial resources administration; collaboration and partnerships; and monitoring and evaluation?
The evaluation covered the four components of the CCSP. However, because the Science and Outreach component was largely implemented within projects belonging to the first three components, the Evaluation Team decided to treat Science and Outreach as a cross-cutting issue rather than a stand-alone component. The portfolio under review included 57 projects and programmes classified by UNEP as belonging to the CCSP that were either on-going or had been initiated after 1 January 2008. A little over half (32) of these projects were completed at the time of the evaluation, 20 were on-going and the remaining 5 were inactive or of unknown status. Within this portfolio, a number of interventions known as “umbrella projects” comprised several independent sub-projects contributing to the same Expected Accomplishment or (set of) Programme of Work Outputs. If all sub-projects were counted, the total evaluation portfolio comprised about 88 interventions. Their spread over the thematic components was as follows: 60 % mitigation, 23 % adaptation, 5 % REDD+ and 9 % science and outreach; the remaining 3 % combined both mitigation and adaptation objectives.
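The composition figures above lend themselves to a quick bookkeeping check. The short Python sketch below simply restates the counts from this section; the category labels and the script itself are illustrative, not part of the evaluation.

```python
# Tally of the CCSP evaluation portfolio as described in the text.
from collections import Counter

# Status of the 57 projects/programmes formally classified under the CCSP.
status_counts = Counter({"completed": 32, "on-going": 20, "inactive/unknown": 5})
assert sum(status_counts.values()) == 57

# Thematic spread of the ~88 interventions once umbrella sub-projects are counted.
theme_shares = {
    "mitigation": 0.60,
    "adaptation": 0.23,
    "REDD+": 0.05,
    "science and outreach": 0.09,
}
theme_shares["mixed mitigation/adaptation"] = round(1 - sum(theme_shares.values()), 2)

for theme, share in theme_shares.items():
    print(f"{theme}: {share:.0%} (~{share * 88:.0f} interventions)")
```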

6.3 Challenges to the Evaluation

A rapid assessment of the evaluability of the sub-programme during the inception phase brought to light several challenges the evaluation was bound to face. First, it was expected to assess a large, highly diverse and dispersed project portfolio, spread over four components and managed by various branches across the organization based in different duty stations. Second, a review of strategic documents had revealed serious issues with the results framework of the sub-programme, namely its internal logic, the results levels at which Expected Accomplishments and Programme of Work Outputs were pitched, and the changes in results statements, indicators and targets every 2 years. Table 6.1 presents the results framework for the mitigation component as an illustration. Third, the assessment of strategic relevance would prove quite challenging considering the rapidly changing political and institutional context, such as new decisions emerging from UNFCCC COPs and other processes.
Table 6.1
Results framework of the mitigation component of the Climate Change Sub-programme

Programme of work 2010–2011

Expected accomplishment EA(b): Countries make sound policy, technology, and investment choices that lead to a reduction in greenhouse gas emissions and potential co-benefits, with a focus on clean and renewable energy sources, energy efficiency and energy conservation
Programme of work outputs:
1. Technical and economic assessments of renewable energy potentials are undertaken and used by countries in making energy policy and investment decisions favouring renewable energy sources
2. National climate technology plans are developed and used to promote markets for cleaner energy technologies and hasten the phase-out of obsolete technologies
3. Knowledge networks to inform and support key stakeholders in the reform of policies and the implementation of programmes for renewable energy, energy efficiency, and reduced greenhouse gas emissions are established
4. Macro-economic and sectoral analyses of policy options for fostering low greenhouse gas emissions, including technology transfer, are undertaken and used
5. Sustainability criteria and evaluation tools for biofuels development are refined globally and applied nationally
6. Public/private partnerships are promoted and best practices are applied leading to energy efficiency improvements and greenhouse gas emission reductions

Expected accomplishment EA(c): Improved technologies are deployed and obsolescent technologies phased out, through financing from private and public sources including the Clean Development Mechanism and the Joint Implementation Mechanism of the Kyoto Protocol
Programme of work outputs:
1. Barriers are removed and access is improved to financing for renewable and energy-efficient technologies at the national level through targeted analysis of costs, risks and opportunities of clean energy and low-carbon technologies in partnership with the finance sector
2. Clean Development Mechanism projects are stimulated through market facilitation and the application of relevant tools, methodologies and global analyses, including on environmental sustainability
3. National institutional capacity for assessing and allocating public funding and leveraging private investment for clean energy is strengthened

Programme of work 2012–2013

Expected accomplishment EA(b): Low carbon and clean energy sources and technology alternatives are increasingly adopted, inefficient technologies are phased out and economic growth, pollution and greenhouse gas emissions are decoupled by countries based on technical and economic assessments, cooperation, policy advice, legislative support and catalytic financing mechanisms
Programme of work outputs:
1. Economic and technical (macroeconomic, technology and resource) assessments of climate change mitigation options that include macroeconomic and broad environmental considerations are undertaken and used by countries and by major groups in developing broad national mitigation plans
2. Technology-specific plans are developed through public-private collaboration and used to promote markets for and transfer of cleaner energy technologies and speed up the phase-out of obsolete technologies in a manner that can be monitored, reported and verified
3. Knowledge networks and United Nations partnerships to inform and support key stakeholders in the reform of policies, economic incentives and the implementation of programmes for renewable energy, energy efficiency and reduced greenhouse-gas emissions are established, supported and used to replicate successful approaches

Expected accomplishment EA(c): Countries’ access to climate change finance is facilitated at all levels and successful innovative financing mechanisms are assessed and promoted at the regional and global level
Programme of work outputs:
1. Financing barriers are removed and access to financing is improved for renewable and energy-efficient technologies through public-private partnerships that identify costs, risks, and opportunities for clean energy and low-carbon technologies
2. Use of the Clean Development Mechanism and other innovative approaches to mitigation finance is stimulated through analyses and the development and application of relevant tools and methodologies, including on environmental sustainability and measuring, reporting and verification compatibility
3. Institutional capacity for assessing and allocating public funding and leveraging private investment for clean energy is strengthened and new climate finance instruments are developed and applied by financiers, lenders and investors
4. New climate finance instruments are launched and investments in clean energy are made by first-mover financiers, lenders and investors
5. Financial institutions adopt best climate, environmental and sustainability practices

Sources: UNEP Biennial Programme of Work and Budget for 2010–2011; UNEP Biennial Programme of Work and Budget for 2012–2013
At the same time, the evaluation would need to cope with very limited evaluative evidence. For instance, monitoring of progress at the sub-programme level was limited to output milestones and weak outcome indicators. Project reporting was donor-specific, incomplete and focused on activities and outputs and, over the period covered by the evaluation, less than one quarter of the projects in the portfolio under review had been independently evaluated due to resource limitations and a lack of pressure from senior management and Member States. In addition, this ambitious evaluation had to be carried out with a very limited budget, which allowed the recruitment of only three consultants for a relatively short period of time.
These challenges were, however, not specific to the Climate Change Sub-programme evaluation. Similar issues were encountered by previous sub-programme evaluations, requiring the Evaluation Office to develop an innovative analytical framework and data collection approach for sub-programme evaluations.5 These were further refined for the CCSP evaluation and are discussed in the following sections, followed by a summary of lessons learned which could be useful for future similar exercises.

6.4 Analytical Framework of the Evaluation

The evaluation assessed the Climate Change Sub-programme in three areas of focus, corresponding to three distinct but strongly related clusters of evaluation questions (see Table 6.2). First, the evaluation assessed the strategic relevance and appropriateness of sub-programme objectives and strategy. It analysed the clarity and coherence of the CCSP’s vision, objectives and intervention strategy within the changing global, regional and national context and the evolving overall mandate and comparative advantages of UNEP. Second, the evaluation assessed the overall performance of the CCSP in terms of effectiveness (i.e. achievement of outcomes), sustainability, up-scaling and catalytic effects. It also reviewed the potential or likelihood that outcomes were leading towards impact. Which outcomes were assessed was determined by a reconstruction of the sub-programme’s Theory of Change (see below). Third, the evaluation examined in more detail the factors affecting performance: intervention design issues, organizational aspects, partnerships and other elements that affected the overall performance of the sub-programme.
Table 6.2
Areas of focus and examples of evaluation questions

Strategic relevance
  • Are the sub-programme objectives and strategy relevant to the global challenges posed by climate change; global, regional and country needs; the international response; and UNEP’s evolving mandate and capacity in this area?
  • How are the respective strategies of the CCSP components designed to ensure relevance in their respective thematic areas and how do their efforts address crosscutting areas (DRR, land-use, etc.)?

Sub-programme performance
  • Has UNEP achieved its expected accomplishments in the area of climate change?
  • Have projects been efficiently implemented and produced tangible outputs as expected?
  • Are the main drivers present and are the key assumptions valid so that the outputs delivered by the sub-programme can lead to sustainable, higher-level changes at outcome and impact level?

Factors affecting performance
  • What were the key factors affecting sub-programme performance?
  • How well were the overall sub-programme and its project portfolio designed and structured?
  • Are organizational arrangements adequate, and what is the quality of management within the operational units?
  • Have human and financial resources been optimally deployed to achieve sub-programme objectives?
  • What role did partnerships play in achieving sub-programme objectives and are these optimally developed?
  • How well were sub-programme activities and achievements monitored and evaluated?

Source: UNEP Evaluation Office 2014/2015, Evaluation of the UNEP Sub-programme on Climate Change
These areas of focus were not addressed in sequence but simultaneously, as they are strongly linked and dynamic, as shown in Fig. 6.1. For instance, elements of the strategic relevance of UNEP’s involvement in climate change determine the scope and scale of the sub-programme and shape the kinds of products, services and delivery mechanisms that are used to reach core objectives. Decisions surrounding the strategic relevance of the CCSP thereby also influence the administrative, management and implementation structure, and other factors that affect performance. Sub-programme performance, in turn, affects funding availability and programme orientation. Progress made on expected accomplishments and impact also changes the priority needs of countries and other stakeholders, justifying strategic adjustments to sub-programme objectives and strategies.
As illustrated in Fig. 6.2, the evaluation was conducted across five units of analysis. The two upper units are UNEP corporate and the Sub-programme itself. Considering the vast number and high variety of interventions, and the highly diverse institutional arrangements and other factors affecting performance under the CCSP, neither UNEP nor the sub-programme as a whole was the most practical and straightforward level at which to conduct the analysis. Nor were they the best levels at which to attribute performance and uncover lessons learned.
Therefore, three lower units of analysis were used, which, combined, would provide sufficient information and analysis for the assessment of the sub-programme as a whole. The main unit of analysis was the sub-programme component (adaptation, mitigation etc.). At that level, performance could be most easily attributed to the line managers and partners delivering against the Expected Accomplishments of each component. The components were also the best units of analysis for learning, as they were usually better defined and delimited, and less complex than the sub-programme as a whole, but still provided the opportunity to see linkages between interventions either within or between main areas of intervention.
Another useful aggregated level of analysis was the country, where it was possible to obtain insights into the linkages (complementarities and synergies) between projects within a component, between the different components of the CCSP, and between the CCSP and other sub-programmes within a single, confined geographical and political space. The evaluation team visited six countries selected on the basis of geographical spread (spanning the regions of Latin America, Africa, Europe, West Asia, and Asia and the Pacific), the presence of sample projects (see next paragraph) and the diversity of UNEP support on climate change in the country. A country case study was prepared for each visited country.
The lowest unit of analysis was the individual project. This was the most appropriate level at which to unveil factors affecting performance, but as the resources for the evaluation were limited, only a sample of projects could be examined in sufficient depth. The evaluation team prepared rapid reviews of 19 projects – about one third of the entire portfolio. Projects were selected on the basis of four criteria: thematic area (adaptation, mitigation or REDD+), project size (based on estimated cost), project scope (global, regional or national) and maturity.
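To illustrate the sampling logic just described, the Python sketch below draws a purposive sample spread across the four selection criteria. The field names, value sets and the one-pick-per-stratum rule are assumptions made for illustration; the chapter does not specify the exact selection procedure used.

```python
import random
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    theme: str     # "adaptation" | "mitigation" | "REDD+"  (thematic area)
    size: str      # "small" | "medium" | "large"           (by estimated cost)
    scope: str     # "global" | "regional" | "national"
    maturity: str  # "completed" | "on-going"

def sample_projects(portfolio, n_target=19, seed=1):
    """Spread picks across theme/size/scope/maturity strata, then top up at random."""
    rng = random.Random(seed)
    strata = {}
    for p in portfolio:
        strata.setdefault((p.theme, p.size, p.scope, p.maturity), []).append(p)
    # Take at most one project per stratum first, to cover the criteria space.
    sample = [rng.choice(group) for group in strata.values()][:n_target]
    remaining = [p for p in portfolio if p not in sample]
    sample += rng.sample(remaining, max(0, n_target - len(sample)))
    return sample
```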
The evaluation made use of a Theory of Change (ToC) approach to address several evaluation questions. A ToC depicts the logical sequence of desired changes (also called “causal pathways” or “results chains”) to which an intervention, programme or strategy is expected to contribute. It shows the causal linkages between changes at different results levels (outputs, outcomes, intermediate states and impact), and the actors and factors influencing those changes. Initially inspired by guidance from the Global Environment Facility,6 the UNEP Evaluation Office has been systematically using a ToC approach in project and sub-programme evaluations since 2009.
The ToC for each component of the CCSP, and then for the CCSP as a whole, was reconstructed based on a review of strategic documents and interviews with UNEP staff, using best practice in determining the correct results levels. Figure 6.3 presents the overall reconstructed ToC for the CCSP. The reconstructed ToC helped identify the expected outcomes of UNEP’s work on climate change and the intermediary changes between outcomes and desired impact. It thus allowed the team to cluster outputs and define summary direct outcome statements cutting across components, which proved very useful for framing data collection and synthesizing findings on sub-programme effectiveness.
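A reconstructed ToC of this kind is, in effect, a directed graph leading from outputs through outcomes and intermediate states to impact. The Python sketch below shows one way to represent such a graph and enumerate its causal pathways; the node labels are simplified paraphrases of Fig. 6.3, not the evaluation’s actual results statements.

```python
# A reconstructed Theory of Change as a directed graph (illustrative labels).
TOC_EDGES = {
    # outputs -> direct outcome
    "assessments and tools delivered": ["stakeholder capacity increased"],
    "knowledge networks established": ["stakeholder capacity increased"],
    # direct outcome -> medium-term outcome -> intermediate state -> impact
    "stakeholder capacity increased": ["climate responses integrated in national planning"],
    "climate responses integrated in national planning": ["practices change at scale"],
    "practices change at scale": ["reduced emissions and increased climate resilience"],
}

def causal_pathways(node, edges, path=()):
    """Enumerate all causal pathways from a node down to the impact level."""
    path = path + (node,)
    children = edges.get(node, [])
    if not children:
        yield path
    for child in children:
        yield from causal_pathways(child, edges, path)

for pathway in causal_pathways("assessments and tools delivered", TOC_EDGES):
    print(" -> ".join(pathway))
```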
The reconstruction of the ToC also helped to determine the key external factors affecting the achievement of outcomes, intermediary states and impact, namely the drivers that UNEP could influence through awareness raising, partnerships etc., and the assumptions that were outside UNEP’s control. As these were key determinants of the likelihood of impact, upscaling and sustainability of the sub-programme, it was important to identify them early on so that adequate information on their status could be collected in the course of the data collection phase.
The reconstructed ToC was also used to assess the internal logic and coherence of the formal results framework of the sub-programme. To this end, the formal results framework, comprising the Sub-programme objective, Expected Accomplishments and Programme of Work Outputs, was compared with the reconstructed ToC, and differences between the two were pointed out. For instance, in the formal results framework the results levels at which Expected Accomplishments and Programme of Work Outputs had been set were inconsistent between and within components, some cause-to-effect relationships were either non-existent or had been overlooked, and several key drivers and assumptions had been neglected.
As explained above, attribution of large-scale, global changes to UNEP’s work was difficult due to the largely normative nature of that work. Causal pathways from UNEP outputs to impact on the environment and human living conditions tended to be very long, with many external factors coming into play all along the way. The reconstructed ToC was used to assess the likelihood of impact by considering four distinct elements (a simple way of combining them is sketched after this list):
  • UNEP’s effectiveness in achieving its expected direct outcomes. This included verification of progress on output delivery and, most importantly, of the extent to which UNEP outputs led to increased stakeholder capacity, for instance: enhanced access to information and technological know-how, enabling policies and regulatory frameworks, or increased access to climate change finance.
  • The validity of the ToC. The purpose was to verify the causal connection between UNEP direct outcomes and results higher up the causal pathways. This was done by applying logic, through interviews with key stakeholders, and through analysis of evaluative evidence of progress towards impact at the country or lower geographical levels.
  • The presence of drivers and validity of assumptions. The evaluation had to collect adequate evidence, mostly through desk review and key informant interviews, to verify the presence of an adequate enabling environment in supported countries.
  • Early signs of large-scale progress on medium-term outcomes, intermediate states and impact. In itself this was not evidence of UNEP’s contribution to higher-level changes, but it was still necessary to inform stakeholders about global trends. Also, if UNEP’s contribution to direct outcomes had been established, the ToC shown to be very likely valid, the required drivers found present and the assumptions confirmed, then the likelihood of UNEP’s contribution to impact was very high, even though it remained unquantifiable.
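A minimal sketch of how these four elements might feed a qualitative judgement follows, assuming a simple three-point scale and a decision rule paraphrased from this list; neither is the evaluation’s formal scoring method.

```python
RATINGS = {"low": 0, "moderate": 1, "high": 2}

def likelihood_of_impact(direct_outcomes, toc_validity, drivers_assumptions, early_signs):
    """Each argument takes 'low', 'moderate' or 'high'."""
    core = (direct_outcomes, toc_validity, drivers_assumptions)
    # If outcomes are achieved, the ToC holds and drivers/assumptions are in
    # place, impact is judged very likely even though it stays unquantifiable.
    if all(RATINGS[r] == 2 for r in core):
        return "highly likely"
    if any(RATINGS[r] == 0 for r in core):
        return "unlikely"
    # Early signs of large-scale progress corroborate but cannot substitute
    # for the causal case built from the three core elements.
    return "likely" if RATINGS[early_signs] >= 1 else "moderately likely"

print(likelihood_of_impact("high", "high", "high", "moderate"))  # highly likely
```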

6.5 Data Sources

The evaluation team conducted a comprehensive desk review spread over the inception and main evaluation phases. During the inception phase, the desk review helped to reconstruct the ToC of the components and the sub-programme as a whole, and to refine the key areas of analysis and the evaluation approach, highlighting evaluation challenges and information gaps. During the main evaluation phase, it was essential for collecting information on achievements, impact, sustainability and upscaling, and the main factors affecting performance, while also leaving room for unanticipated results. The evaluation team conducted an in-depth analysis of key CCSP documents: background documents on climate change science and technology, the UNFCCC process and climate change finance, UNEP strategy and planning documents, evaluation reports (by the UNEP Evaluation Office and UNEP partners), project design documents, progress reports and so on.
The evaluation team also conducted a large number of interviews with UNEP staff and managers at headquarters, in the divisions and branches concerned, and in regional and country offices. Country visits were organized to conduct interviews with government officials, NGOs, development partners, and recipients of UNEP technical and/or financial support, which enabled the evaluation team to deepen its analysis and understanding of the key internal and external factors affecting performance. The six country visits allowed the evaluation team to gauge how beneficiaries and other key stakeholders perceived programme effectiveness, sustainability and likelihood of impact. The country visits also helped the evaluation team to assess synergies and complementarities between UNEP climate change interventions, and to address cross-cutting issues such as gender.
The evaluation further included a staff and partner perception survey. The purpose of the survey was to collect perceptions on sub-programme relevance and effectiveness, and on key factors affecting performance such as communication and coordination between divisions, inclusiveness within UNEP in determining work plans and budgets, the human and financial resources devoted to the CCSP and its components, engagement with partners, and monitoring and reporting systems. The survey was conducted online using the SurveyMonkey platform. Responses were received from 56 UNEP staff and managers – an acceptable response rate of about 40 %. Only three partners responded to the survey – a response rate of less than 15 %.
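For readers who want to check the coverage implied by these figures, a back-of-the-envelope computation follows; the invitation counts are approximations derived from the stated response rates, not numbers reported by the evaluation.

```python
# Implied survey invitations, derived from the response counts and rates above.
staff_responses, staff_rate = 56, 0.40
partner_responses, partner_rate = 3, 0.15
print(f"~{staff_responses / staff_rate:.0f} staff invited")         # ~140
print(f">{partner_responses / partner_rate:.0f} partners invited")  # >20, as the rate was below 15 %
```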

6.6 Evaluation Process

As a first deliverable, the evaluation team produced an inception report based on an initial desk review and introductory interviews within UNEP. It included a more detailed presentation of the evaluation background (global context, programme framework, institutional arrangements and project portfolio); a draft Theory of Change of the sub-programme components; and the evaluation framework (a detailed description of the methodology and analytical tools that the evaluation would use to answer the evaluation questions). The inception report was first reviewed by the Evaluation Office and then shared for comments with the Sub-programme Coordinator and the heads of functional units involved in the sub-programme.
The data collection phase for the evaluation was expected to take place over a relatively short timeframe from January to April 2013. However, some country visits had to be rescheduled due to unavailability of key persons or conflicting schedules within the evaluation team, prolonging the data collection until June 2013. The evaluation team prepared country case studies and component working papers, which went through several rounds of comments from the Evaluation Office (for quality assurance) and UNEP stakeholders (for fact checking). The main report was drafted by November 2013, but also required a series of reviews by the Evaluation Office and subsequent revisions, so that it was shared within UNEP for comments as late as February 2014. Considering that the period covered by the evaluation ended on 31 December 2012, there was a time lag of more than 1 year between much of the information collected for the evaluation and the distribution of its first draft report. During the first half of 2014, comments were received from UNEP staff and data from the UNEP Programme Performance Report 2012–2013 was incorporated where appropriate to make the report as up-to-date as possible. Because the consultants’ team had been disbanded by mid-2014, finalisation of the report was done internally in the Evaluation Office. The report was finally published in January 2015.

6.7 Lessons Learned on the Evaluation Approach

This evaluation has shown the importance of developing an appropriate analytical framework, well suited to the scope and complexity of the object of evaluation. The analytical framework and evaluation approach used for the UNEP Climate Change Sub-programme Evaluation, combining three interlinked areas of focus (strategic relevance, sub-programme performance and factors affecting performance), five concentric units of analysis (UNEP as a whole, sub-programme, component, country and project) and a Theory of Change approach, allowed the evaluation team to cover the standard evaluation criteria in a comprehensive but concise manner, remaining strategic without drowning in detail.
The ToC approach helped make a credible assessment of UNEP’s contribution towards impact, sustainability and up-scaling, but did not allow this contribution to be quantified. In other words, the evaluation could not determine to what extent higher-level changes beyond stakeholder capacity (direct outcomes), such as changes in environmental management practices or greenhouse gas emissions, could be attributed to UNEP’s efforts alone, or which changes might have happened anyway. In any case, a credible attribution of impact at the sub-programme or component level would have been impossible without extensive impact assessments at the country or project level, which are currently not available in UNEP and could not realistically have been built into the sub-programme evaluation framework.
There appears to be a trade-off between the time invested in quality assurance and stakeholder involvement during the evaluation process, on the one hand, and the currency of the information provided and sustained stakeholder interest in the evaluation, on the other. Strong internal stakeholder involvement during the inception, data collection and analysis phases of the evaluation, through interviews, discussions, surveys and comments on intermediate products, boosted learning within UNEP during the evaluation process. However, the length of the evaluation process, due in part to the high quality standards applied by the Evaluation Office and the time required to receive stakeholder comments on all evaluation products, created a considerable time lag between the data collection phase and the distribution of the draft main report. This had two consequences: the information presented in the draft main report was more than a year old, and internal stakeholder interest in the main report, when it was finally shared within UNEP, appeared to be much lower than it had been for the intermediate evaluation products.
The evaluation team decided to cover the cross-cutting Science and Outreach component as part of the three other components rather than separately, as an acceptable way of dealing with the human resource and time constraints within the team. This was reasonable in principle, but as a result some high-visibility assessment products developed jointly by different UNEP units under this component were not included in the project sample and therefore received only cursory treatment in the report. This undervalued some important UNEP-wide efforts and was also a missed opportunity to learn lessons from cross-divisional collaboration. While it might not have been necessary to assess the Science and Outreach component in the same depth as the others, one or two projects from this component should have been included in the project sample.
As acknowledged in the evaluation report7 under the section presenting the limitations of the evaluation, the sample of country case studies (six in total – or only one for most regions) was too small. Despite the logical and practical country selection criteria, this sample could not provide a representative and credible picture of UNEP’s strategic relevance and performance at the country level. A larger sample would, however, not have been possible within the budget. An alternative approach could have been to base the country case studies on information collected through a country questionnaire sent by email, more in-depth desk review and interviews via Skype or video-link. A rough cost comparison with the actual approach suggests that about four additional country case studies could have been prepared this way, bringing the sample to a more representative two case studies per region.
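To make the cost trade-off concrete, the arithmetic might look as follows; the relative cost of a remote case study is a purely hypothetical assumption, since the chapter quotes no cost figures.

```python
# Hypothetical comparison: remote, desk-based case studies vs. country visits.
visit_cost = 1.0           # normalised cost of one visited case study
remote_cost = 0.6          # assumed relative cost of a remote case study
envelope = 6 * visit_cost  # the budget that funded the six actual visits

remote_only = round(envelope / remote_cost)
print(remote_only, "remote case studies for the same envelope")  # 10, i.e. 2 per region
```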
As also noted in the evaluation report, the evaluation would have benefited from more interviews with global partners and key informants outside UNEP with a good understanding of the global climate change arena. These would have increased the diversity and credibility of the views expressed in the evaluation and, possibly, generated more strategic recommendations. With hindsight, though some interesting views from partners were collected, the perception survey was not the most appropriate tool for exploring these views and tapping partners’ ideas on how UNEP’s relevance and results could be enhanced. Alternatively, the evaluation team could have conducted a series of well-facilitated focus group discussions or a Delphi exercise with key resource persons. These could have yielded more credible findings but would have required additional resources.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the work’s Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work’s Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.
Footnotes
1. UNEP 2010, Climate Change Strategy for the UNEP Programme of Work 2010–2011. Web link: http://www.unep.org/pdf/UNEP_CC_STRATEGY_web.pdf
2. UNEP 2009, United Nations Environment Programme Medium-term Strategy 2010–2013: Environment for Development. Web link: http://www.unep.org/PDF/FinalMTSGCSS-X-8.pdf
4. UNEG 2012, Professional peer review of the evaluation function, United Nations Environment Programme. Web link: www.uneval.org/document/download/1527
5. UNEP Evaluation Office 2011, 2010–2011 Evaluation Synthesis Report, pp. 54–60. Web link: http://www.unep.org/eou/Portals/52/Reports/2010-2011_Synthesis%20Rpt(E).pdf
6. GEF Evaluation Office 2009, Fourth overall performance study of the GEF: The ROtI Handbook: Towards Enhancing the Impacts of Environmental Projects, Methodological Paper #2. Web link: https://www.thegef.org/gef/sites/thegef.org/files/documents/M2_ROtI%20Handbook.pdf
7. UNEP Evaluation Office 2014/2015, Evaluation of the UNEP Sub-programme on Climate Change: Final report. Web link: http://www.unep.org/eou/Portals/52/SPE%20Climate%20Change.pdf
Metadata
Title: An Analytical Framework for Evaluating a Diverse Climate Change Portfolio
Author: Michael Carbon
Copyright year: 2017
DOI: https://doi.org/10.1007/978-3-319-43702-6_6