Following these disappointing results, the Commission and other EU bodies experimented in the 2000s with other potential mechanisms for EPI, including Thematic Strategies, a new “Lisbon Strategy,” and a new “impact assessment” procedure. By and large, however, these efforts have been hampered by two key features. First, most have been “soft” along the dimensions specified above, which has limited their impact in sectoral DGs and Council formations. Second, these procedures have forced environmental officials to participate in cross-sectoral groups and networks in which the environment is not the primary policy consideration. Indeed, many scholars have identified a shift over the past decade, in which cross-cutting mandates increasingly emphasize economic competitiveness as the overriding policy concern, relegating environmental and other considerations to secondary status.
4.4.1 The Sixth EAP and the Thematic Strategies
The decade between the fifth EAP, adopted in 1992, and the sixth, adopted in 2002, had witnessed a sustained effort by the Commission to promote EPI, only to see both its internal reforms and the Cardiff process falter. The sixth EAP, in this context, retained EPI as an ambition (European Union 2002), but proposed to pursue that ambition through yet another, “radically different” (Wilkinson 2007: 5) form, namely the creation of a series of “Thematic Strategies” (TSs) that would bring together policy makers across multiple issue-areas in an attempt to formulate coherent, integrated policies on seven cross-cutting issues: air pollution, waste prevention and recycling, the marine environment, soil, pesticides, natural resources, and the urban environment.
In principle, as Ingmar von Homeyer (2007: 8) points out in an excellent study, the Thematic Strategies might be expected to offer an ideal forum for environmental policy integration. And indeed, the strategies, elaborated between 2002 and 2006, were formulated in large part through inter-service committees and working groups, and following extensive consultation with member-state officials, industry, and other civil-society groups (Homeyer 2007: 16). And yet, Homeyer and other scholars find, the process of extensive inter-service and civil-society consultation resulted in each case in a very substantial weakening of the Commission’s original proposals, which in all but one case (air pollution) lacked the specific targets called for in the sixth EAP (Homeyer 2007: 8, 18). Indeed, the various Thematic Strategies employ mostly soft-law, “new governance” instruments, with few proposals for binding regulations. Of the seven TSs, five called only for “framework” directives, which provide member states with unusually great leeway in implementation, and the other two called for no legally binding legislation at all (Homeyer 2007: 10).
In light of this evidence, Wilkinson (2007: 3) has claimed that the disappointing terms and implementation of the TSs “derive from the very logic of EPI, which ultimately foresees a sharing of responsibility for the development of environment-related policies with non-environmental actors.” In fact, we would suggest, the Thematic Strategies differ fundamentally from the Commission’s original EPI mandate, and indeed from the other cross-cutting mandates considered in this article, all of which place what Lafferty and Hovden (2003: 9) call “principled priority” on a particular policy concern, such as the environment, which other sectoral DGs are expected to take on board. By contrast, the inter-service groups formed under the Thematic Strategies placed no such priority on the environment per se, reducing DG Environment to one among many DGs, forced to defend environmental considerations against competing sectoral concerns. Indeed, this was to be the pattern for other initiatives later in the decade, with similar results.
4.4.3 Impact Assessment
We have already alluded to the use of strategic environmental assessments as potential tools for EPI, noting that the use of these tools in preparing Commission proposals had generally been voluntary and limited. In recent years, however, the Commission has embraced a broader cross-cutting mandate for “integrated” impact assessment (IA), in which significant policy proposals would be subject to prior assessment across a range of economic, social and environmental criteria. In the context of this study, IA represents a comparatively, and increasingly, “hard” instrument for mainstreaming multiple criteria in all EU policies—although the specific place of environmental considerations in IA, as in the Lisbon Strategy, remains unclear.
The lineage of impact assessment can be traced in part to early use of environmental impact assessment in the United States and Europe, but also to the requirement, first imposed by the Reagan Administration, that US regulatory agencies undertake Regulatory Impact Assessments (RIAs) to weigh the costs and benefits of proposed new regulations. By the mid-1990s, the OECD had established guidelines for best practice in impact assessment, and by the 2000s every EU member state had put in place some version of impact assessment.
The push for integrated impact assessment at the EU level took place in the early 2000s, shaped both by the 2001 Sustainable Development Strategy and by the Commission’s “Better Regulation” agenda, adopted in response to criticisms of over-regulation from Brussels (Franz and Kirkpatrick 2007: 145). As laid out in the Commission’s 2002 communication on impact assessment, the new IA procedures were intended to replace all previous single-issue assessments and assess the potential economic, social and environmental impacts of significant proposed legislation, regulations, and policy proposals. Following the SDS, impact assessments were expected to be “balanced” among the three fields (a point to which we shall return presently). In terms of coverage, the new IA system would be mandatory for all items mentioned in the Commission’s annual legislative work program as well as for other potential high-impact proposals and regulations to be identified by the Commission (Commission 2002).
In terms of the actual procedure, the Commission’s original 2002 framework provided that the lead DG on any given proposal should also take the lead in preparing a “preliminary” IA, possibly with the support of an inter-service support group. This preliminary IA would then be submitted to the full College of Commissioners, which could ask for an “extended” IA for proposals likely to have major economic, social and/or environmental impacts.
Commission DGs would be guided in the conduct of IAs by a series of Guidelines, published in 2002 and revised in 2005, 2006, and 2009.
The objectives of the new impact assessment were multiple (Bäcklund 2009), including, inter alia: improving the quality of EU regulation by providing precise analyses of the direct and indirect impacts of proposed policies as well as of alternative policy options; improving inter-departmental coordination within the Commission; and increasing the transparency and legitimacy of EU regulation by incorporating multiple stakeholders and making all impact assessments publicly available on the Commission’s impact assessment website.
From our perspective in this article, the institutional rules for impact assessment were “harder” than the Commission’s earlier EPI mandate, since IAs were mandatory for at least the major proposals included in the Commission’s annual work program, and no longer left to the discretion of the various DGs and services. Nevertheless, the IA system established in 2002 was still “soft” along a number of dimensions. At least initially, the lead DG for a given proposal was assigned the lead role in preparing IAs, with only weak provisions on inter-service coordination, which remained optional. Furthermore, the initial IA Guidelines lacked precise instructions for DGs, implicitly giving wide discretion to the services preparing the assessments. Finally, by contrast with the US system, where the Office of Management and Budget (OMB) provided a centralized and hierarchical quality-control system for RIAs coming from executive agencies, the EU system was more decentralized, with the Commission Secretariat General given the power to establish and revise the IA Guidelines but not to police the quality of assessments (The Evaluation Partnership 2007: 13).
Perhaps not surprisingly in light of these features, early evaluations of EU impact assessments during the first several years (2003–2005) were largely critical, with several content analyses finding that the first IAs following the adoption of the new mandate rarely offered quantitative and monetized assessments, often failed to consider the costs and benefits of alternative approaches, and considered the benefits of proposed policies more consistently than the costs (Wilkinson et al. 2004; Lee and Kirkpatrick 2006; Renda 2006). In 2006, the Commission commissioned an external evaluation of the IA program, which identified a number of weaknesses in both the process and the content of the IAs adopted during the first 3 years of the program, and proposed a number of reforms, including the establishment of “sufficient sanctioning mechanisms” to impose quality controls on the various DGs and services (The Evaluation Partnership 2007: 14).
Indeed, one of the most striking features of EU impact assessment is the extent to which the Commission has repeatedly reformed and “hardened” the process, offering stronger centralized control, clearer guidance, harder incentives, greater accountability, and greater administrative capacity over time. For example, the Commission issued revised Guidelines for impact assessment in 2005, 2006 and 2009, progressively clarifying procedures for the services in preparing IA, including the requirement to establish mandatory Inter-Service Steering Groups (ISSGs) for each assessment.
Perhaps most significantly, the Commission introduced, in November 2006, a new, five-member Impact Assessment Board (IAB), whose members serve in their personal capacity and answer only to the President of the Commission. The primary purpose of the Board, according to its mandate, is to improve the quality of Commission impact assessments, primarily through its review mechanism whereby the Board reviews and comments on draft IAs and may request resubmission of IAs deemed to suffer from serious problems. The Board, as the Commission admitted in its 2010 annual review, does not have the statutory right “to block a proposal from being submitted for political examination because the impact assessment is of insufficient quality.”
The Commission is, however, fully informed about Board opinions. The fact that the Board’s opinions are formally part of Commission decision-making procedures and are published provides an incentive for Commission services to make the improvements to the impact assessments that the Board recommends (Commission 2010: 19).
In practice, the IAB has operated in large part as a force for centralization, through which President José Manuel Barroso’s center-right Commission has imposed his Better Regulation agenda on potentially reluctant Commission services. In the words of one senior Commission official, “these are all Barroso’s people and they can stop anything” (quoted in Peterson 2008: 764).
Concretely, the Board conducts a mandatory review of every draft impact assessment prepared in the various DGs and services, and it has been increasingly bold in returning draft IAs to the services for revision, with 37% of all draft IAs returned in 2009, and with all Commission services required to explain in their revised IAs how they have incorporated the Board’s recommendations (Commission 2010: 6, 12). The Board has also, in a series of annual reports and revised guidelines, called for greater quantification of costs and benefits, more open consultation with stakeholders, and greater clarity in the presentation of findings in executive summaries. By contrast with the weak or nonexistent coordinating roles of DG Employment and Social Affairs and DG Environment in the gender mainstreaming and EPI cases, respectively, the IAB enjoys strong coordinating powers and wields hard incentives over sectoral DGs.
Although the IAB has been in operation for only 3 years at this writing, early evidence from scholarship and from the Commission’s own reports suggests that the practice of impact assessment in the Commission has improved in ways sought by the Board. In its 2010 report, the IAB finds a general improvement in the quality of IAs across the board (Commission 2010: 13). In addition, the Board identifies a number of procedural steps taken within and across DGs “to strengthen the impact assessment culture in services,” including the strengthening of IA units and procedures in individual DGs; increased IA training for new and existing Commission officials; improved operation of the ISSGs; and the establishment of an inter-service Impact Assessment Working Group that met three times during 2009 (Commission 2010: 15). Along similar lines, a scholarly assessment of IAs undertaken since the 2005 reforms found a gradual improvement in the quality of IAs, which rivaled that of US RIAs for proposals estimated to have high impacts (Cecot et al. 2008). The reforms of the IA process, and indeed the process itself, have had only a few years to operate, but early indications are that the hardening of the process has already improved its effectiveness.
However, it is less clear that IA has operated as an effective tool of environmental policy integration. Several years after the inception of the IA system, several scholarly analyses criticized the Commission’s 2002 Guidelines, contending that the indicators for economic impact were set down with far greater clarity than those for environmental or social impacts. Content analysis of the early IAs, moreover, found that the analyses were unbalanced, with environmental and social effects analyzed in less detail than economic impacts, or not at all (Wilkinson et al. 2004; IMV 2006). There is, in the view of many authors, a fundamental tension in the EU impact assessment system, which is supposed simultaneously to operationalize the EU’s Sustainable Development Strategy and to simplify and reduce the administrative costs of regulation in line with the Lisbon Strategy and the Better Regulation initiative, with the latter reportedly dominating the process during its early years (Ruddy and Hilty 2008: 93).
Moreover, by comparison with general assessments of the IA system, which find improvements over time, recent scholarly studies suggest that the integration of environmental considerations may actually have worsened over time (Wilkinson 2007). The revised Guidelines, for example, have placed progressively greater emphasis on the economic aspects of impact assessment, while doing comparatively little to strengthen or clarify the rules relating to environmental concerns, leading some scholars to conclude that the reformed IA system systematically privileges jobs and growth over environmental considerations. Franz and Kirkpatrick (2007: 153, 155), for example, undertook a content analysis of a sample of 13 impact assessments conducted after the 2005 reforms, and found that economic impact received “high coverage” in 10 of the 13 assessments, while only 5 assessments gave any consideration at all to environmental impact, and only 3 of those in any detail. Environmental sustainability, they conclude, remains “poorly considered” in the post-reform IAs.
In sum, the EU’s commitment to environmental policy integration is one of the longest-running cross-cutting mandates in any international organization, running across several decades and a number of distinct initiatives within and outside the Commission. As with the gender mainstreaming mandate examined above, however, most of these efforts have relied on soft incentives, favoring “persuasion rather than power” in the effort to change the behavior of sectoral policy-makers (Wilkinson 1998: 113), the primary exception being the relatively new impact assessment procedure. Indeed, Wilkinson (2007: 7) argues, “EU environmental policy seems to be retreating increasingly into the realm of soft instruments inspired by the open method of coordination.” Wilkinson leaves open the possibility that “soft” approaches may eventually yield results, calling for more research into the question, yet he concludes that, “unless there are ‘win-win’ opportunities or external pressures that provide incentives to specific ‘sectoral’ DGs to integrate environmental considerations, such new modes of governance are likely to have limited effectiveness” (Wilkinson 2007: 8). In terms of outcomes, finally, it is difficult to disagree with the Commission’s assessment that, “There has been only limited progress with the fundamental issues of integrating environmental concerns into other policy areas” (Commission 2007c: 17), or with Jordan et al.’s (2008: 172) conclusion that, “Whichever way you look at it, the environmental sector has largely failed to get the sectors to ‘own’ (let alone implement) EPI.”