Introduction

The introduction of “evidence-based medicine” in the 1980s sparked a revolution in medical practice and training when carefully designed controlled trials were used to assess the efficacy of existing medical practices. En masse, the studies began producing results that were at odds with years of practice; practice that was largely based on tradition, anecdote, bias and other psychological “traps”.

The new model was met with resistance and hostility by the medical community, as it undermined leadership and authority and often contradicted the training, knowledge and firm beliefs of the establishment. It challenged the long-held tradition of “eminence-based medicine”, where a doctor’s pre-eminence in all matters was assumed, their advice never questioned, and their practice considered an art form as much as a science. The clashes in the field were significant: evidence at odds with instinct proved to be a tough pill to swallow.

Higher education has long been in the domain of “eminence-based practice”, where classically trained professors, elite institutions and historical reverence guide most practices inside and outside of the classroom. Education is largely considered an art, with interpersonal actions in the classroom evoking sentiments of a mystical “black box” of teaching and learning upon which no outsider should tread.

Hence, the recent introduction of transparency tools based on evaluation and assessment science into educational practices is as unpalatable to the higher education community today as it was to the medical community 40 years ago. Similarly unpleasant to the educational community is the notion that business practices are present in higher education and that there are producers, providers and consumers: that there is, in fact, a “market for learning”, and that the traditional “sage on the stage” has decreasing value in this market. In one of the oldest pillars of society, it is understandable that these changes create fear, scepticism and opposition.

Nonetheless, it was in the difficult days of the medical practice paradigm shift that a leading researcher responded most pointedly to criticisms of evidence-based practice: “when patients start complaining about the objective of evidence-based medicine, then one should take the criticisms seriously, until then, consider it vested interests playing out” (Freakonomics 2017). In the realm of higher education, we must keep student learning as the focus of all activities, and ensure that student needs trump all vested interests.

Accountability, Quality Assurance and Learning Outcomes

Through the Bologna Process, the EU aimed to develop the European Higher Education Area (EHEA) by 2010; progress toward this goal included initiatives supporting broad agreements on learning outcomes, increasing standardisation of curriculum for the purposes of comparability, and devising common methods for reporting on skills and competencies acquired through studies. “Beyond 2020”, a stocktaking report on the Bologna Process, reconfirmed these goals, highlighted the accomplishments of the Process, and reiterated the goals of transparency: to foster the competitiveness and attractiveness of Europe and to support student mobility and employability (Benelux Bologna Secretariat 2009).

Accountability and quality assurance are highlighted in the 2020 report as playing an emerging role in supporting the quality of teaching and learning as well as “providing information about quality and standards as well as an objective and developmental commentary” (Benelux Bologna Secretariat 2009: 6), and in 2012 Hazelkorn unpacked accountability and quality assurance as instruments of transparency. Ultimately, quality assurance agencies (QAAs) are responsible for supporting transparency by providing reliable and comparable information on educational quality. They hold the responsibility to ensure that students are receiving a quality education and that institutions are operating to expected standards.

In a 2014 study, the European Association for Quality Assurance in Higher Education (ENQA) examined the role of quality assurance reports as tools of transparency (Bach et al. 2014). Through a survey of QAAs, it explored the “usability” of the documents for students, employers and higher education institutions. One survey question asked what information provided in QAA reports was needed by the different groups (i.e. students, employers, etc.) in order to make decisions. Sixteen options were presented: Content of study programmes, Accreditation status, Strategic planning, Internal quality assurance system, Qualifications of teaching staff, Student support system, Number of grants/publications/citations, Reputation of teaching staff, Employability/employment of graduates, Application and admission standards, Condition of infrastructure, Ability to respond to diverse student needs, History and tradition, Financial resources, Institution’s position in league tables, and Other.

Note that not one of these options relates to educational quality. A possible reason it was not included as an option is that educational quality is not explicitly captured in the ENQA reports. In fact, despite the goal of supporting, improving and increasing the transparency of educational quality, quality assurance systems are still struggling to capture it fairly (Dill 2014; Krzykowski and Kinser 2014; Lennon 2016).

Teichler and Schomburg (2013) suggest that incorporating learning outcomes into accountability regimes provides quantifiable information on the quality of education. Learning outcomes activities aim to provide clearly articulated expectations of student knowledge, skills and abilities, with associated demonstrations of achievement that can be used as indicators of educational quality. They can be understood externally, used comparatively, and are appropriate for accountability and quality assurance frameworks.

Coates provides an analysis of the challenges and opportunities for transparency in higher education in his 2016 book “The Market for Learning”, speaking particularly to the importance of finding valuable indicators of quality. The challenge is seen most clearly in the struggle to establish learning outcomes as a suitable means of providing transparent information on educational quality. While learning outcomes have been touted as making educational quality transparent, the reality of their effectiveness as a transparency tool has not been verified.

The next section of this paper reviews the ways in which the literature proposes learning outcomes are useful, and the subsequent section examines the activities of the European quality assurance agencies as they integrate learning outcomes into their systems. The purpose is to understand what information learning outcomes are currently providing on educational quality.

Proposed Benefits of Learning Outcomes

When defined, learning outcomes support a clear understanding of educational outcomes for students, employers and the public at large. They translate what occurs in classrooms into identifiable capacities for students. Learning outcomes remove the “black box” of education by clarifying exactly what skills will be gained (Hattie 2009a). For students, they provide information about educational pathways by describing the key elements of a program or credential, which enhances the ability to make sound educational decisions (Banta and Blaich 2010). Telling potential students what skills they will achieve upon graduation allows them to make informed choices about their educational options. Bouder (2003; in Young 2003) suggests that established learning outcomes can be instruments of transparent communication, providing students with a map of various credential options and where they may lead.

Learning outcomes can also support system-level coordination. In many cases, the language used differs between sectors, be it colleges or universities, and between disciplines: a range of terms may be used to describe similar concepts or, alternatively, the same word may be used for different purposes. Simply finding a common language to articulate student capacities, and then identifying the level of mastery expected in each credential, enhances understanding of the intentions of the sectors and helps locate precisely where the similarities and differences lie (Lennon et al. 2014; Lokhoff et al. 2010).

When learning outcomes are mapped and embedded, they allow for improved coordination in student progression, credit transfer and articulation agreements (Allais 2010). This is possible because learning outcomes more readily “translate the aims of a course or programme of study into a set of competencies”, making it easier to give credit for learning acquired at another institution, removing barriers to student mobility, and supporting lifelong learning (Roberts 2008: 4). When established, learning outcomes provide confidence that the student has achieved certain expectations, thus lowering the need for blind trust in other educational providers. Similarly, they help to protect students from rogue providers in increasingly complex systems (Middlehurst 2011).

Transparent learning outcomes also support employability. When students are knowledgeable about their capacities, they can recognise the applicability and transferability of their credential to a variety of employment options and also possess the language to describe their skills (European Commission 2014). Learning outcomes support employers in their hiring processes, as they can consider the types of skills they require an employee to possess and identify the corresponding credential or program (Allais 2010). This is particularly valuable given the variety of educational credentials available; what Allais refers to as the “jungle of qualifications” (2010: 49).

This transparency can then be used to inform both education and the labour market about mismatches. There is considerable discussion about a disconnection between the abilities of students and the needs of employers (see, for example, Allen and de Weert 2007; Lennon 2010; Miner 2010). Whether these concerns are founded or not (Handel 2003), and whether or not one believes that it is the role of education to prepare students for the labour market, articulating the abilities graduates do have is a way to begin conversations to identify gaps in expectations (Roberts 2008).

When learning outcomes are established, it is also possible to coordinate and compare educational programming internationally (Lokhoff et al. 2010; Wagenaar 2013). This is particularly useful in cases where the system design of credential offerings, nomenclature, length and institutional types differ. Coordination, whether through restructuring (as in the case of the EU) or through identifying compatibility (as in the case of Canada), supports the integration and coordination of national systems in order to “improve the transparency, access, progression and quality of qualifications in relation to the labour market and civil society” (European Commission 2008: 11).

Hence, the literature suggests that learning outcomes demystify the processes and outcomes of education to the benefit of the student, program, institution, national government and international community. The logical conclusion is that this will both improve educational quality and support national economies. Clarifying what is expected of graduates, ensuring programs provide the opportunities to gain the skills, and then measuring and demonstrating success, of both students and programs, is expected to significantly impact education systems and nations (Allais 2007, 2010; Allais et al. 2009a, b; Young 2007).

Study Methodology

This paper presents select data from a broader study of international quality assurance agencies’ activities in learning outcomes. The study employed a survey, case studies and a meta-evaluation to collect information on policy activities and outcomes. In 2015, members of the International Network for Quality Assurance Agencies in Higher Education (INQAAHE) and the Council for Higher Education Accreditation International Quality Group (CIQG) were invited to participate in a short survey. The respondents were geographically diverse (N = 65), coming from 43 different countries around the world, with the majority originating in Europe (31%), Asia (21%) and North America (19%). The intention of the survey was to collect information on the state of learning outcomes policies and activities on an international scale and to uncover policy goals, activities, and any evaluations that may have occurred.

The second phase of research identified agencies that had conducted evaluations of their learning outcomes policies or activities and examined the results of those evaluations as case studies. Nine cases were available for analysis; these were coded to determine their structural and policy choices in learning outcomes initiatives and whether the evaluation found positive, negative or neutral results.

In the third phase of research, those nine research evaluations (“cases”) were considered through a meta-evaluation. A meta-evaluation is a process by which findings from existing evaluations are pooled (Pawson and Tilley 1997; Rossi et al. 2004). The meta-evaluation was applied to the case study findings in order to distil common patterns of impact (positive, neutral/undetermined or negative).
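
To make the pooling step concrete, the following minimal sketch (in Python) shows one way coded implications could be aggregated across cases. The case names echo those used later in this paper, but the target labels and impact codings are invented purely for illustration; they are not the study’s actual data or coding instrument.

    # A minimal sketch of the meta-evaluation pooling step; the target
    # labels and impact codings below are invented for illustration.
    from collections import Counter

    # Each case maps its evaluated policy targets to a coded implication.
    cases = {
        "NOQA Denmark": {"transparency": "positive", "labour market": "negative"},
        "QAA FHEQ":     {"transparency": "positive", "credit transfer": "neutral"},
        "AIKNC":        {"transparency": "neutral",  "system design": "neutral"},
    }

    # Pool the coded implications across all cases to surface common patterns.
    pooled = Counter(
        impact for targets in cases.values() for impact in targets.values()
    )
    print(pooled)  # Counter({'neutral': 3, 'positive': 2, 'negative': 1})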

This report focuses on the European participants and cases of the study. The purpose is to distil unique characteristics of the European quality assurance experiences that might support better policy development in the region.

European Experience with Learning Outcomes Policies

Survey Results

The 20 European agencies that completed the survey came from Belgium, Croatia, Cyprus, France, Germany, Kosovo, Latvia, the Netherlands, Portugal, Slovenia, and the United Kingdom. Under the European Higher Education Area umbrella, which employs the European Qualifications Framework, 55% of the agencies indicated that they operate under a National Qualifications Framework (NQF), while the rest reported frameworks still under development (globally, 33% of jurisdictions had an NQF). Yet only 25% of the European agencies use the NQF as a tool for learning outcomes in their institutions; another 25% are developing their own internal learning outcomes within the agency, and 40% ask the institutions to develop their own statements. This is in line with global patterns.

When asked about their goals for learning outcomes policies, the European members were more focused on labour market alignment and transparency than their global counterparts, and less focused on the teaching and learning aspects of learning outcomes (see Fig. 1). When dealing with the institutions under their jurisdiction, over 40% of the European QAAs require evidence of student achievement, compared to 28% for the rest of the world. Twenty-five percent of the European agencies use standard assessments, and another 15% use classroom-based assessments as evidence. This is generally in line with the global pattern; however, QAAs elsewhere are more focused on the use of e-portfolios and badges. This difference is likely due to the Europe-wide Europass, which already serves to make credentials transparent to employers.

Fig. 1: Goals for learning outcomes policies/strategies

When asked whether they undertake evaluations of their learning outcomes policies and strategies, 45% of the European agencies indicated that they are actively engaged in this type of work. Globally, the bulk of evaluations are focussed on the institutional impact of learning outcomes, with surprisingly few considering the financial implications of the activities (see Fig. 2). Within the European countries, nearly all the evaluations focus on the institutional level: 33% of the research examines institutional impact, while 22% each examines institutional activities and interviews with administration, faculty, and staff. This focus on the institutional level is interesting considering that labour market alignment and transparency were the more commonly stated goals.

Fig. 2: Regional evaluation activities

Meta-Evaluation Results

Eight European cases, drawn from reports and publications that evaluated learning outcomes strategies, were examined as case studies and then pooled into the meta-evaluation in this research. The reports include the “Evaluation of the academic infrastructure: Final report” by the UK’s Quality Assurance Agency (QAA 2010), which separately examines the system-level framework for qualifications (QAA FHEQ), subject benchmark statements (QAA SBS), and programme-specific statements (QAA PS); each is examined separately in this analysis. The report “Learning outcomes in external quality assurance approaches: Investigating and discussing Nordic practices and developments” by the Nordic Quality Assurance Network in Higher Education provides case studies of Denmark, Norway, Finland, and Sweden (Hansen et al. 2013), which are addressed separately here. Because the reports examined quality assurance activities across entire countries (some of which have more than one agency), these cases are presented as NOQA Denmark, NOQA Norway, NOQA Finland, and NOQA Sweden. Finally, an unpublished document from the Latvian Higher Education Quality Evaluation Centre (AIKNC) provides a report on the Latvian experience of introducing learning outcomes into its higher education sectors (Dzelme, n.d.).

For a comprehensive analysis of the case studies and analysis of these learning outcomes evaluations see “In Search of Quality: Evaluating the Impact of Learning Outcomes Policies in Higher Education Regulation” (Lennon 2016).

A review of the eight research cases finds a wide range of organisational and policy features. Table 1 below shows the number of cases that discussed each factor as a goal or element of their policy. The characteristics are divided into two sections: structural features and policy choices. The structural features are fixed organisational factors; the policy choices are the targets, and thus the areas where success can be evaluated.

Table 1: Characteristics of case studies

There are some characteristics that may have an influence on the findings but cannot be included as factors for analysis. For example, most of the policies have existed for less than 10 years (the exception being the UK QAA, which has had policies in place since 1997), and it is difficult to ascertain the implications and impacts of educational policies, as change is often invisible, incremental, and slow (Kis 2005: 26). Also, two of the research evaluations were formative and six were summative; while the literature suggests summative research provides better insight into impact, formative research can provide valuable information, particularly when policies have not matured (Sursock 2011). Of the eight research evaluations, all but one agency worked with credential-level expectations, and seven focused on the program level. These foci are reflected in the level of expectation targeted, where six focused on learning outcomes at the national/jurisdictional level and eight worked on learning outcomes at the program level. The goals of the policies varied, though a third aimed to support “institutional improvement/quality”.

As the research was conducted by regulatory agencies on their own policies, it is understandable that seven studies discussed the impact on quality assurance as an “actor”. The one case that did not discuss this factor was the QAA PS, which is a program-level learning outcomes policy (though embedded in a broader policy). The range of target audiences demonstrates the variety of ways learning outcomes are intended to support success. The analysis also shows that the majority of research has been conducted on policies that articulate and implement learning outcomes rather than measure them.

Impact of Learning Outcomes Policies

To understand whether the policies succeeded in achieving their intended impact, the case studies were coded to determine whether a given feature was a target, and then whether that target was positively or negatively impacted, or whether the implications were neutral. This section first examines the overall success of the policies, where “success” is considered through the relative number of positive, neutral and negative implications. The section then moves on to observe whether, and how, factors were impacted by learning outcomes policies, and the relative success of policies containing each factor.

Figure 3 shows the implications of learning outcomes policies in each research case. Of the eight cases, QAA SBS and NOQA Denmark had more positive implications, whereas AIKNC, NOQA Finland and NOQA Sweden had a neutral impact on over half of their targets. Summed across all cases, “neutral or undetermined” was the most frequent result of the policy evaluations (n = 35).

Fig. 3: Impact of learning outcomes policies

The (N) associated with each case represents the number of policy targets evaluated in the research case, ranging from 6 to 18. A correlational analysis was used to examine the relationship between the number of targets and the number of positive, neutral and negative implications. The results indicate a positive but weak correlation between the number of targets and the number of positive implications (Spearman’s rho = 0.69), but no relationship with the number of negative or neutral implications.
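
The correlational check itself is straightforward to reproduce. The sketch below (continuing in Python) shows the computation with invented per-case counts, since the underlying coding table is not reproduced here; the rho of 0.69 reported above comes from the study’s own data.

    # A minimal sketch of the correlational analysis; the per-case counts
    # are invented for illustration and will not reproduce the study's rho.
    from scipy.stats import spearmanr

    n_targets  = [6, 7, 9, 11, 12, 14, 16, 18]  # policy targets per case
    n_positive = [1, 2, 2, 4, 3, 6, 5, 8]       # positive implications per case

    rho, p = spearmanr(n_targets, n_positive)
    print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")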

The correlation suggests that the more targets a policy intended to impact, the more likely the policy was to be successful. The NOQA Denmark policy, for example, positively impacted 55% of its targets. This is somewhat at odds with the policy evaluation literature, which suggests that a limited number of clear and focused targets produces better results (Rossi et al. 2004). One possible explanation is that the research cases were more likely to report positive findings than negative or neutral impacts. Another is that a higher number of targets simply increases the likelihood of achieving at least one, even if the overall success rate is relatively low.

Impact of Structural Features

In order to understand whether structural factors affected the success of policies, the relative proportion of implication types was examined across the cases (N) sharing each feature.
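
As a rough illustration of this computation, the sketch below pools the implication counts of the cases sharing a feature and reports each implication type as a share of all their targets; the feature grouping and the counts are hypothetical, chosen only to mirror the shape of the analysis.

    # A minimal sketch of the proportional-impact computation; the counts
    # below are hypothetical, not the study's coded data.
    def proportional_impact(cases_with_feature):
        """Share of each implication type across all targets of the cases
        that share a given structural feature."""
        total = sum(sum(case.values()) for case in cases_with_feature)
        return {
            impact: sum(case.get(impact, 0) for case in cases_with_feature) / total
            for impact in ("positive", "neutral", "negative")
        }

    # e.g. the cases whose policies target the student level:
    student_level_cases = [
        {"positive": 5, "neutral": 3, "negative": 1},
        {"positive": 4, "neutral": 4, "negative": 2},
    ]
    print(proportional_impact(student_level_cases))
    # {'positive': 0.47, 'neutral': 0.37, 'negative': 0.16} (approximately)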

The type of expectation in a learning outcomes policy is a structural component that is not embedded within the policy itself but is established through the organisational mandate and/or overriding policies (such as the EQF-LL). Figure 4 shows the proportional impact of the level of expectation associated with the policies, ranging from those that run across international boundaries to those that very specifically target a student in one course. It shows that, when combined, the five cases that targeted students (either across courses or within courses) had positive implications in over 40% of the factors addressed. NOQA Denmark and NOQA Sweden, for example, targeted students both in and across courses and were successful in achieving over 40% of all their targets. Alternatively, the six cases that focused on program-level outcomes had a positive impact in 30% of cases.

Fig. 4: Proportional impact by level of expectation

A similar examination of the data on the focus of the expectation found a negligible influence of the focus of the learning outcomes in the policy, whether generic, at the credential level or at the program level. This finding is somewhat unexpected, as the literature has suggested that focusing on generic skills is difficult and that targeting discipline-specific learning outcomes can be successful (Benjamin 2013; Lennon and Frank 2014; Lennon and Jonker 2014; Tremblay et al. 2012).

Impact of Policy Choices

In order to understand where the policies had an impact, this section examines the number and type of implications reported for each policy choice, as well as the proportional impact of policies that targeted that feature. Figure 5, for example, shows the number of instances where the goal of the policy was influenced in a positive, neutral or undetermined, or negative way. It shows that all of the policies that targeted teaching and learning (N = 3) were evaluated as having had a positive impact on teaching and learning. Alternatively, of those that targeted learning outcomes policies towards improving system design, three had negative results and one was neutral. Examining the results in this way provides insight into goal areas where policies may hold more promise.

Fig. 5: Impact of learning outcomes policy on goals

Another way to examine the impact of policy choice is to see how successful policies containing that choice were overall. Figure 6 shows that the four policies targeted at teaching and learning achieved a positive impact in 50% of all the features they targeted. Alternatively, the policies that focused on international coordination and comparison were successful in 31% of their targets. Together, Figs. 5 and 6 suggest that policies targeted at teaching and learning were more successful both at improving teaching and learning and at producing successful results overall. These findings correspond to existing literature that has found a direct effect of learning outcomes on student success (Hattie 2009a, b). Of the cases that targeted transparency, two were successful in impacting their targets, two were not, and one was unable to determine any impact.

Fig. 6: Proportion of total impact by goal

Positive implications of learning outcomes for transparency were found in the QAA Framework for Higher Education Qualifications (FHEQ), as it clarified the structure and nomenclature of the awards available. NOQA Denmark specifically examined the use of learning outcomes as a transparency tool. Learning outcomes were found useful for students, both as a signal of what they should expect from their program and in protecting students, as the activities identified programs that were not meeting standards. Also, “the method has succeeded in identifying programmes where the learning outcomes were not sufficiently supported by the structure and content of the programme or by sufficient resources at the institution” (Hansen et al. 2013: 21). Findings on the value of learning outcomes as a tool for employers and the labour market were less positive, suggesting that there were significant challenges in mapping programs to the labour market. This led to the suggestion that there should be more employer engagement in the assignment and assessment of required learning outcomes.

The one case that identified a negative implication for transparency was the QAA Programme Specific Statements (PS), where the primary goal of articulating learning outcomes was to support student and employer understanding of the courses. The evaluation determined that “in many cases programme specifications were not considered to be the most effective way for providing information for students or for employers” (QAA 2010: 7). The conclusion was that the information students required to inform choice was available in other forms that were more appropriate, more user-friendly and less expensive.

The Latvian Higher Education Quality Evaluation Centre (AIKNC) did not report any positive or negative impacts on transparency. The program under evaluation was trying to link its program and institutional learning outcomes to the European Qualifications Framework for Lifelong Learning (EQF-LL) and the Qualifications Framework for the European Higher Education Area (QF-EHEA). The findings of the formative research suggested difficulties in developing transparent learning outcomes that support the entire system, owing to the lack of cohesion between the national frameworks for vocational education and training and for higher education. The author was positive about the potential of learning outcomes but did not comment on the impact on transparency.

Finally, an examination of the impact on the target audience of the policy finds fairly neutral results: while a few cases supported institutional accountability, more cases found negative implications for program curriculum development (see Fig. 7). Overall, the case study analysis identified ten instances of negative impact on the policy target, six neutral and nine positive. This suggests there is limited positive impact on the target audience of the policy. A proportional evaluation also identified a negligible influence of target audience on overall success.

Fig. 7: Impact on the target audience

Summary

This paper presents the results of a survey, case studies and meta-evaluation of the existing research on learning outcomes policies in European higher education regulation. Using evidence collected from eight research evaluations, the analysis presented examples of characteristics, structural features and policy choices.

Nearly every type of goal, strategy and policy choice found in the literature (as discussed above) was present in the eight cases: the policies under examination were designed to target different levels, focuses and audiences. The goals of the policies varied, though five of the eight policies aimed to support “transparency”. This suggests that, within the sample of research cases, there was no clear pattern in what quality assurance policies intend to do or how they do it.

Examining the overall research results has also shown that the policies have been relatively unsuccessful in positively impacting their chosen targets, more often producing no change or a negative outcome on the intended objective. How the presence of each target affected the overall success of a policy was also considered, and again the results were largely negligible. While the policies that targeted teaching and learning seemed to achieve a positive impact more frequently, at best they were successful 50% of the time across the features they targeted, and statistically there was no difference. Overall, no policy choice had a higher success rate on either the target or the policy intention as a whole.

Summed across all cases, “neutral or undetermined” was the most frequent result of the policy evaluations. There are no significant or obvious patterns in how learning outcomes policies impact their targeted goals, nor evidence that certain policy choices produce more favourable impacts. Overall, the combined results of the evaluations suggest that learning outcomes policies are not having their intended impact, or at least have not yet been found to have the positive outcomes desired.

Implications

The research findings reveal that policies on learning outcomes in higher education regulation are not having their intended impact. This is a significant finding considering the amount of time, effort and political will being invested in learning outcomes policies as a result of the Bologna Process by national governments, quality assurance agencies and institutions. At each of these levels, there is continued investment in articulating learning outcomes, implementing them into programming, and demonstrating that they have been achieved. The true financial cost of these activities is not easily ascertained and, to date, there is no publicly available information or research on the topic; however, this study suggests that the impact of these activities is limited. The finding calls into question the value of learning outcomes as a means of contributing to higher education quality, regulation and transparency.

Yet, before discarding learning outcomes as a concept, it is more practical to first consider that the failure is a policy issue. There is a simple policy cycle by which any policy is formulated, implemented and evaluated (Cerych and Sabatier 1986; Coates and Lennon 2014; Inwood 2004). Findings from this research have identified issues with learning outcomes policies at each of the three stages: policies were misdirected in concept during formulation, misaligned in their planned activities and evaluation, or misapplied in implementation.

  • Policies are misdirected

One possible explanation for the relative neutrality of the policy impact is that the goals and expectations were misdirected: that there was a fundamental disconnect between the desired and the possible outcomes, such that the goals could never be achieved through the policy. Figure 5, for example, illustrates that learning outcomes policies targeted at improving system design and credit transfer were never successful. As this was one of the three most common goals, the finding suggests there may be issues with the concept behind the goal choice.

The case studies shed some light on how goal choice may influence success rates. The difference in the success of the QAA SBS and QAA FHEQ policies provides a clear example of how two policies with different goals had different outcomes. The policies were similar in many ways: they came from the same jurisdiction, had the same structural features, and each had a similar number of policy choices and targets. The FHEQ was established as a qualifications framework with goals to improve “Transparency”, “System design and credit transfer” and “International coordination and comparison”; overall, the policy had less than a 20% success rate. The QAA SBS focused on subject-based issues of “Teaching and learning”, “Institutional improvement/quality”, “System design and credit transfer”, and “Labour market alignment and economic development”; it positively impacted 80% of its targets.

The different outcomes of the two policies are remarkable, and yet are somewhat consistent with other research. Allais, for example, has contended that national qualifications frameworks are not achieving their system-level goals of improving qualification transparency or credit transfer decision-making (Allais 2010). Hattie, on the other hand, presented a meta-evaluation to show the positive impact of learning outcomes on teaching and learning (2009b).

Hence, although the findings of this research are only descriptive, and there were no statistically significant differences between the types of policy goals, it is reasonable to suggest that the goals of learning outcomes policies should be seriously considered prior to any planning or implementation. It is vital to ensure that goals reflect what is reasonably achievable.

  • Policies are misaligned

Literature suggests that any policy should have an established goal, long-term targets, short-term targets, benchmarks, and evaluations appropriate to capture change (Patton 1998; Rossi et al. 2004). There are, of course, variations on this, but the basic cycle is a feedback loop. In fact, it mimics the role of learning outcomes: establish the expectations, incorporate them into the programming, and measure whether students have gained the expected knowledge, skills and competencies. When one of those elements is misaligned, the cycle cannot work. For example, if learning outcomes are written but not implemented, there will likely be no change in student achievement. Similarly, no valuable information is gained if student achievement is measured but expectations and indicators of success are not clearly defined.

Furthermore, not only do the right steps have to be taken but the right decisions must be made when designing the policy: the policy choices must be able to lead to the desired outcomes. Examples from the case studies find this is not always happening. For example, NOQA Denmark noted that where the goal was to use learning outcomes as a transparency tool for employers and the labour market, the strategy did not involve employers or develop ways of demonstrating achievement to the labour market (focusing instead on curriculum mapping). In another example, QAA PS failed to achieve the goal of supporting transparency for students and employers, perhaps because it focused on writing program-specific outcomes for curriculum rather than focusing on outward facing activities of demonstrating achievement through something like an e-portfolio or learning passport.

  • Policies are misapplied

Even the best-planned policy can be misapplied, with implementation issues hindering success. For example, the NOQA Denmark research found it was a challenge for institutions to integrate and map the learning outcomes, particularly to the labour market. Similarly, the NOQA Finland case found it was difficult for programs to develop internal learning outcomes and align their programming with the NQF regulations. Moreover, even the auditors tasked with judging the quality of the learning outcomes in Finland felt unequipped to evaluate progress or provide constructive feedback.

The case study findings on the challenges of implementation are corroborated by the global survey of regulatory agencies, in which the most frequent argument against learning outcomes was that they were a burdensome administrative task lacking operational support.

Concluding Thoughts

Reflecting again on the parallel experience of medicine nearly forty years ago, it is clear that the goal of evidence-based practice is reasonable. Today, students, parents, employers, and the public at large seek transparent measures of educational quality. The need for fair evaluations of educational quality is justifiable, and the assumption that learning outcomes are a means to provide them is understandable. The belief that clarifying educational expectations supports the achievement of those expectations is logical.

This study presents an initial foray into the evaluation of the learning outcomes policies in Europe. While only establishing a baseline using eight cases, it suggests that at this time there are no clear promising practices in learning outcomes policies for transparency or any other goal.

The primary discovery of this study is that learning outcomes policies in regulatory agencies are not having their intended effect because the policies themselves had issues in development, in implementation, or in the alignment of goals and activities. Thus, rather than dismissing the concept of learning outcomes as ineffective, it is more reasonable to treat learning outcomes as a policy problem, one requiring significantly more work on evaluating learning outcomes strategies and activities. To truly understand the value of learning outcomes, it is necessary to continue to establish what role they can play in providing transparent information on educational quality, as well as the cost-benefit ratio of the activities. There is thus a good deal of work to be done before we can provide “evidence-based education”, but the ground is beginning to shift.

As witnessed in medicine, it is not a simple transition to evidence-based practice, either in shifting the conceptual boundaries or in the trial and error of ad hoc activities. In the case of learning outcomes, it may take some time before there are demonstrable promising practices established through systematic evaluations.