
Introduction

The term transparency tools in a public policy context has many potential meanings (Ball 2009). For the purposes of this chapter, the term is taken to mean policy instruments devised by any stakeholder, whether a minister, policymaker, business person, institution manager, academic, consultant, or journalist, which are designed to make some aspect of practice more visible, susceptible to comparison and open to scrutiny, with a view to possible subsequent beneficial change.

Over the last sixty years, there has been significant growth in the number and variety of transparency tools designed to provide assessments for different stakeholders of the teaching and research performance of university staff and students. This chapter analyses the transparency tools that have been developed over this period in Wales, one of the four constituent nations of the United Kingdom. In this assessment, the following three research questions are addressed.

  a. Who has developed these transparency tools and why?

  b. Where and when were these tools developed?

  c. What were the key features of the transparency tools developed?

Using Michael Barber’s normative model of government approaches to the regulation of public services as a guide (Barber 2015), the following pages answer these three questions by reference to five overlapping phases of recent development. The first of the five phases of development began in England and Wales in the early nineteenth century with the introduction of a system of collegial and professional self-regulation based on a belief in the altruism and trust that could be placed in academics to assure and maintain the equivalence of standards between universities and colleges. The steady expansion of higher education in the UK in the twentieth century and pressure on public finances led, in the second phase of development, to the introduction of government-sponsored audit and assessment of education and research. Implicit in these arrangements were assumptions about a different form of regulation which Barber terms hierarchy and targets. In the 1990s, the continued expansion of higher education student numbers prompted the third phase of development with the creation of national newspaper university league tables and international higher education rankings designed to reflect the growing marketisation of higher education through a system which assumed student and institution choice and competition based on measures of teaching and research activity. In the early 2000s, in the fourth phase of development, a new range of tools for measuring performance appears to have been developed in a less transparent manner for senior managers, government agencies and private funders to measure the privatised economic returns of higher education and research.

As this chapter aims to demonstrate, as each new transparency tool has been introduced, established tools have generally not been removed. Instead, each new tool has added a further layer of control to what has become a very complex system. Meanwhile, despite, and on occasion because of, the introduction of these tools, problems with the perceived quality of higher education provision have still arisen.

In Wales in recent years, questions have begun to be asked about the connection between established transparency tools, local practice and national policy intention. These questions have provided the backdrop to the commissioning, in a fifth phase of development, of a series of reviews designed to promote what Barber would term devolution and transparency (Hazelkorn 2015; Diamond 2016; Weingarten 2017).

Collegial and Professional Self-regulation—Trust and Altruism

There are eight universities with their head offices and most of their operations based in Wales: Aberystwyth, Bangor, Cardiff, Cardiff Metropolitan, Swansea, the University of South Wales (USW), the University of Wales Trinity St David (UWTSD) and Wrexham (Glyndŵr) University. These universities provide higher education, research, innovation, community engagement, enterprise and other services to the people of Wales and to students and research funders from other parts of the UK and the wider world. In addition, there are sixteen further education colleges and work-based learning providers based in Wales which provide government-funded higher education courses and/or higher-level apprenticeships (Footnote 1). There are also six other universities and one higher education provider with branch operations or franchise arrangements operating in Wales (Footnote 2).

The first higher education institution in Wales was St David’s College, an ecclesiastical college based in Lampeter in West Wales. The College was founded by Royal Charter in 1822 but was not granted its own degree-awarding powers until 1852, when it was given the power to award a Bachelor’s degree in Divinity, with a subsequent extension of these powers to Bachelor’s degrees in Arts in 1865 (Davies and Jones 1905).

Fifty years after the establishment of St David’s College, three university colleges were formed by Royal Charter: in Aberystwyth for mid-Wales in 1872, in Cardiff for South Wales and Monmouthshire in 1883, and in Bangor for North Wales in 1885. When first established, these three colleges provided degree courses overseen and awarded through the University of London’s external degree programme, which had been established in 1858 (Haldane 1918). Concern about the limitations placed by the University of London’s external degree programme on the development of curricula locally led to the establishment of the federal University of Wales in 1893. This new university took on the role of overseeing the academic standards of degrees awarded in the three colleges and paved the way for further university expansion through the establishment of University College Swansea in 1920, the Welsh National School of Medicine in 1921 and the University of Wales Institute of Science and Technology (UWIST) in 1967, and through the subsequent inclusion into the University’s federal structure of St David’s College, Lampeter in 1971, Cardiff Institute of Higher Education (CIHE) in 1992 and University of Wales College Newport in 1996.

For most of the first one hundred and sixty years of the work of the higher education providers in Wales (from 1822 to 1982), the UK Government and Welsh local government, like their counterparts in England, Northern Ireland and Scotland, did not seek to regulate directly the activities and standards achieved by staff and students in these institutions (Harvey 2005). Instead, the governance and internal management of these institutions were left to their Councils/Courts and Senates (Shattock 2008).

In England, the establishment of the University of Durham in 1832 led to the development of an external examiner system designed to ensure comparability of the academic standards of students and staff at this new institution with the colleges of Oxford and Cambridge. The use of this quality assurance tool was made more extensive in the mid-1800s through the creation of the federal Queen’s University in Ireland in 1845, the University of London external degree programme in 1858 and the federal Victoria University in the North of England in 1880. Under these arrangements, the equivalence of standards achieved by students in constituent colleges was assured through external examiners appointed by the federal university. This is not to say that these arrangements applied with equal force in all areas. Non-degree qualifications and diplomas were often locally regulated by colleges, as was the adult education provision expanded through the activities of the Workers’ Educational Association in Wales from 1903 and the pioneer classes offered by the university colleges, beginning in Wales at Blaenau Ffestiniog in 1908 (Bull 1965; Evans 1953; Lowe 1970).

The importance of collegial self-regulation of teaching activities in Welsh university colleges was affirmed by the Royal Commission on University Education in Wales in 1915. Chaired by Lord Haldane, the Royal Commission was established following two UK Government Treasury reports which had drawn attention to the lack of funds for the planned activities of the University of Wales and the poor coordination of activities between the colleges. The Royal Commission’s final report in 1918 recommended that the University of Wales should assure standards by (a) fixing minimum entry requirements for degree courses; (b) laying down some general regulations for the conditions of degree-level study; (c) determining the standard of the final examination for a degree by means of external examiners; and (d) undertaking the general supervision and organisation of postgraduate studies. Aside from these general recommendations, it was suggested that the University’s Senate, consisting of academic staff, should be replaced by a smaller Academic Board and that the University’s Court of prominent local and national stakeholders should be expanded to over 200 members so that it could act as a travelling parliament for the University. The intention behind this proposal was that the future direction of the University should be influenced more strongly by national and local interests, and not only by the academic interests of college staff who, it was noted, were often educated or had worked outside of Wales. In exchange for accepting these reforms, the University of Wales was provided with a supplemental Royal Charter and the additional funds needed to establish University College Swansea and the National School of Medicine, among other things (Jenkins 1993).

The external examiner system of collegial self-regulation in Wales, like that in England (and, within Europe, paralleled only in Denmark), relied on academics from other institutions agreeing the assessment methods used with the internal examiners and then reviewing the examinations and other work completed by students before reporting to department-based committees to confirm the students’ results and degree classifications (Cuthbert 2003). In many subject areas, this collegial self-regulation was supplemented by periodic review by professional associations which took a role in the development of common standards and curricula for a range of occupations, most evidently those professions where the role of the association had been recognised through the award of a Royal Charter (Perkin 1990).

The primacy of the role of the external examiner was again affirmed by the Robbins Review of Higher Education in the 1960s (Robbins 1963). In addition, this review recommended the formation of the Council for National Academic Awards (CNAA) to regulate the award of higher education diplomas, undergraduate and postgraduate degrees, as well as doctoral awards in non-university settings. Under these arrangements, institutions wishing to offer these awards were required to apply for initial validation of new degree courses and then to undergo a process of quinquennial review (Silver and Davis 2006). Once established, these arrangements provided for a steady expansion of the number of non-university based higher education students. Meanwhile, the transparency of assessments of higher education teaching and assessment in these newer institutions was made clearer through the role of Her Majesty’s Inspectorate which inspected polytechnics, colleges of higher education and adult education providers, but not universities unless invited (Dunford 1998). In Wales, early beneficiaries of these changes were Glamorgan Polytechnic, Gwent Institute of Higher Education (now USW), North East Wales Institute of Higher Education (now Wrexham Glyndŵr University) and Swansea Institute of Higher Education (now part of UWTSD) (Pratt 1997).

Nearly two hundred years after their initial introduction in the UK, external examiners remain a central feature of quality assurance arrangements and have been supported in successive Parliament-sponsored reviews of higher education (Robbins 1963; Dearing 1997; House of Commons 2009). Despite this support, concern about possible inconsistencies in standards and potentially overly cosy relationships between internal and external examiners has prompted three reviews of these arrangements in recent years (UUK 2011; QAA 2013; HEA 2015). Each of these reports has drawn attention to the lack of consistency in the definition of the role of the external examiner and to differences in the training and development available to people in external examiner roles. In the words of the UUK report:

“to secure widespread confidence the system should be more open [transparent], so that students know what external examiners do, what they say about programmes, and how institutions respond to this” (UUK 2011:6).

What has been less explored in recent years has been the current and potential future role of professional, statutory and regulatory bodies (PSRBs). Here, the number and function of these organisations have continued to expand. In 2016, the Higher Education Statistics Agency (HESA) listed 178 PSRBs operating across the UK with a remit to oversee and accredit university-based undergraduate courses.

Away from the higher education teaching and assessment work of university staff, evaluation of the quality of research and other scholarly activity was undertaken through a system which relied on peer review of research work by academic colleagues who judged research funding applications, book proposals, draft journal articles and completed monographs (Boaz and Ashby 2003). Meanwhile, the quality of academic staff when appointed or promoted to teaching, research or administrative roles was assured using written references from academics at other universities (SCRE 2003).

Audit and Assessment—Hierarchy and Targets

The traditional collegial and professionally orientated approach to quality assurance and the determination of academic standards in UK universities, with its implicit assumptions of trust in academics and belief in the altruism of university leaders and other staff, came under pressure in the 1980s (Enders 2013). In 1981, the newly elected UK Conservative Government introduced cuts in university funding which averaged 17.9% (Warner and Palfreyman 2003). The allocation of these reductions was overseen by the University Grants Committee (UGC) with advice from officials in the Department of Education and Science (DES). Through these deliberations, it was decided to focus the reductions on institutions which were deemed to be less academically reputable, although the methods through which this was determined were not wholly transparent (Kogan and Hanney 1999).

The scale of the financial changes introduced by the UGC in 1981 created operational difficulties for many universities and led to the establishment of the Jarratt Committee to undertake a series of efficiency studies of the management of universities. The Jarratt report, which was published in March 1985, recommended that the management and governance of universities should be altered to bring them more closely into line with the practices of business organisations. Among the detailed recommendations was the proposal that “reliable and consistent performance indicators” should be developed to help universities plan and allocate resources more effectively (Jarratt 1985:22). The report assumed that these transparency tools would be used by senior managers, university councils and the Government-appointed regulator, but that they would not need to be visible to a wider range of stakeholders or indeed the public.

One of the less well remembered but arguably more important institutional changes introduced in the 1980s was the formation by the Committee of Vice-Chancellors and Principals (CVCP) of its Academic Standards Group. This group published its first report in 1986 together with three codes of practice covering external examiners, postgraduate training and research degree examination appeals. By outlining agreed expectations, this report, it has been suggested, “can fairly be said to have started the widespread effective discussion about quality and standards in UK universities” (Williams 1992).

Interestingly in hindsight, the first real test of what was intended in the late 1980s to be a more hierarchical and managerial approach to the definition of standards in universities was in Wales. When financial difficulties emerged at University College Cardiff (UCC) in 1986 following the UGC cuts, the UK Government’s Department of Education and Science appointed a team to investigate and to report on what should be done in response. The results of this investigation revealed a reluctance of senior managers to reduce costs at the institution in response to the UGC cuts, and it was agreed that the best way to resolve these problems was to merge the UCC with its near neighbour UWIST and for the Vice Chancellor of UCC to retire (Smith and Cunningham 2003; Williams 2006).

The removal of the binary divide between polytechnics and universities in 1992 was accompanied by the institutional recognition of national divides in higher education arrangements within the UK through the creation of four separate funding bodies [i.e. the Department of Education, Northern Ireland; the Higher Education Funding Council for England (Hefce); the Higher Education Funding Council for Wales (Hefcw); and the Scottish Funding Council (SFC)]. Despite the creation of these different funding and regulatory systems, there was a concerted effort to create UK-wide systems for the quality assurance and assessment of teaching and research. After some initial disagreement about how this might best be achieved, the Quality Assurance Agency (QAA) was launched in 1997 with a pan-UK remit to undertake institutional audits, teaching quality assessments, and overseas and collaborative audits. These review methodologies initially provided for the numerical scoring of the quality of teaching and learning in higher education institutions alongside audits of their internal quality assurance procedures. These methods developed over time to become a single integrated review methodology with judgements of confidence, limited confidence or no confidence in the soundness of an institution’s current and likely future management of academic standards, as well as the quality of the learning opportunities available to students.

The development of new systems of academic quality audit and assessment was not without incident. Quality concerns were reported on by the QAA at several universities in the UK between 1997 and 2017. In Wales, two of these cases led to subsequent mergers between the reviewed institution and a near neighbour (the University of Wales, Lampeter with Trinity University College, 2007–10, and the University of Wales with Trinity St David, 2011–13) (Lipsett 2009; BBC 2011).

In 2011, the remit of the QAA was extended by the introduction of a requirement by the UK Border Agency and its successor UK Visas and Immigration (UKVI) for higher education providers to successfully complete a standards and quality review as a condition of their Tier 4 licence to recruit students from outside the European Union. In 2014, the BBC Panorama documentary team examined the operation of the English Language Testing arrangements by a private contractor used by colleges and universities across the UK. After a subsequent investigation by the UKVI, 57 colleges and three publicly funded universities, including the London campus of Glyndŵr University, had their highly trusted status suspended by the Home Office preventing them from recruiting non-EU overseas students until the alleged breaches of procedure had been rectified (THE 2014).

Running alongside the introduction of government-sponsored systems for the assessment and quality assurance of teaching and learning in universities were more formalised arrangements for evaluating the quality of university-based research activity. The first of these evaluations was the Research Selectivity Exercise in 1986 which reviewed a small selection of books, journal articles and other publications submitted by universities across a wide range of subject areas. The methods used in this initial assessment were then steadily extended in successive Research Assessment Exercises in 1989, 1992, 1996, 2001, 2008 and the Research Excellence Framework (REF) in 2014. In the last of these exercises, researchers were required to submit up to four of their best publications from the preceding assessment period along with accounts of the research environment within which they operated and a selection of impact case studies.

The requirement in the REF 2014 to submit impact case studies extended an initiative begun in 1999 through what is now the Higher Education Business and Community Interaction survey. In England, but not in Wales, this survey has been used to determine the allocation of third-stream funding to support enterprise activity and community engagement through the Higher Education Innovation Fund (HEIF).

Recent reviews of the volume and quality of research publications as well as the content and quality of impact case studies in the last Research Excellence Framework have revealed that universities in Wales are highly productive and have very successfully demonstrated the impact of their work on commercial practice and public policy over recent years (Hewlett and Hinrichs-Krapels 2017; LSW 2017).

Meanwhile, broader evaluations of the consequences of the audit and assessment transparency tools for the operation of UK universities have drawn attention to the ways in which audit changes the way university staff work as they strive to make their institutions more auditable (Power 1999). This, it is suggested, increases the number of people involved in management and governance, reinforces discipline and subject boundaries, while reducing variation in the educational and research methods and processes used in these institutions (Strathern 2000; Morley 2003; Burrows 2012).

League Tables—Choice and Competition

The first national university league table in the UK was published by the Times and Sunday Times newspapers in 1992 (Times 2017). The Guardian newspaper followed this lead in 1999, the Daily Telegraph in 2007 and the Independent between 2008 and 2011 using information collated by Mayfield University Consultants for the Complete University Guide (Dill and Soo 2005; Complete University Guide 2017; Guardian 2017). Meanwhile, the Financial Times began its global MBA league table ranking in 1999 and then steadily expanded the breadth of this annual survey to include seven different types of postgraduate business degree (Bradshaw 2015). The first international university league table was produced in 2003 by a team of academics at Shanghai Jiao Tong University in China. Unlike the earlier national university newspaper league tables, the assessment of individual universities in what has now become known as the Academic Ranking of World Universities is based primarily on their research prowess (ARWU 2017). Times Higher Education followed the lead provided by the Shanghai Jiao Tong league table and produced its own global rankings in association with the QS consultancy in 2004 (THE 2017; QS 2017). This arrangement ended in 2009 when the THE and QS began to produce their own separate rankings. The last of the most widely quoted international tables in the UK is the CWTS Leiden ranking which began in 2007 (CWTS Leiden 2017).

The three national university league tables in the UK referred to most frequently by academics use slightly different measures and weightings to assess and rank institutional performance. All three of these league tables include input measures of the average entry tariff asked of higher education applicants as well as one or more measures of expenditure, whether spend per student, academic services spend, or facilities spend. All three league tables also make use of an outcome measure of students’ career prospects as revealed by their employment or engagement with further education and training six months after graduation. Where the national surveys differ most from one another is in the attention they place on different process and output measures. Here, the Times Good University Guide and the Complete University Guide tend to focus on measures of degree completion and the proportion of firsts and 2:1s awarded alongside measures of research prowess. Meanwhile, the Guardian league table places greater emphasis on measures of student satisfaction with teaching, feedback and the value added by their education.

The differences between the four international university league tables are more marked. The Academic Ranking of World Universities places a heavy emphasis on the quality of the research completed by current staff and alumni as measured by Nobel prizes and Fields Medals as well as the number of articles published in leading science and medical journals. A similar approach is adopted by CWTS Leiden but with an emphasis on the proportion of highly cited journal articles produced by academics at the ranked institutions. The Times Higher Education World League table, by contrast, includes a range of input measures which include the staff-student ratio and the proportion of postgraduate students at the institution. The output measures considered place a heavy emphasis on research as measured by journal paper citations, productivity and reputation, but there is also the inclusion of a measure of teaching reputation and the scale of international collaborative work. The last of the four league tables, QS, is arguably the least objective if not the most volatile as half of the total score is derived from a questionnaire survey of academic and employer reputation.
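
To illustrate in general terms how league table compilers combine weighted indicators into a single ranking, the short sketch below computes a composite score from a set of hypothetical, pre-normalised indicators and weights. The indicator names, values and weights are illustrative assumptions only and do not reproduce the methodology of any of the national or international tables discussed above.

```python
# Illustrative sketch only: a generic weighted-composite ranking of the kind
# used by university league tables. All indicator names, values and weights
# below are hypothetical and do not reproduce any published methodology.

def composite_ranking(institutions, weights):
    """Rank institutions by a weighted sum of pre-normalised indicators (0-100)."""
    scored = []
    for name, indicators in institutions.items():
        score = sum(weights[key] * indicators[key] for key in weights)
        scored.append((name, round(score, 1)))
    # Higher composite scores rank first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical indicator values, assumed already normalised to a 0-100 scale.
institutions = {
    "University A": {"entry_tariff": 72, "student_satisfaction": 81,
                     "spend_per_student": 64, "graduate_prospects": 70},
    "University B": {"entry_tariff": 65, "student_satisfaction": 88,
                     "spend_per_student": 58, "graduate_prospects": 74},
}

# Hypothetical weights summing to 1.0; real tables weight indicators differently.
weights = {"entry_tariff": 0.3, "student_satisfaction": 0.3,
           "spend_per_student": 0.2, "graduate_prospects": 0.2}

for rank, (name, score) in enumerate(composite_ranking(institutions, weights), start=1):
    print(rank, name, score)
```

As the sketch suggests, small changes to the chosen weights can reorder institutions with very similar underlying indicators, which helps to explain the volatility noted above where reputational survey results carry a large share of the total score.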

The development of the first Shanghai Jiao Tong international university league table was prompted by a desire among academics at the institution to measure the relative performance of universities in different countries (ARWU 2017). Despite this intention, there is growing evidence that these transparency tools are used by international and more able students to help them choose between institutions (Gibbons et al. 2015; Chevalier and Jia 2015). Whether these relationships affect the overall level of university student admissions and whether the impact is of the same magnitude in all nations and regions of the UK is not yet clear. At present, anecdotal evidence suggests that the impact is similar in different parts of Wales and England and that, in the absence of student number controls, a high league table position is beneficial for institutions in the top half of the tables but appears to have a negative impact on those in the bottom half (NAO 2017).

These effects, it has been suggested, also influence university mission and department activity as they encourage more institutions and academics to copy the behaviours of those organisations and individuals who appear to do well by these measures (Hazelkorn 2008). As Shore and Wright note in their recent review of the impact of rankings on universities:

[a] ‘good dean’ knows in intimate detail how the variables in each of the rankings are constructed and will check that they are registering all the positive figures (about students, staff, income, publications, exam performance, graduate employment, etc.) in precisely the right way for the ranking companies to pick them up in their questionnaires and surveys. The deans explained that the rankings are ‘omnipresent’—that academic decisions about the curriculum, evaluation of subordinates, faculty publication strategies, admissions policies, and budget allocations are all shaped by their likely effect on the school’s numbers and ranking. (Shore and Wright 2015:428)

Measures of Satisfaction and Return—Privatisation

In the mid-2000s, the focus of UK Government sponsored transparency tools shifted towards assessments of the experiences of students, as recorded by satisfaction surveys run by consultancy companies, and towards measures of their subsequent earnings and career prospects, as compiled by university agencies and government departments. This emphasis on the private returns of higher education activity for individuals has been mirrored in evaluations of research activity, where the focus shifted from discipline and subject-based assessment of quality towards a greater emphasis on measures of the economic impact of this research for the commercial and public users and funders of these projects.

The first example of concern with the private returns of the customers of higher education institutions was provided by the National Student Survey (NSS), which was established in 2005 to gather, by means of an online questionnaire, the views of full- and part-time final-year undergraduate students studying at universities (NSS 2017). The coverage of this survey was extended to further and higher education colleges offering degree courses in 2008 and to alternative (and in many cases private) providers in England in 2015 (BIS 2015). From its inception, the NSS has been conducted by the private market research company Ipsos MORI on behalf of Hefce and the other national funding bodies in the UK. The survey consists of twenty-seven questions which ask students individually about their experiences of teaching, assessment and support while studying on undergraduate courses. In 2017, over 260,000 students at 357 institutions across the UK took part in the survey, including all but two of the higher education providers based in Wales (Footnote 3).

The launch of the NSS by the UK national funding councils in 2005 was accompanied by the parallel launch of the International Student Barometer (ISB) survey by another private educational consultancy, i-Graduate (i-Graduate 2017). This survey questionnaire is designed to help university managers to track and benchmark international student opinions of their teaching, learning and wider university experiences at UK higher education institutions, from initial enquiry to application, assessment and prospects after graduation. Unlike the NSS, there is no requirement for institutions in any UK nation to take part in this survey; however, six of the eight universities in Wales and most of the universities in England subscribe to this service (i-Graduate 2017).

Following the lead of the NSS and ISB, the Higher Education Academy (HEA) in the UK developed four survey questionnaires to help university staff assess their students’ educational experiences.

The first of these questionnaires is the annual Student Academic Experience Survey (SAES) which was developed in 2006 in association with the Higher Education Policy Institute (HEPI) to assess the experience of undergraduate students across the higher education sector. The most recent survey in 2017 drew on 14,057 responses to a survey questionnaire which was posted to 70,000 members of the YouthSight student panel, which is recruited by the Universities and Colleges Admissions Service (UCAS) and which pays students £4 to £10 to complete each survey (YouthSight 2017).

The second questionnaire is the biennial Postgraduate Research Experience Survey (PRES) which was launched in 2007 to assess students’ experiences of supervision, institutional resources, research community, progress and assessment, as well as skills and professional development. The last survey, in May 2017, received 57,689 responses from students in 117 higher education institutions accounting for 53% of the UK’s postgraduate research student population (HEA 2017).

The third HEA questionnaire is the annual Postgraduate Taught Experience Survey (PTES) which began in 2009. This survey asks students questions about teaching, learning, engagement, assessment, feedback, organisation and skills development on their course. The last PTES in 2016 gathered responses from 82,000 students at 108 institutions (HEA 2017).

The fourth of the HEA questionnaires is the UK Engagement Survey (UKES), which was designed and trialled in 2013 to enable university managers to understand their undergraduate students’ experiences in the areas of critical thinking, learning with others, interacting with staff, and reflecting and connecting, as well as course challenge, engagement with research, staff-student partnership, skills development and how students spend their time.

The UK Government White Paper on higher education, “Students at the heart of the system”, published in 2011, drew attention to the variety of different measures of higher education performance and recommended that institutions should publish summary information on their websites (BIS 2011). Many of these different measures were subsequently brought together by the Higher Education Statistics Agency (HESA) in a Key Information Set (KIS) for each institution. Developed to provide prospective students with the information they would need, it was suggested, to make informed choices about which university to attend, the KIS has more recently been collated on the UNISTATS website (UNISTATS 2017). This information includes student entry requirements, NSS results, graduate destination data and details of professional accreditations, as well as links to more detailed curriculum information.

The Longitudinal Education Outcomes (LEO) dataset was developed in 2015 to improve the graduate destination data available to students and to enable applicants, institutions, regulators and others to gain a better understanding of the longer-term earnings prospects for graduates from different courses and institutions (DfE 2016). The LEO dataset was constructed by linking data collected by Her Majesty’s Revenue and Customs (HMRC) with the individual learner records of graduates from different institutions.
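
As a purely illustrative sketch of the kind of record linkage that underpins a dataset such as LEO, the example below joins a hypothetical table of earnings records to a hypothetical table of individual learner records using a shared identifier and then summarises median earnings by institution and subject. The column names, values and matching approach are assumptions for illustration only; the actual linkage between HMRC data and learner records uses its own matching procedures and safeguards.

```python
# Illustrative sketch only: linking earnings records to learner records by a
# shared identifier, in the spirit of the LEO dataset described above. The
# column names and values are hypothetical.
import pandas as pd

# Hypothetical individual learner records: one row per graduate.
learners = pd.DataFrame({
    "learner_id": [101, 102, 103],
    "institution": ["University A", "University B", "University A"],
    "subject": ["History", "Computing", "Nursing"],
    "graduation_year": [2012, 2012, 2013],
})

# Hypothetical earnings records keyed on the same identifier.
earnings = pd.DataFrame({
    "learner_id": [101, 102, 103],
    "tax_year": ["2016/17", "2016/17", "2016/17"],
    "annual_earnings": [24000, 31000, 27000],
})

# Link the two sources and summarise median earnings by institution and subject.
linked = learners.merge(earnings, on="learner_id", how="inner")
summary = (linked.groupby(["institution", "subject"])["annual_earnings"]
                 .median()
                 .reset_index())
print(summary)
```

The printed summary is the kind of course- and institution-level earnings figure that linked datasets of this sort are intended to support.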

More recently, information of the type developed for the UNISTATS website has been combined with information submitted by higher education providers to form the basis for judgements in the Teaching Excellence and Student Outcomes Framework (TEF) (Hefce 2017). The TEF is a voluntary system for rating undergraduate teaching, learning and student progression at an institution level. The scheme was piloted in 2016 with an assessment of the undergraduate higher education provided in each participating institution under the three headings of teaching quality, learning environment, and student outcomes and learning gain.

The first set of TEF results published in June 2017 divided institutions into four rating categories: gold, silver, bronze and provisional. Across the UK, 292 higher education providers took part in the first TEF, and they achieved 60 gold, 115 silver, 53 bronze and 65 provisional ratings. In Wales, twelve higher education providers or validators of awards participated in TEF and achieved 1 gold, 6 silver, 4 bronze and 1 provisional rating (Hefce 2017).

There have been no formal evaluations of the impact of the TEF to date; however, provisional analysis by the Hotcourses website indicates that the ratings have had a significant impact on international students’ interest in institutions, with marked increases in the number of students from India, Brazil, Thailand and Turkey considering applying to institutions that had achieved a gold rating (Porter 2017).

Devolution and Transparency

The impact of league tables and the TEF on the pattern of local and overseas student recruitment, as well as the signals that these systems send to university leaders and academic staff about what is valued, what should be resourced and what should be rewarded, has prompted Welsh Government ministers to consider the current and potential future operation of transparency tools in Wales.

Reviews of the funding and oversight of post-compulsory education in Wales by Professor Ellen Hazelkorn, Professor Ian Diamond and Professor Graeme Reid in 2015, 2016 and 2017 respectively recommended the development of a long-term strategy for the provision of tertiary education and research to cover all aspects of government funded activity until 2030 (Hazelkorn 2015; Diamond 2016; Reid 2017). These reports also recommended that the Welsh Government should take a broad view of the objectives for the tertiary education offered to people from the age of 16 to retirement in Wales, as well as the research and innovation which underpins this activity and which supports companies and public services in Wales and the wider world. These recommendations were consistent with the requirements of the Well-being of Future Generations (Wales) Act 2015 (Table 1).

Table 1 Well-being of Future Generations Act (2015) Well-being goals

This Act recommended that public bodies in Wales should encourage: 1. long-term planning; 2. problem prevention; 3. integration between service providers; 4. collaboration between organisations; and 5. involvement of the people who will be affected by any activity in its planning and development. The Act also specified seven Well-being Goals and a commitment to developing Well-being indicators which will be monitored and reported on at regular intervals by an Independent Commissioner for the Well-being of Future Generations (2015, see Fig. 1). Building on these recommendations, Professor Harvey Weingarten was commissioned in 2017 to develop a series of performance measures/transparency tools to help Welsh Government ministers, tertiary education leaders, academics, students and other stakeholders gain a clearer understanding of the operation and effectiveness of different aspects of tertiary education provision (Weingarten 2017).

Professor Weingarten is scheduled to report on his recommended performance measures for the tertiary education sector in Wales in 2018. In addition, it is intended that a new legislative bill will be introduced to amend the laws regulating the provision of higher and further education and work-based learning in Wales and to establish a Tertiary Education and Research Commission for Wales (TERCfW). This new body, it is suggested, will develop and maintain a strategy for devolved tertiary education, allocate funding, commission quality assurance and enhancement activities and monitor performance using a variety of transparency tools.

Conclusions

This chapter has reviewed the development of transparency tools for the monitoring of higher education in Wales over the last one hundred and ninety years and commented upon who developed these tools, where and when they were implemented, as well as describing the key features of each successive set of measures. Using Michael Barber’s normative model of different forms of intervention by government and other agencies, the picture that has emerged is one of an increasing range of ever more complex transparency tools. Each new tool that has been developed has been added to an existing panoply of measures and monitoring devices. With these developments, academic leaders, governors, and regulators have been presented with a bewildering array of different indicators which are more or less visible depending upon the source of information and its funding.

The external examiner system, established in the mid-nineteenth century, remains a central tool for the determination and maintenance of academic standards as well as the measurement of student achievements. However, despite over a century of operation, this system remains largely invisible to students and other stakeholders who are not employed by universities or involved directly in their regulation. From the mid-1980s, the traditional approach of trust and altruism was supplemented by an increasingly elaborate system of hierarchy and targets introduced by government-sponsored agencies to monitor and reward the performance of higher education institutions and their academic staff. The emphasis under these arrangements has switched over time from financial inputs towards greater attention to measures of process and output, while attempting to link these to symbolic and financial rewards at an institutional level. At the end of the 1990s, national newspapers and universities in different parts of the world sought to support greater student choice and competition between institutions through the publication of university league tables. More recently, building on the market ethos promoted by university league tables, there has been a further shift towards the privatisation of the measures used to monitor activity. There are many more survey instruments available for monitoring student satisfaction and experience at university, but the information collected using these tools is generally confidential and not available to prospective students and other stakeholders who are not employed by the university.

In Wales, Welsh Government ministers have sought to avoid market-based systems of higher education. Instead, the Well-being of Future Generations Act 2015 and a series of reviews of higher education governance, funding and performance management have sought to promote a system-wide view of provision and its development. This approach has not been without its own challenges. For example, the recent consultation over baseline regulatory requirements and performance measures for higher education institutions in Wales, as well as over a revised quality code designed by the UK Standing Committee for Quality Assessment for application to higher education across the UK, has revealed some of these tensions (Hefcw 2017a, b; QAA 2017).

The tension between national and institutional interests has been a recurrent theme in the development of transparency tools in Wales. The Royal Commission report on University Education in Wales in 1918 sought to resolve this tension by bolstering the voice of national interests through increases in the number of local representatives on the Court of the University of Wales and reductions in the influence of academics by reducing the size of the University’s Senate/Academic Board. In more recent times, the university parliament that Lord Haldane envisaged has been realised, albeit in a different form, through the creation of the Welsh Government and Assembly and their associated scrutiny committees and cross-party groups. However, the development of measures that capture progress against national, as well as institutional and individual student and academic, objectives is inevitably challenged by the presence of wider UK and global performance measures and league tables, which are often a greater focus of attention within institutions. The recently announced Weingarten review has been established in recognition of these tensions and will seek, where possible, to reconcile if not resolve these challenges.