Academic logics in changing performance measurement systems: An exploration in a university setting

Enrico Guarini (Department of Business and Law, University of Milano – Bicocca, Milano, Italy)
Francesca Magli (Department of Business and Law, University of Milano – Bicocca, Milano, Italy)
Andrea Francesconi (Department of Economics and Management, University of Trento, Trento, Italy)

Qualitative Research in Accounting & Management

ISSN: 1176-6093

Article publication date: 3 February 2020

Issue publication date: 12 March 2020

Abstract

Purpose

The purpose of this study is to analyse how academic staff cope with the new culture of performance measurement and assessment in universities. In particular, the study aims to shed light on how external pressures related to measurement of research performance are translated into organisational and individual academic responses within the university and the extent to which these responses are related specifically to the operational features of performance measurement systems (PMS).

Design/methodology/approach

The study is based on a case study conducted in an Italian public university and based on interviews with a cross-disciplinary sample of faculty members.

Findings

The study provides insights into how linking financial incentives and career progression to research performance metrics at the system and organisational levels may have important reorientation effects on individual behaviours and epistemic consequences for academic work.

Research limitations/implications

The study is based on interviews, so one limitation is related to the risk of researcher and interviewee personal bias. Moreover, this study is focused on one single case of a specific university setting, which cannot be fully representative of the experiences of others.

Originality/value

The study contributes to the literature on management accounting by exploring the factors that might explain why the unintended effects of PMS on academics’ behaviour reported by several studies might occur. From a practitioner’s point of view, it shows features of PMS that may produce unintended effects on academic activities. It also highlights the need to rethink PMS for the evaluation of university performance through the involvement of different stakeholders.

Citation

Guarini, E., Magli, F. and Francesconi, A. (2020), "Academic logics in changing performance measurement systems: An exploration in a university setting", Qualitative Research in Accounting & Management, Vol. 17 No. 1, pp. 109-142. https://doi.org/10.1108/QRAM-06-2019-0076

Publisher: Emerald Publishing Limited

Copyright © 2020, Enrico Guarini, Francesca Magli and Andrea Francesconi.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In recent decades, in several countries, there has been a profound transformation in the concept of the university (Wedlin, 2008). Scholars have spoken of the marketisation or corporatisation of universities in describing their adoption of management logics and practices (Parker, 2002; 2011). The intensity of this phenomenon has varied across national contexts, and it has caused universities to compete to attract students, to obtain research funds and to increase other types of revenue (e.g. through fundraising activities). Nevertheless, universities remain responsible for serving the public interest by achieving their goals related to higher education, research (both theoretical and applied) and knowledge dissemination. However, there are conflicting trade-offs among these three missions, at both the individual and institutional levels, driven by the conflicting expectations – some tangible, others intangible – that various stakeholders place on universities. These expectations have changed continuously over time. Universities developed in the Middle Ages as a means of providing education (from the Latin universitas, meaning “community”). Initially, lessons were readings of, and commentaries on, philosophical and legal texts, and teachers and students began to organise themselves into autonomous corporations or universitates, from which the self-organising nature of this institution originates, and which is at the centre of the traditional autonomy and independence of the academy (Rüegg, 2009). Since the nineteenth century, universities have evolved into more complex environments in which the combination of high-quality education, research and dissemination activities is necessary to guarantee a fertile environment for creating new knowledge and educating future leaders. The societal impact of universities, and of research in particular, has gained importance for the development of new knowledge and a highly educated workforce. Alongside greater expectations for universities to be more responsive to national economic needs, in many countries the costs of higher education have grown enormously, with consequences for public budgets.

As calls have been voiced for more transparency in order to ensure value for money in public expenditure, governments around the world have intensified mechanisms to measure the performance of universities, under the principles and reforms of new public management (NPM) (Kallio et al., 2017; Parker, 2012; Ter Bogt and Scapens, 2012). This has led policy makers in several countries to introduce funding schemes linked to university performance, i.e. performance-based funding (PBF) (Claeys-Kulik and Estermann, 2015; Dougherty et al., 2016; Organisation for Economic Cooperation and Development, 2005). The global financial crisis has exacerbated constraints on public budgets and has put growing pressure on funding regimes for higher education. Such changes are particularly significant in Europe because of the traditional reliance of universities on public funding. Performance measurement systems (PMS) have influenced the academic ethos in universities (Kallio et al., 2016) and have increased the importance of academic groups and departments that drive institutional performance in grant competition (Edgar and Geare, 2013).

Universities, whether public or private, cannot be exempt from external controls. Where research is publicly funded, it is important to have in place a process for determining the appropriate and equitable distribution of public money to ensure maximal impact in terms of public benefit. Moreover, in a quickly changing world, even universities must adapt their business models to the changing economic and social context. The question that therefore arises is whether universities are changing their processes in a manner that is coherent with the achievement of their multidimensional missions in this new context. It is a source of great concern to see how the process of research assessment has, inevitably, encouraged competition between departments and disciplines, and created rankings of individuals on the basis of “research excellence”. This issue has driven the authors’ interest in conducting the present study, as scholars and members of business and management research groups.

Scholars in management accounting have pointed out that performance evaluation practices have hindered the process of knowledge production to such an extent that the traditional autonomous model of universitas has been replaced by a system of power-relationships among academics exerted through quantitative performance measures (Kallio et al., 2017; Neumann and Guthrie, 2002; Parker et al., 1998; Taylor et al., 1998; Ter Bogt and Scapens, 2012).

Several empirical studies have reported dysfunctional effects of PMS on academic work (Kallio et al., 2016; Pop-Vasileva et al., 2011; Thorsen, 1996; Tytherleigh et al., 2005) and increased scrutiny by university administrators (Macdonald and Kam, 2007; Parker and Jary, 1995; Prichard and Willmott, 1997). Such effects are related to issues such as game playing (Lewis, 2014), increases in internal bureaucracy, constraints on innovation (Behn and Kant, 1999), and performance measurement professionalism (De Bruijn, 2008). Within universities, such unintended effects may be linked to specific issues such as distortions in the selection of journals to publish in, the artificial maximisation of citations, the choice to undertake research on mainstream research topics, and the perceived lack of relevance of research topics of exclusively local interest.

However, much of the previous literature (for a thorough review, see Grossi et al., 2019b) has adopted either a sector-level approach to higher education (HE) or a narrow focus on the diffusion of PMS within a specific university setting (Sutton and Brown, 2016).

Prior studies have investigated the effects of institutional arrangements such as research assessments schemes and journal rankings (Macdonald and Kam, 2007; Martin-Sardesai and Guthrie, 2018; Martin and Whitley, 2010; Modell, 2003), or individuals’ perceptions of issues related to academic work, such as research activities, stress and academic ethos (Kallio et al., 2016; Pop-Vasileva et al., 2011).

There are few studies that investigate the operational functioning of PMS within universities (Dobija et al., 2019) or that investigate how the design and operation of these systems might cause unintended effects on academic behaviour (Agyemang and Broadbent, 2015; Sutton and Brown, 2016).

In this context, the research questions addressed by the present study are:

RQ1. How are external pressures to measure performance translated into organisational and individual academic responses within the university?

RQ2. How are the individual responses related to the operational features of PMS?

This article presents the findings of an exploratory study conducted in an Italian public university and based on interviews with a cross-disciplinary sample of tenured faculty members.

The study contributes to management accounting literature by focusing on institutional factors and features of performance measurement systems that contribute to explaining why the unintended effects on academic behaviour that are reported in several university settings might occur. The study also responds to Kallio and Kallio’s (2014, p. 585) call for more comparative studies on the suitability of PMS in HE involving universities and fields of science in different countries.

The findings show that the design of PMS within universities is not neutral with regard to individual responses and effects on academic work, which are differentially affected by external research assessment systems and funding schemes. In particular, the study shows that aligning university PMS with external funding schemes can bias the allocation of faculty and resources among the departments.

Moreover, whereas the study has confirmed certain unintended effects of PMS identified in previous studies, it has also highlighted how performance measures can have important epistemic consequences for the process of knowledge production across scientific fields, and can hinder teaching and outreach activities. It is argued that PMS should be designed in a way that allows public universities to achieve a balanced mission in research, teaching, and outreach activities.

The remainder of this paper is structured as follows. The next section provides a literature review and a theoretical framework for the empirical analysis. Section 3 describes the research methodology and the case setting used in this study. Section 4 provides information about the Italian HE context by focusing on the main characteristics of the PMS that is defined at the national level to assess universities’ performance. Section 5 analyses the features of the PMS of the university analysed and discusses evidence from the study regarding the different types of academic responses to PMS. The final section presents a conclusion and discusses study limitations and suggestions for future research.

2. Theoretical background

2.1 Performance measurement in universities

Since the 1990s, a radical growth of new managerial models and methods inspired by NPM has impacted university operations, with changes in governance structures, accountability mechanisms and decision-making processes (Parker, 2002). PMS, quality assessment and audit systems have been implemented in universities under the auspices of NPM reforms (Kallio et al., 2017; Parker, 2012; Ter Bogt and Scapens, 2012). Literature on this topic has grown in parallel, investigating the effects of these changes on academic work. Two approaches of inquiry have emerged:

  1. one that has investigated the effects of these systems at a sector-wide level; and

  2. the other that has looked at the effects of changes on individuals at the micro-level within organisations.

The analysis hereinafter follows these two main categories of studies.

Performance measurement takes place in different countries through mechanisms of external assessment of the research quality of universities, which have then been linked to funding to support the efficient allocation of resources. A number of authors have studied various country systems. For example, the first external research assessment exercise – the RAE: Research Assessment Exercise (currently the REF: Research Excellence Framework) – took place in the UK in 1986 under Margaret Thatcher’s government (Rebora and Turri, 2013). Other European countries, such as Denmark (Opstrup, 2017), Finland (Kallio et al., 2016; Kallio et al., 2017), Ireland (Morrissey, 2013), Italy (Rebora and Turri, 2013), Portugal (Melo et al., 2010) and Sweden (Modell, 2003), have developed their own systems for measuring research performance. Non-European countries, such as Australia (Neumann and Guthrie, 2002) and the USA, have also introduced research assessment systems. Audit mechanisms have also been introduced, providing accreditation of institutions and programmes as well as the measurement of teaching quality.

In response to a growing audit culture and related government policies, and competition created by comparison of measures between institutions, universities have adapted their internal structures and management styles and developed incentives organised around the idealised model of corporate performance to enhance research excellence and impact. These changes have gradually transformed the institutional logic from “universitas” to the “entrepreneurial university” (Czarniawska and Genell, 2002; Parker and Guthrie, 2005; Pop-Vasileva et al., 2011; Saravanamuthu and Tinker, 2002; Winter et al., 2000).

Another important consequence of this transition is represented by the increase in administrative power. This effect has been associated with the implementation of a hierarchical management model which replaced the traditional committee-based model. This new model is “characterized by a significant increase in the number of ‘professional’ management appointees in the central and faculty/school bureaucracy as opposed to elected senior academics who have traditionally functioned as deans” (Neumann and Guthrie, 2002, p. 725). Parker (2011, p. 441) claims that “universities have increasingly moved toward the redefinition of university vice-chancellors or principals as chief executive officers (CEOs), with government councils being downsized and composed more in the nature of corporate boards, with a pre-dominant membership drawn from industry and commerce”.

The shift of university management to this more corporate style is also seen as a direct threat to the academic freedom of academic staff (Taylor et al., 1998; Melo et al., 2010), as they are used to having a greater degree of flexibility and autonomy in their work (Bellamy et al., 2003). Innovation and novelty are tempered by the constraints of the control system; this produces a strengthening of conformity and superficiality, especially in fields such as the social sciences (as opposed to the physical sciences), creating constraints on the development of multivocality and leading to ambiguity (Gendron, 2008). Nevertheless, despite the shift towards corporatisation, collaborative and collegial practices still remain in place within universities (Christopher, 2012).

With regard to individuals, several studies have looked at how the growth of performance measurement practices in academia has generated significant effects on individual academics’ behaviour.

The introduction of competitive funding mechanisms in HE has been found generally to increase research productivity across all types of institutions (Creamer, 1998; Leech et al., 2015). Great emphasis is now given to outputs such as refereed journal publications (Saravanamuthu and Tinker, 2002). For this reason, journal rankings have proliferated in the realm of HE from 1999 onwards and have become one of the most central concerns within business schools (Gray et al., 2002; Parker and Guthrie, 2005; Parker et al., 1998).

Gendron (2008, p. 100) underlines that “the performance measurement scheme based on journal rankings tends to become increasingly influential in a growing number of disciplines, putting significant pressure on researchers to publish in ‘top’ journals to ensure that they can show a displayable productivity, otherwise their careers are at risk of perishing”.

One of the dysfunctional effects of these systems, for example, is that “publication is now viewed as the objective of research, rather than the dissemination of the knowledge contained within it” (Steele in Bazeley, 2010, p. 67).

In addition to the effects on researchers’ agendas, there are also individual/psychological effects.

Gendron (2008) has pointed out that such changes amount to a revolution in the academic world: academics are no longer professors; they have become performers.

The need to deal with and adapt to the new performance management culture has led academics to feel overworked, pressured, demoralised and frustrated (Pop-Vasileva et al., 2011). For example, research performance rankings can be dangerous because they pit institutions and individuals against each other. Ter Bogt and Scapens (2012) focused on the implementation of more quantitative PMS within two European universities, highlighting an increase in the use of quantitative measures of performance. These practices can induce anxiety and uncertainty, and can inhibit creativity and innovation (Ter Bogt and Scapens, 2012).

A consequence of PMS is stress in academic work, which is mainly attributed to meeting deadlines, frequent interruptions, excessive paperwork, long work hours and an increase in conflicts (Dobija et al., 2019; Pop-Vasileva et al., 2011, p. 409; Thorsen, 1996; Tytherleigh et al., 2005).

Previous literature (Anderson, 2006; Anderson et al., 2002) also adds that increased workloads with respect to teaching, research and administration have had an adverse effect on the job satisfaction of academic staff.

Using a different perspective, a few studies have investigated the operational aspects of PMS within universities. In particular, Agyemang and Broadbent (2015) analyse how management control systems are connected with processes of organisational change and how the effects of external performance assessment are mediated by the design of management control systems within the organisation. Important contextual variables are, therefore, the external environmental regulators, the organisational context and the actions taken by the individuals who work in the organisation.

These scholars argue that all these variables interact in explaining the effects of management control systems on academics. In particular, they point to a case in which the internal management control systems developed by academics might amplify rather than weaken the controls imposed by external research performance assessment. Moreover, in their theoretical conceptualisation, Agyemang and Broadbent (2015) argue that these connections lead to a sort of “symbolic violence” process that drives individuals and organisations towards a “reorientation”, often associated with individual “gaming” of the system. Dobija et al. (2019) investigate the different uses and users of PMS within universities, as resulting from external factors (isomorphic pressure) and endogenous factors. They show how uses of PMS in universities are strongly dependent on the attitudes and “reactions” of the various internal actors involved. According to the authors, when individual actors deny the utility of PMS, they are inclined to develop “resistance” strategies.

Sutton and Brown (2016) study how universities exert management control without threatening the autonomous motivation of their researchers, and in particular, how the use of PMS could prevent negative effects on academic work. Their study represents an attempt to understand the operational implications of some macro-level issues such as journal ranking, funding policies and performance assessment schemes, by examining the nature and operational consequences of management control systems within universities. They argue that, to avoid unintended effects on research, management control systems should “operate with an illusion of no control”. This illusion is achieved by designing incentive systems that favour long-term control, and that value the autonomy of researchers. Therefore, a better understanding is required regarding how PMS are designed within universities and what effects they have on individual behaviours.

This study adopts a management and accounting perspective to gain a better understanding of the case examined, as is explained in the following section.

2.2 Theoretical framework

New institutional theory (Greenwood et al., 2008) has been used in recent years to study performance measurement practices and accounting changes in organisations, in particular in HE (Dobija et al., 2019; Modell, 2001; Parker, 2011; Rebora and Turri, 2013; Saravanamuthu and Tinker, 2002; Tucker, 2016), and seems to be a reasonable theoretical basis for the purposes of this study. Neo-institutionalism as used in accounting studies views PMS as “institutions” – i.e. rules and routines (Burns and Scapens, 2000) – located in a social context that influence people’s behaviours within organisations.

The term ‘institutions’ also includes the taken-for-granted assumptions within an organisation that inform and shape the actions of individuals, while these assumptions themselves are outputs of social action.

In particular, attention has been given in the literature to the institutional context and the institutionalisation process by which PMS as social practices “come to take on a rule-like status in social thought and action” (Meyer and Rowan, 1977, p. 341).

One of the most important concepts developed in institutional theory is isomorphism: an organisation’s tendency to conform and homogenise itself with its institutional context in response to various external pressures in order to secure social approval or legitimacy.

Tolbert and Zucker (1983) highlight the concept of legitimacy in their study and identify two different motivations for the adoption of changes: the motivation of early adopters to improve operations, and the motivation of later adopters to appear modern, efficient and rational and thereby secure social legitimacy (Tolbert and Zucker, 1983; see also Brunsson and Sahlin-Andersson, 2000). Fligstein (1987), in contrast, offers what Greenwood et al. (2008, p. 9) define as “an exogenous shock model”, showing how disruptive changes in legal frameworks enable shifts in organisational behaviour by altering patterns of incentives and opportunities.

The literature has identified three types of isomorphism (DiMaggio and Powell, 1983): coercive, whereby external powerful organisations force an organisation to adopt a new rule; normative, whereby pressure is exerted by a professional organisation; and mimetic, whereby an organisation imitates another because it wants to be more rational and to avoid appearing deviant or backward. Some studies on PMS in universities have recognised only one of the three types of isomorphism in response to external pressures (Carmona et al., 1998; Carruthers, 1995; DiMaggio and Powell, 1983; Fogarty, 1996); in other cases, some characteristics of combined isomorphism can be identified (Parker, 2011). The adoption, for example, of accounting standards can be considered both as a result of external coercive pressures and also the need to incorporate within each organisation the standards established and introduced by professionals (normative isomorphism) (Modell, 2001). Similarly, organisational members may take on board the legal and cultural rules and expectations of the society around them through a process that combines both coercive and mimetic isomorphism (Parker, 2011).

However, the theory has emphasised that organisations do not always mechanically follow institutional pressures, but that there are different organisational responses to the same institutional pressures (Oliver, 1991), including conformity or resistance to the new norms and values (Hyvonen et al., 2009; Jarvinen, 2006). Hence, PMS may be acted upon in different ways in universities (Dobija et al., 2019) according to the different organisational responses.

Burns and Scapens (2000) developed a framework for interpreting how management accounting changes can be (or not be) institutionalised in the organisation. According to their model, the process of institutionalisation starts with the encoding of new principles into rules which consist of formalised statements of procedures. Then, these rules must be translated into new behaviours by organisational actors so that they become institutionalised. This process requires changes in “ways of doing things”, and so it may be subject to resistance if the new rules and behaviours challenge existing values and beliefs.

Recent works in institutional theory have emphasised the importance of variables that explain the relationship between institutions and individual actions. This evolution recognises the importance of decision-making, the power of individual actors/groups within the organisation to impose new rules, and the role played by institutional logic. The latter is defined as the “broader cultural beliefs and rules that structure cognition and guide decision-making in a field” (Lounsbury, 2007, p. 289). The notion of institutional logic in organisational studies is connected to the set of rules and practices, rewards, and sanctions that are developed and socialised by individuals within an organisation so that specific behaviours become legitimated and expected (Thornton and Ocasio, 2008).

Institutional logics, therefore, are experienced by individuals; such logics influence individuals’ mind-sets and define the behavioural roles by which actors carry out activities and take decisions.

Universities are regularly characterised by and subject to different and competing institutional logics (Ahrens and Khalifa, 2015; Bastedo, 2009; Grossi et al., 2019a; Kitchener, 2002; Kilfoyle and Richardson, 2011; Lounsbury, 2007; Narayan et al., 2017). Thus, the institutional logic approach may usefully integrate Burns and Scapens’s framework because it allows a better understanding of how individual and organisational behaviours interact in a particular academic-driven organisational context.

Institutional logic in academia has attributes that are connected with the autonomy of research, collegiality, and the lack of central control (Grossi et al., 2019b). The typical values and norms that characterise “academic logic” are drawn primarily from the model of science that emphasises research freedom, the openness of research results, and rewards in the form of peer recognition (Merton, 1973). It can be considered a particular representation of the societal-level professional logic by means of which academia advances fundamental knowledge in society (Conrath-Hargreaves and Wustemann, 2019).

Institutional logics are difficult to observe directly; however, they manifest themselves in particular organisational forms, managerial practices and individual decisions (Greenwood et al., 2010; Thornton and Ocasio, 2008). Institutional logics are often conceptualised in the form of abstract sets of norms and rules that facilitate classification and comparison. For example, scholars have contrasted co-existing academic and business logics within universities. Grossi et al. (2019a, pp. 5-6) argue that academic logic “focuses on the specific nature of research activities and the interests of the academic community”, in contrast with business logic, which focuses on managerialism, audit culture, and performance control.

Academic and business logics co-exist in universities as a combination of professional/academic and managerial/administrative values, both at organisational and individual levels. These multiple logics can be in competition and may be difficult to reconcile, resulting in ambiguous goals and rules for individuals, who react by either maintaining or changing their behaviour.

This study contributes to this theoretical categorisation by adding that academic logic, within a given university setting, can be further articulated into multiple kinds of academic logics that are connected to the traditions of different disciplines. The article suggests that a deeper understanding of academics’ responses to PMS can be gained by considering the potential differences between disciplines that influence the notion of academic work, i.e. the operating values and norms applied in carrying out research, teaching, and third-mission activities in universities.

The notion of combining these three activities in academic work is commonly rooted in academic ideology and institutionalised in academic practices. The usual assumption is that academic activities play complementary roles and are mutually beneficial (Shils, 1983). However, this representation oversimplifies the relationships and interactions between these three activities. Different academic logics take hold within the organisation insofar as the three categories of activities are viewed as distinct, rather than unitary, and as competitive and conflicting, rather than complementary. Moreover, the operational features of conducting each academic activity, as well as the rules for measuring their performance, may differ across disciplinary traditions, which can broadly be referred to as the hard sciences and the social sciences (Borlaug and Langfeldt, 2019). For instance, the hard sciences favour quantitative research approaches, dissemination of research results by means of journal articles, the use of bibliometric indicators, and spin-out commercialisation companies and patents for measuring research performance; the social sciences favour qualitative research approaches, dissemination via books, peer-review assessment and various forms of engagement for third-mission activities. More specifically, individual academics are socialised into their own discipline’s norms and rules over the lifecycle of their professional career. As a consequence, academics taking tenure and roles within universities are expected to take decisions and behave according to those specific values.

The broad concept of institutional logic is important in studies of management accounting in HE because it affects the creation of PMS, but how institutional logics are in turn affected by PMS is less commonly investigated. Scholars have pointed out the symbolic value of accounting rules in organisational settings and shown that these rules contribute, jointly with other belief systems, to creating specific logics for individual action (Dent, 1991; Hines, 1988; Hopwood, 1987; van Helden and Reichard, 2019). In particular, the use of PMS involves certain rules that operationalise symbolic values and pressures (e.g. action assumptions) that can affect individual behaviours and thus change organisational actions and routines. Within this process, actions and routines, in turn, contribute dynamically to the consolidation or affirmation of a new particular institutional logic (Burns and Scapens, 2000).

However, it is not self-evident how external performance measurement pressures are translated into organisational and individual academic responses in a university context characterised by different logics and pressures. Moreover, it is not well known how individual responses are related to the specific operational features of PMS. Increased understanding about the use of PMS in a university setting may offer insights into the effects of performance measurement in the public sector. Thus, the present study examines and categorises external pressures, the features of PMS as a response to these pressures, and the related institutional logics.

Figure 1 describes the conceptual framework used in the case analysis. Notwithstanding the fact that academic and business logics co-exist and are in conflict in a university setting (Grossi et al., 2019a), in order to shed light on how PMS impact individual academic behaviours, the proposed framework considers only academic logics.

External NPM pressures increase the use of PMS within organisations. In particular, academic actors are affected by the pressures and logics of their discipline, but they change and translate new rules when bringing them into practice, and this occurs at two levels. At the “organisational” level, external pressures are filtered by the governing bodies of the organisation, which decide the features of PMS and create decision-making routines associated with their use. At the individual level, PMS affect individual actions and behaviours. The outcome at both levels depends on the pressures, institutional logics, and rules acted upon in the organisation. In Figure 1, the two-way arrows indicate mutual interaction between pressures, institutional logics, and organisational responses.

3. Research methodology

3.1 Case method

Given the exploratory aim of this article, this study uses a case study approach (Stake, 1995; Yin, 2014). The university considered in this study (hereinafter, “university”) is a young, multi-discipline public university located in Italy with student numbers in the range of 30,000 to 40,000 and a dual focus on teaching and research activities. About 62 per cent of students are enrolled in social sciences programmes, the remainder in mathematics, physics and natural sciences programmes. The university is divided into four schools and 14 departments, and it currently employs 954 tenured academics and 800 administrative staff. The university has strong research performance; it was well ranked in Italy’s Valutazione della Qualità della Ricerca (VQR) research assessments conducted in 2012 and 2016 and also achieved several “department of excellence” rankings, winning funding in the last such competition in 2018.

The university was selected because of the emphasis on quantitative metrics in its internal PMS. It represents an extreme case for two reasons: first, the university designed the PMS by replicating exactly the indicators used by the government for external performance assessment and funding; second, the university assigned the highest weight to research performance metrics in comparison with other Italian universities, as reported in a previous study (Francesconi and Guarini, 2018).

To obtain perceptions from different research cultures (Borlaug and Langfeldt, 2019; Hammarfelt and Rushforth, 2017; Horta and Santos, 2019), this study considered a purposive-convenience sample of eight departments (out of 14). Departments were selected by type of scientific discipline – “big or hard sciences” (HSc) vs social sciences (SoSc) – and by their performance in the last national competition for funding departments’ research excellence, classifying them under the categories of “excellent and funded” (hereinafter “excellent”) and “not excellent or not funded” (hereinafter “not excellent”). The latter criterion enables examination of similarities and differences between faculties that exhibited relatively high or low research performance, as “department of excellence” funds are granted on the basis of the previous five-year research performance (VQR) and of the quality of the research development plan. Hard sciences included all scientific disciplines such as mathematics, physics and natural sciences.

Among the eight departments, four were selected from the hard sciences and four from the social sciences; they were then classified according to their performance in the “department of excellence” competition. These variations in sampling were expected to elicit a range of responses in the way PMS were received, reflecting the variety of institutional logics present, e.g. organisational circumstances and discipline traditions.

Table I shows features of the sample.

In terms of composition, the selected departments ranged in size from 37 to 64 members, with interviewees from different scientific disciplines and with varying tenure: 18 of the 32 respondents were male (56 per cent) and 14 were female (44 per cent); 15 were full professors (47 per cent), 12 were associate professors (37 per cent) and five were assistant professors (16 per cent).

3.2 Data sources

Individual face-to-face interviews were considered the optimal method for collecting data on academics’ views in response to the metrics applied by the university for research assessment. This approach is useful because it offers a rich account of the interviewee’s experiences, knowledge, ideas and impressions in a particular social context (Alvesson, 2003; Chandler, 2008; Qu and Dumay, 2011). The study also benefited from participant observation and the authors’ own lived experience of the analysed case.

To obtain different perspectives, two types of interviews were developed: one for department heads and the other for faculty members. Department heads are professors elected by their peers; they have an important role in the design and implementation of PMS, as they sit as delegates in the governing bodies of the university, and they are responsible for the coordination of research, teaching and third-mission activities in their unit. The interviews with department heads were aimed at capturing the main features of PMS at the university level and the actions taken in response to institutional pressures. Tenured professors were asked how they value research, teaching and the third mission; about the performance requirements they face as members of the faculty; and about the metrics that most strongly influence their behaviour and their perceptions of the effects of their choices.

In this study, 32 interviews were conducted with full-time tenured professors (24) and department heads (eight) of the eight departments of the university during the period from July to December 2018. All interviewees had research and teaching appointments as required by legislation.

To introduce the topic, interviewees were asked to describe their familiarity with features of PMS in place in Italian HE and within the university. Thereafter, questions focused on understanding the extent to which their academic behaviour is related to performance metrics.

In order to avoid potential bias related to personal acquaintance with the interviewees, they were selected randomly. Sampling continued until the authors reached saturation. For the purpose of this study, saturation means saturation of knowledge (Strauss and Corbin, 1998): the authors recognised emerging patterns in the experiences of the interviewees and stopped once new interviews merely confirmed what had already been perceived in previous interviews.

Most interviews were conducted jointly by the authors, and lasted for approximately 30-45 min each. Interviews were conducted in an informal manner in the offices of the interviewees as per their choice, and were recorded and transcribed for later coding and analysis. The interview data were complemented by other sources including public data and the university’s internal data (accessed by the authors, see Appendix 1), to ensure data triangulation and validity of the empirical evidence (Stake, 1995; Yin, 2014).

3.3 Data analysis

Data analysis used a reflexive and inductive process to allow categories to emerge from the data rather than relying on an a priori theoretical framework (Strauss and Corbin, 1998).

Applying a manual open coding process, the authors classified data through a multiple-step process. In the first step, transcripts were studied independently by each author, and data were initially grouped into codes for each interview. In the second step, emerging code categories were compared and iteratively agreed through multiple readings and interpretations.

In the third step, similarities and differences across departments and scientific fields (i.e. hard sciences vs social sciences) were explored and overarching issues identified. The emerging issues were triangulated with empirical data, which included the university’s financial reports (2008-2018) and publications data (2009-2017). To ensure privacy, the authors dissociated names from responses during the coding process (Creswell, 2014). A coding matrix (Appendix 2) was used to code emerging categories and themes, which have been summarised in a concept map (Wheeldon and Ahlberg, 2012) to provide a visual representation of the findings of this study (Figure 2).

The following subsection briefly analyses the context of performance management in Italian universities.

4. Background: the external pressures

Public universities in Italy are governed by a system of professional bureaucracy in which academics take on temporary management positions within governing bodies following their election by peers. Only administrative activities (e.g. procurement, staff hiring, etc.) remain the responsibility of public managers, with a director general in charge of operations. Moreover, academics in governing bodies hold decision-making power in the design of PMS and the university strategy.

As in many other countries and public sectors worldwide, in recent decades, the Italian HE system has been subjected to NPM reforms aimed at introducing management-by-results and PMS (Rebora and Turri, 2013). Although Italian HE has relied on performance management since the 1990s, the pressure for increased transparency and competition has grown since the early 2000s through the involvement of both the organisational and the academic side of universities. In particular, since 2011, Law 240/2010 has introduced a research assessment framework for universities (VQR) managed by the Ministry of Education, University and Research through the National Agency for the Evaluation of Universities and Research Institutes (ANVUR). To date, there have been two VQR rounds (2004-2010 and 2011-2014), with the third expected to start in 2020. Highly reputed professors have seats in the governance of ANVUR and are involved in setting up performance assessment criteria and metrics.

In particular, the quality of research is assessed and compared on a discipline-by-discipline basis through a process of peer review, under the guidance of scientific panels for research areas (GEV) made up of professors. The quality of research outputs (mainly publications) is considered only in terms of academic relevance and standards, while impact beyond academia is not considered. The evaluation criteria are bibliometric (i.e. number of citations), non-bibliometric (i.e. peer review) or mixed, depending on the panel/discipline. With regard to journal articles, each panel/discipline has its own list of journals, proposed by each academic association and approved by ANVUR, ranked on the basis of journal impact factor or relevance to the field.

In the last VQR, universities were also assessed on their dissemination and public engagement activities (the so-called “third mission”); however, to date, performance metrics for these have not been established definitively. In parallel with the assessment of research and public engagement activities, ANVUR also manages a peer review-based quality assurance (QA) system under which the overall services and operations of universities, their research environments and their degree programmes are externally audited to determine whether quality standards are achieved. Under this system, universities are required by law to carry out self-assessment procedures and to set up specific organisational units and committees involved in QA activities.

Since 2014, the legislation has introduced a PBF, which currently accounts for about 28 per cent of a €7bn central government transfer to state universities (FFO – “Fondo di Finanziamento Ordinario”); the remaining amount is allocated on an expenditure (50 per cent) and FTE student (22 per cent) basis. Performance in the PBF is measured according to the VQR research scores of each university (including the research performance of newly hired professors), while teaching performance is measured by the number of regular students and by student mobility indicators.
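For illustration, the implied composition of the FFO transfer can be written as a simple decomposition (the euro amounts are approximations derived from the percentages reported above, not official budget figures):

$$\underbrace{0.28 \times \text{€7bn}}_{\text{PBF} \,\approx\, \text{€1.96bn}} + \underbrace{0.50 \times \text{€7bn}}_{\text{expenditure} \,\approx\, \text{€3.50bn}} + \underbrace{0.22 \times \text{€7bn}}_{\text{FTE students} \,\approx\, \text{€1.54bn}} = \text{€7bn}$$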

An important feature of the resource allocation system in Italian public HE is the budget authorisation for faculty hiring provided by the central government. This system is based on the allocation of full-time equivalents (FTE) of faculty and administrative staff members that each university is allowed to hire in the fiscal year on the basis of its financial performance, including funds achieved through VQR research performance (Francesconi and Guarini, 2018).

Within this funding scheme, since 2018, a specific fund of €1.355bn has been allocated on a competitive basis to 180 high-performing university departments in the VQR (the so-called “departments of excellence”) to support their five-year development plans, including research infrastructure development and hiring activities.

Quality accreditation is formally granted by ANVUR, and state grants account for approximately 60-70 per cent of university revenues; therefore, the emphasis on QA and on the VQR research performance of departments exerts considerable pressure on university governing bodies. Although Italian universities are free to decide on student numbers, they are required to follow specific faculty FTEs established by the government for each degree/programme, which must also be submitted to ANVUR for prior accreditation.

Furthermore, the careers of academic professors are strongly driven by their research performance. In particular, since 2013, access to tenured professorship positions (associate/full professor) in academia has been subject to a formal procedure of national qualification. This procedure is managed directly by ANVUR every two years through a randomly selected committee of professors in each scientific discipline. Access to this procedure is based on a standardised, quantitative pre-set level of publications, and the evaluation mainly considers research performance. At the university level, the hiring and career progression of academics is also subject to a supplemental public competitive recruitment procedure that is closely tied to the ability of candidates to meet quantitative and qualitative publication targets.

5. Findings and discussion

With the support of the concept map (Figure 2), this section presents and discusses the findings of this study. The first subsection analyses the features of PMS and the process of implementation within the university as a response to external pressures in the environment. The subsequent subsections discuss the way interviewees describe their views and reactions and the extent to which their behaviours are related to PMS. Before discussing the findings, let us first reiterate that organisational and individual behaviours emerge in relation to the academic logics (Thornton and Ocasio, 2008) and the effects of external and internal pressures (Figure 1). The ways in which academic logics shape and are shaped by the implementation of PMS are discussed in connection with the individual responses of interviewees (Thornton and Ocasio, 2008). The study’s findings show that discipline-specific research traditions are playing an important role in influencing individual academic responses and, in turn, are shaped by those responses. Differences are related to the types of research outputs that are most valued (i.e. journal articles vs monographs) and, more broadly, to the way academic research is conducted in the hard sciences and the social sciences.

It seems that in the setting analysed, two kinds of academic logics are in place. The first, held mainly by members of HSc departments, emerges as the prevailing logic because it has influenced the design of PMS in a way that is aligned with its academic norms and rules (e.g. bibliometric evaluation, publication by means of journal articles, etc.); the second, which is characterised by a low orientation towards the quantitative measurement of research performance and is typical of SoSc departments, is succumbing to the first. The response of individual academics to PMS is explained not only in relation to their affiliation to specific disciplines (i.e. HSc or SoSc), but also with regard to their career expectations. For academics in HSc departments, it is observed that their personal ambitions for career progression act as a trigger for behaviours that reinforce the typical norms and rules of their academic logics.

5.1 Organisational response: features of performance measurement systems

Since the university’s establishment, its governance body has placed strong emphasis on the quality of research and teaching, at both the departmental and individual levels; however, the departments themselves have essentially been left to decide what to do and how to do it within a traditional logic of autonomy. In the past, the assessment of professors’ performance was solely related to the specific values and rules of each discipline, and the allocation of resources to departments was managed collegially on a democratic basis, but with the rector (who had a scientific background in HSc) exerting a strong influence on decision-making. Traditionally, however, management and governance at the university have been conducted in a collegial manner with the involvement of professors in decision-making at different levels, which contributed to the recognition of the university as a good employer.

This situation started to change in 2013, when a new rector (with a scientific background in HSc) assumed office along with a new team of pro-rectors. The new governance was particularly inclined towards externally funded research and incentivised academics to apply for and win research grants. In particular, an ad hoc research fund was established to sustain grant submissions that achieved good external evaluations but were not funded.

The rector also presented a new vision of excellence based on performance, followed by a re-centralisation of departmental processes and the development of a PMS for assessing research and teaching activities.

Research performance measures started to gain importance in decision-making in 2013, following the publication of the results of the first round of VQR 2004-2010, in which the university achieved a high ranking. A relevant external driver was the decision of the central government to start using VQR research performance data for the allocation of resources to universities (PBF). After this decision was announced, the university’s governance body placed considerable emphasis on the rearrangement of internal research management control systems and on incentives for academics to be successful in the VQR. It seems that here, the alignment between NPM’s external pressures and the dominant values of academics sitting in the governing bodies favoured the inclusion of control mechanisms and performance metrics in the academic logic. The new pro-rector for research, a highly ranked and internationally well-known scientist, led the introduction of a new PMS for the allocation of faculty staff to departments, which was applied in 2014 and is still in use at the time of writing. The allocation was based on an internal ranking of departments resulting from a performance formula that precisely replicated the research and teaching indicators considered in the PBF, but gave higher weight to the former. In particular, 50 per cent of resources were distributed proportionally on the basis of departments’ VQR research scores, 30 per cent on the basis of indicators of teaching efficiency (number of regular students, completion rate and students’ international mobility) and 20 per cent on the turnover needs of departments. Because national-level VQR scores are renewed only every five years, in practice, for five consecutive academic years, departments that had achieved high VQR scores in the most recent national exercise inevitably attracted more professors and postdoctoral research positions, until new national-level VQR research scores became available. Other perspectives related to teaching outcomes, departments’ research and teaching needs and objectives, or third-mission activities are not considered. The new rules contributed to shaping the academic logic towards placing increasing value on research performance.
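On the basis of the weights reported above, the internal allocation formula can be stylised as follows; the exact functional form used by the university is not disclosed in the documents analysed, so this is only an indicative reconstruction, in which $S_d$ denotes the share of faculty resources allocated to department $d$:

$$S_d = 0.50\,\frac{R_d}{\sum_j R_j} + 0.30\,\frac{T_d}{\sum_j T_j} + 0.20\,\frac{N_d}{\sum_j N_j}$$

where $R_d$ is the department’s VQR research score, $T_d$ a composite teaching-efficiency indicator (regular students, completion rate and international student mobility) and $N_d$ its staff turnover needs. A formula of this kind makes the five-year persistence effect noted above mechanical: $R_d$ is fixed between VQR rounds, so the research component of each department’s share cannot change until new national scores are published.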

In parallel with the measurement of research activities, the university implemented a QA system for teaching activities based on the output performance indicators required by ANVUR. The university’s governing bodies placed strong emphasis on measuring customer satisfaction with teaching and, since 2017, a mandatory minimum threshold teaching performance score has been required for the triennial salary increase of tenured professors. Student satisfaction data are publicly disclosed on the university website at departmental and course levels. However, as will be discussed in the following sections, it appeared that teaching performance metrics were not perceived by professors to be strong incentivising mechanisms, because they were considered neither in departmental resource allocation nor in career advancement decisions. Table II summarises the features of the PMS identified in the case.

The design of PMS within the university was developed by its governing bodies in a way that recreated within the organisation the very external factors – the measurement systems implemented by the Italian government – that it sought to deal with. This approach was not only the result of adaptive isomorphism (DiMaggio and Powell, 1983; Parker, 2011) in response to external pressures emerging from funding mechanisms, but also a result of the specific prevailing academic logic and the way academic power was exerted by professors who held positions in the governance bodies.

The process of isomorphic adaptation took place over a number of different steps. VQR results and the related PBF created disturbances to the established internal value system of the university. To cope with these changes and maintain its financial position in this environment, the governing bodies had an incentive (coercive isomorphism) to develop an internal PMS by linking the research performance, which they wished to control, to the same metrics used by the external VQR and performance-based funding. The intention of the new system was to align department goals with the university’s priority of succeeding in VQR grant competition, as a response to external pressures. Because Italian universities cannot increase tuition fees, changes in the size of research block grants can have significant effects on the budget of any university. It should not be surprising that the university sought to align the two systems because in designing performance management systems, organisations take into account the key factors relevant to their success (Ferreira and Otley, 2009; Johanson et al., 2019).

The following extract from the university strategic plan provides evidence of how the PMS is shaped by the external environment as a condition for survival and achievement of the university mission:

The internal allocation of resources is based on performance and transparency. We follow the principle of accountability so that public money achieved by the university for research performance will be re-allocated to departments according to their results. This approach will strengthen the virtuous cycle between scientific productivity, reward, and reinvestment in research capabilities to be successful in grant competition. On the basis of this premise, we aim to develop new interdisciplinary research to create value for research and teaching innovation (University Strategic Plan 2018-2020).

In such a way, the external performance assessment mechanism of VQR was internalised into the university PMS by creating a system of financial incentives and rewards targeted at departments. The internal ranking of departments for the allocation of FTE faculty staff thus became a calculative mechanism at the disposal of the rector for legitimising the new set of values (i.e. being successful in research and funding), thereby facilitating the incorporation of some aspects of the business logic into the academic logic.

In 2017, the university was again ranked among the top universities in Italy in several disciplines according to the VQR 2011-2014, which reinforced the rector’s perception that giving greater emphasis to research performance metrics in faculty-related decisions was the right strategy. Again, the PMS provided some illusion of objectivity for the governing bodies and made performance appear easier to control:

Our performance measurement system is designed and developed collectively and is grounded on evidence. For example, the assessment of scientific performance follows the criteria of VQR and, as such, stated by an external body at national level (University Strategic Plan 2018-2020).

Moreover, it seems that the exercising of power may also be implicated in this process (Hoffman, 2011). Leading actors in the university may try to impose their personal vision of academic work by claiming legitimacy for the PMS based on its relationship with the external environment (Moll and Hoque, 2011):

Our PMS is designed along the metrics used in PBF. Part of the state grant achieved on an FTE cost per student basis is useful to fund operations related to the number of students, but it is not performance-based. We are a research university, so we can’t allocate FTE faculty hiring according to the number of students in each department. Our teaching activity is driven by research (Authors’ interview, Francesconi and Guarini, 2018).

Power may also be instrumental for specific academic groups sitting in governing bodies to take advantage of the PMS to obtain more resources. Academics in governance roles hold managerial responsibility and, as such, their institutional decisions are naturally influenced by their own academic logics. This kind of influence seems to have occurred at the university in decisions related to the design of the PMS.

It is worth noting that the eight interviewed department heads declared that they were not involved in the design of the formula-based mechanism for resource allocation. The analysis of the board meeting minutes reveals that the decision to assign a higher weight to research metrics in the formula was driven by the rector and other professors with hard sciences backgrounds sitting on the board, and no official discussion about the effectiveness of such a system was reported. This also underlines how external pressures have been incorporated into the academic logics and how the process has been influenced by the power held by key actors.

Moreover, most of the university’s revenues are related to the number of students rather than to research performance (Figure 3), which means that teaching is now cross-subsidising research at the university.

The university’s turnover was €235.3m in 2018, made up of €90.4m (38 per cent) from the government block grant, €44.7m (19 per cent) from research income (VQR and other competitive grants), €79.7m (34 per cent) from teaching, of which about €34.1m was funded by the Italian government through the FTE cost-per-student grant, and €20.5m (9 per cent) from miscellaneous other sources (authors’ calculation based on the university financial report, 2018).
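As a back-of-envelope check, the percentage composition reported above can be reproduced directly from the stated amounts; the category labels below are paraphrased from the text.

```python
# Reproduces the 2018 income composition percentages from the amounts (in EUR m)
# cited in the text from the university financial report.

income = {
    "government block grant": 90.4,
    "research income": 44.7,
    "teaching": 79.7,
    "other sources": 20.5,
}

turnover = sum(income.values())
print(f"turnover: EUR {turnover:.1f}m")  # turnover: EUR 235.3m
for source, amount in income.items():
    # Rounded shares: 38%, 19%, 34% and 9% respectively.
    print(f"{source}: {amount / turnover:.0%}")
```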

Likewise, about 39 per cent of the FTE faculty hiring authorisation awarded by the central government to the university is driven by the number of students enrolled (i.e. tuition fees and the state grant related to FTE cost per student), whereas research performance-related revenues account for about 13 per cent of the government authorisation (authors’ calculation based on central government data).

Hence, in contrast with previous literature, which has argued that the emphasis on performance measurement within universities was an effect of NPM logic and tools adopted by university managers (Neumann and Guthrie, 2002; Parker, 2011), in this case there is evidence that such emphasis may occur even when professors – those who are members of the university governance bodies (e.g. heads of departments and academic staff representatives) – are fully in charge of the design of the PMS. Previous studies (Kurunmäki, 2004) have suggested that in organisational contexts characterised by the dominance of professionals, competencies in the design of PMS can also be developed by non-accountants (Orton and Weick, 1990). This was not the case here, because the professors involved in the university’s governance had not taken part in any learning initiative aimed at developing competences associated with university PMS.

5.2 Individual responses to performance measurement systems

Previous research (Kallio et al., 2016) has underlined the need to understand how academics react to and make sense of the shift caused by managerialism and PMS within HE. The following sub-sections analyse and discuss the respondents’ explanations of how they responded to external pressures and the internal demands exerted by PMS and the extent to which their behaviour was related to PMS.

The way in which academics in the present study have dealt with institutional pressures seems to have been guided by their individual career expectations. Indeed, “individuals plan and prioritize different academic activities to reach the maximum output” (Grossi et al., 2019a, p. 16). Interviewees highlighted the importance of career advancement in driving their research choices, with differences noted between HSc and SoSc faculties:

In my field of research there are very few top-ranked journals. I tried to publish many of my research outputs in these journals. In my opinion other journals would have been more relevant but I had to choose a top-ranked journal. Maybe when I’ll become a full professor, I will change my behaviour and avoid these “quantitative cages”. I use the term “cage” because the metrics are conditioning me in all my choices (male associate professor – SoSc).

For our career promotion, we have to demonstrate a number of articles ranging from 6 to 11, depending on the discipline, and a minimum number of citations (female assistant professor – HSc).

All faculty members acknowledged the relevance of the PMS, claiming familiarity with the system and good knowledge of the way performance measures are used within the university with regard to individual research performance requirements.

Although interviewees recognised that the emphasis on performance measurement has greatly increased the transparency and objectivity of academic performance considerations in career advancement, as opposed to practices of nepotism, they also expressed criticism of procedures and methods (e.g. the use of journal rankings). Interviewees emphasised that the new PMS is affecting their academic priorities and actions with different intensities and nuances, depending on the specific traditions of their discipline and their individual career expectations. Because the university incorporated into its PMS the performance metrics used for VQR and funding assessment, it was difficult to distinguish between individual responses driven by external quantitative metrics and those driven by internal organisational demands, which sometimes appeared to overlap.

Three different types of individual reaction emerged from this analysis. The first type of reaction, labelled “detachment”, was reported by two interviewees who claimed a lack of interest in changing their behaviour as a consequence of the new performance metrics. They described their motivation in terms of their inherent notion of academic work, or in terms of criticism of the current research assessment systems:

I think that managing operations as a coordinator of a degree programme, as well as finding solutions to everyday problems, is for me more important than QA performance metrics per se (female full professor – SoSc).

[…] I do exactly the job I have to do, unfortunately I did not get the qualification to become a full professor. I don’t trust VQR and its research performance metrics. Publications are valued with different metrics in VQR and in the national qualification assessment (male associate professor – HSc department).

The second type of reaction has been categorised as “business as usual” and was identified in seven interviewees from HSc departments who did not report any particular change in their research behaviour. Consistent with previous research (Reale and Seeber, 2011), respondents associated this reaction with the fact that they were already acquainted with the use of quantitative measures in research evaluation, or with the fact that their performance was rated excellent in VQR and in the university’s internal ranking for resource allocation. It therefore seems that the PMS implemented by the university was consistent with their academic logic and did not require any relevant modification of their behaviour.

Third, several other academics, from both HSc (six) and SoSc (seventeen) departments, reported a radical or partial change in their ways of doing research and of prioritising academic activities as “a consequence of VQR and PMS implemented by the university”. This reaction has been categorised as “epistemic reorientation” because of the nature of the influence that quantitative metrics exert on academic practices of knowledge production. The features of this response are described below.

Interviewees’ comments and the analysis of data were compared to check for differences between disciplines and departments. This analysis revealed consistent patterns in the responses of interviewees from HSc and SoSc working in departments with different levels of research performance. Indeed, interviewees who reported a radical reorientation of their behaviour (twenty-three) were primarily from SoSc “not excellent” departments (ten), with low scores in both the VQRs and the university’s internal ranking, and from HSc “not excellent” departments (three). “Reorientation” also characterised interviewees from “excellent” SoSc (seven) and HSc (three) departments, especially assistant and associate professors with career expectations (seven). In line with the proposed theoretical framework (Figure 1), it cannot be excluded that reorientation leads to changes in the individual’s academic logic which, in turn, further shapes individual responses until they become “business as usual” behaviour.

Although it is not possible to identify causality using the current research design, these findings suggest that the PMS has to some extent re-oriented academics and departments as a consequence of individuals’ efforts to align their performance with the new metrics, whilst attempting to maintain access to resources and maximise individual career outcomes. This is not surprising, because the PMS aims to re-orientate individual behaviours. Moreover, the analysed case was selected because of the strong importance assigned to research assessment, which also helps to explain why reorientation is the most predominant reaction to the changes introduced by the PMS.

Previous research by Sutton and Brown (2016, p. 595) has underlined that in resource-constrained organisations, such as universities, incentive systems based on research work-related rewards (e.g. increased research funding and staffing, career advancement) are very effective in driving academic behaviour towards undertaking greater volumes of research-related activities. In the analysed case, the strong linkage between VQR research performance metrics and departmental resource allocation, combined with research performance requirements for career progression, has created a “closed incentive system” that values only individual research performance. In such a system, performance metrics become an instrumental device for valuing the allocation of time to research, teaching and third-mission activities. Although this incentive scheme may be rewarding for highly motivated researchers, it may surreptitiously alter commonly accepted notions of academic work and the university mission. The following quotes are indicative of this change in academic logic when academics decide priorities in their academic agenda:

In the past I spent a lot of time supervising students, but now I have to focus my efforts on research […] I had to study a lot to change my research subject, so I had to reduce teaching activities (female assistant professor – HSc).

I am fully involved in many activities; teaching is the last of my thoughts, unfortunately. I’m not satisfied, but on the other hand I cannot dedicate more time (female full professor – SoSc).

Now, it is more difficult to find colleagues willing to collaborate in teaching activities as these are not very valued in internal resource allocation (female full professor – HSc).

The current teaching performance indicators are mainly related to outputs and do not appear useful for evaluating the effects of this reorientation in attitude. When respondents were asked about departmental demands on their individual performance, all of them mentioned only VQR and research productivity metrics. This was particularly underlined by academics from departments in low positions in the internal FTE faculty allocation ranking, suggesting awareness of the increasing internal pressures related to the use of research metrics in university decisions. VQR aims to assess the research quality of departments and universities, rather than of individuals. However, because VQR metrics have been institutionalised within the university as a basis for resource allocation, these measures also affect the behaviour of individual academics.

Teaching performance was mainly assessed on the basis of student satisfaction rates, and the target appeared neither challenging nor consequential if it was not met. In this case, rhetoric prevails over substance. It is worth emphasising that although this metric was included in the salary advancement rules for faculty members, the required threshold was not difficult for academic staff to meet. Moreover, there is not yet any formal linkage between individual teaching performance metrics and career promotion:

Teaching performance is not very relevant either at departmental or university level. Above all, nobody cares if you get low performance scores (female associate professor – SoSc).

I would say that we are perhaps more accountable for research, because there is no real accountability on the level of teaching at the department level. Yes, we have measures of student satisfaction for each course, but you are required to take action only if you get low scores (male full professor – SoSc).

We have learned the hard way that VQR’s metrics do matter. My department got a low performance score in the last VQR so that now we can’t hire new staff because of the University’s resource allocation system. I had to raise awareness of the importance of dealing with research performance metrics above all. Now professors have started to be aware, also because their results impact the entire department (female department head – SoSc).

Similar comments were made regarding third-mission activities, with interviewees stating that these are valued neither for career advancement nor for performance funding, and that the relevant performance metrics are still under development.

VQR research metrics also seem to have altered the process of knowledge production in terms of the types of publications produced and the way research is conducted. Changes in publication behaviour were identified by interviewees from SoSc – from both “excellent” and “not excellent” departments – who reported abandoning the publication of books in favour of journal articles. These changes were attributed to the assessment criteria adopted by VQR, which conflicted with their discipline’s evaluation traditions:

I focus more on ranked journals. This seems to me a nonsense. In the past, books were well considered. Now, a research monograph is valued less than an article published in a journal. Writing a good book takes a lot of time. In the past this effort was valued; now it is not (male assistant professor – SoSc).

There is also evidence that these metrics are influencing academics’ selection of the journals in which to publish, an issue that has also emerged in other contexts (Gray et al., 2002; Parker et al., 1998; Parker and Guthrie, 2005; Saravanamuthu and Tinker, 2002). For example, the choice of publishing outlet is influenced by journal rankings and related metrics:

My publishing choices are influenced by journal ranking. Sometimes I choose well-known open access journals if I need to publish quickly (male assistant professor – HSc).

[….] there is an obsessive attention on where to publish (female full professor – SoSc).

In my research group, we look for the most cited journals; it’s like a business. In the past, you got a research result and just wanted to publish it, now you start with a publication strategy (male full professor – HSc).

Because academics have an incentive to choose topics that maximise their research performance indicators (Martin, 2011), the use of metrics has important epistemic consequences for the planning of research projects:

It is very likely that researchers are oriented to follow more fashionable research topics, i.e., more fertile research streams in terms of publications (male associate professor – HSc).

In our field there are fashionable topics which you can rely on in order to publish in top-ranked journals (female full professor – SoSc).

I had to switch my previous research topic because it was not in the mainstream. Now I’m fine, I managed to get in touch with colleagues I work well with, but I know other academics who have not been as successful as I was (female assistant professor – HSc).

Again, it seems that research performance metrics also bias the dynamics of research collaboration, in that early-career scholars have an incentive to select only research groups and topics that maximise their chances of publishing and obtaining research funds, in order to ensure career progression. The current research requirements for career development in Italy, in which publishing in top-ranked international journals is pivotal for certain disciplines, are acting as an important coercive isomorphism factor influencing individual academic responses:

Strong research groups always tend to become stronger and stronger and influence the career of young scholars. My research group is small and led by an assistant professor, so it is more difficult to proceed with career advancement (male assistant professor – HSc).

Research groups are also important. Small research groups have been marginalised. If you do research in a strong group of colleagues, then your research productivity improves a lot (male assistant professor – HSc).

In the past I was not in a strong research group and so I’m still an assistant professor (male assistant professor – HSc).

We must also consider that it is very important to accumulate citations in a short time. The more people participate in the work, the more quotes you can get (female assistant professor – HSc).

Metrics are also very important for funding; you do not get funds if you are not well placed in a group, you have to jump on the “wagon of the winners” to be able to publish well and get funded (female assistant professor – HSc).

Although choosing journals by looking only at rankings is believed to be counterproductive to the advancement of knowledge and somewhat detrimental to knowledge dissemination (the “third mission”), metrics drive such decisions regardless of this selection bias. The following statements illustrate this contradiction:

In my field of research there are very few top-ranked journals. I tried to publish many of my research outputs in these journals. In my opinion other journals would have been more relevant but I had to choose a top-ranked journal. Maybe when I become a full professor, I will change my behaviour and avoid these “quantitative cages”. I use the term “cage” because the metrics are conditioning me in all my choices (male associate professor – SoSc).

Educating future leaders is our most important responsibility. As a researcher I have to publish in top-ranked journals. I do not publish any more in any other journals, so I don’t do science, I don’t create culture. Here is the paradox: we create less culture; culture is produced independently of the fact that I publish in top-ranked journals. I can develop culture by writing in a newspaper, doing dissemination, making conferences, seminars, etc. (male department head – SoSc).

Another argument raised by interviewees from SoSc departments is that the higher scores attributed to high-impact international journals in the VQR have led to the progressive abandonment of research topics considered more relevant at the national, rather than international, level, as well as to the decline of important Italian journals:

[…] there has been an impoverishment of research activity useful to our industries and banks in the national context. We are focused only on international mainstream research topics (female full professor – SoSc).

We do not publish any more research in Italian. We only publish for international English-language journals. Nothing else (male associate professor – SoSc).

Each discipline has different research peculiarities. The main issue in our field [banking and finance] is that we have a very limited number of top-ranked journals in comparison with other management fields. Moreover, it has become increasingly difficult to publish in these few top journals, whereas good Italian journals are low-ranked according to the H-index (male associate professor – SoSc).

This type of “reorientation” also seems to be encouraged by SoSc department heads, who reported that they are allocating departmental funds to proof-editing services in order to support publication in English-language journals. This is a clear example of how certain social and normative values embedded in PMS change actors’ organisational routines and, at the same time, reinforce the prevailing institutional logic.

Furthermore, reorientation was also a response to the demand for increased research productivity, which was reported by academics from both HSc and SoSc “not excellent” departments (Figure 4). Consistent with previous studies (Creswell, 1985; Creamer, 1998; Edgar and Geare, 2013; Gendron, 2008; Leech et al., 2015; Martin-Sardesai and Guthrie, 2018), reorientation has resulted in an increased quantity of publications:

The existence of a research evaluation system pushes you to work harder. Since the VQR has been implemented, I have set quantitative targets that have led me to publish more (male full professor – SoSc).

My research productivity has increased in recent years, compared to what it was in the past years. I well remember that some years ago there were tenured colleagues who did not publish at all (female full professor – SoSc).

Imagine, for example, that we have data for a very significant research work which is very risky and uncertain in terms of results or two medium-quality works that are fast and easy to publish, what should we do? We decide on two medium-quality works (female full professor – HSc).

Finally, what emerged from the interviews is that PMS have affected faculty from the social sciences more significantly than those from the hard sciences, who appeared more at ease with the system. This also shows that the academic logic, although characterised by common values (such as collegiality), bears different nuances in different fields of research.

6. Conclusion

The role of HE institutions has changed over time. In recent decades, globalisation trends and NPM reforms have required a shift from internally oriented PMS towards explicit, externally oriented systems. Research performance metrics, teaching evaluations and quality reviews are now common PMS introduced by governments in several countries to monitor universities and departments and make them more accountable (Agyemang and Broadbent, 2015; Parker, 2011). This study investigates how PMS have been implemented within an Italian public university in its attempt to respond to this changing environment. It describes the impact that government policies actually had on the design of the PMS and on individual academic behaviours, according to the views of the academics interviewed. The first research question of this study was how external pressures for performance measurement in HE are translated into organisational and individual academic responses in a university setting. The organisational action in this case seems to conform to the external institutional pressures in order to obtain legitimacy, but there are some variations in the way individuals perceive these pressures. At the organisational level, the response of the university was to adopt PMS tools that replicated the metric system used by the government to assess and reward the performance of universities. This process was facilitated by the fact that government control over universities was strongly based on research performance and aligned with the academic logic held by the professors sitting in governing bodies. The external pressures therefore acted as a catalyst: when a group of professors carrying a specific academic logic is in charge of a university’s governing bodies, it can take advantage of that position to implement managerial tools (PMS in the present case) that reinforce its orientation. According to the literature, the academic and business logics are often conflicting in nature (Grossi et al., 2019a), but in the analysed case they mutually reinforced each other and contributed to the affirmation of specific rules and routines. Further research is needed to confirm or refute these findings in other national and international contexts.

At the individual level, three different types of response from academics to PMS emerged from this analysis: detachment, business as usual and reorientation.

The findings suggest that PMS have to some extent successfully re-oriented academics and departments in their efforts to align performance with the new metrics, maintain access to resources and maximise individual career outcomes. Different disciplines’ research traditions – hard sciences versus social sciences – and the related notions of academic work seem to have influenced individual academic responses. First, reorientation was identified in greater numbers in departments that were not well ranked (“not excellent”) and in social sciences faculties unaccustomed to the new logic of quantitative evaluation. Second, reorientation was associated with several behavioural attitudes already reported by previous studies, such as increased research productivity (Creswell, 1985; Creamer, 1998; Edgar and Geare, 2013; Gendron, 2008; Leech et al., 2015; Martin-Sardesai and Guthrie, 2018) and greater emphasis on publication outputs and outlets (Gray et al., 2002; Parker et al., 1998; Parker and Guthrie, 2005; Saravanamuthu and Tinker, 2002). In particular, interviewees’ responses drew attention to certain behaviours stimulated by the metrics used for research assessment that might have important epistemic consequences for the process of knowledge creation and may also alter notions of academic work. Performance measures have introduced competition and uncertainty into this organisational environment, and these factors have influenced academics, especially those with career expectations. Academics characterised by “reorientation” reported that they have started selecting research topics and groups for which the chances of publication and funding are maximised. This behaviour was observed in respondents from both the hard and the social sciences. In particular, reorientation in HSc respondents was related to the need to achieve their full career potential. The reorientation, in this case, seems to have reinforced, rather than altered, the values and rules of the discipline’s traditions.

Academics from the social sciences, conversely, claimed an alteration of their publication tradition, resulting in the abandonment of books and book chapters in favour of journal articles, published specifically in international journals.

This epistemic reorientation involves two other significant unintended consequences. The first is that the pressure on research performance is reducing academics’ focus on, and commitment to, teaching and students, as well as third-mission activities. The second consequence, which seems to characterise academics from the social sciences, concerns the increasing irrelevance of research topics close to the interests of domestic stakeholders, because such research may not be of interest to international journals or because domestic journals are not well ranked.

This phenomenon can significantly modify the relationships between universities and local actors and communities, an issue that seems important for future research in non-English-speaking countries.

RQ2 asked how the features of PMS may condition individual responses. The effects of performance measurement in university settings have long been debated and analysed from several perspectives. In particular, studies have investigated the impact of academic culture on PMS and the unintended effects of PMS (Grossi et al., 2019a). Within this field of research, the present study contributes to the existing literature by clarifying how these effects may be associated with the way in which the organisation has designed and operated its internal PMS in response to institutional and environmental pressures.

Within this perspective, two further findings are of relevance. The first is that the PMS implemented within the university was built on the same research quality metrics embedded in the external, government-led research performance assessment and funding models. This approach generated a system of control in which such metrics were used to create an internal ranking of departments, and successful units were rewarded with additional faculty staff. In attempting to protect its mission from external disturbances (i.e. to be successful in national PBF awards), the university internalised the same logic of external output control and pressure to which it was itself subject. This adaptation inevitably emphasised the value of research at the expense of teaching at both the departmental and individual levels, because research quality measures were the primary measures used for resource allocation and career advancement. On this issue, the present study provides a counter-example to the argument of NPM-critical studies, in that dysfunctional effects of research performance measures in universities can occur even when academics (members of the university governing bodies), rather than administrative bureaucrats, take full responsibility for the design of the PMS.

The second relevant finding is that the university’s internalisation of external performance management systems can be a matter of power (i.e. a PMS imposed by the new rector and the academics in the governing bodies) and can thus be instrumental to powerful academic groups in promoting their notions of academic work or taking advantage of the system, while claiming the legitimacy of the internal PMS in relation to environmental pressures. Once again, the particular academic logics and research traditions of specific disciplines (Kallio et al., 2016) played an important role in influencing the way the PMS was designed and implemented.

It seems, then, that the relationship between institutions (the PMS in the present case) and individual actions (here, the answers of the interviewees) resulted in increased prestige for individual actors and groups in departments judged to have achieved research excellence (i.e. they were rewarded with additional funds and faculty staff) and in an institutional logic (Friedland and Alford, 1991) that gives more value to research than to other university activities.

In the case analysed, the data indicate that academics respond differently to the same set of system requirements, based on their academic logic, i.e. how they view academic work and what their particular internal drivers are. Future research could further examine the relationship between individuals’ values and expectations and their responses to competing institutional pressures, and whether the present study’s findings apply to a wider range of universities. This brings the focus of research back to individuals.

A question arising from this study that is worth exploring in future research is how PMS should be designed in a university setting to reduce or prevent dysfunctional effects on individual academics. Moreover, because the performance metrics used for research assessment and the allocation of funding affect universities’ design of internal PMS, an important issue relates to the governance of performance measurement, at both the HE system and organisational levels. This theme highlights the importance of policy makers and university leaders clarifying the role of contemporary publicly funded universities and departments in society, the stakeholders to whom these institutions have a responsibility, and how these roles and responsibilities should be fulfilled. In addition, clarification is needed on whether there should be uniformity or diversity in institutional missions and, above all, on how this diversity should be measured and rewarded.

Further research could investigate, for example, which stakeholders should be involved in the design of PMS, the roles played by different disciplines and the transparency of the process itself. The issue of academic power seems relevant to this type of inquiry, considering that in certain contexts – such as Italy – academics might play a dual role, as both key leading actors in the design of PMS within their universities and representatives of their academic group in research assessment at the national level.

The study has some limitations that should be considered when interpreting the findings. It does not analyse the actual behaviour of academics but rather the way they describe their perceptions of the impact of PMS. Future studies should also consider potential heterogeneity within each hard and social science discipline, as well as heterogeneity in departmental and individual academic performance in research, teaching and outreach activities. Finally, this study is confined to a single university case and context, and the results therefore cannot be generalised to other universities in Italy or abroad. Further studies conducted in different settings would thus be valuable to validate the findings of this study.

Figures

Figure 1. Theoretical framework

Figure 2. Individual response to PMS: themes and emerging categories from interviews

Figure 3. Business model’s income of the university, 2008-2018 (millions of euro)

Figure 4. Research productivity trends of the university (output per tenured staff)

Table I. Interviewee selection

Departments | Scientific field | Respondents and tenure
Excellent
SoSc1 | Economics and applied economics | 3 full, 4 associates
SoSc2 | Sociology |
HSc1 | Biology and chemistry | 4 full, 3 associates, 2 assistants
HSc2 | Ecology, zoology, geology |
Not excellent
SoSc3 | Accounting, banking and finance, business law | 6 full, 1 associate, 4 assistants
SoSc4 | Statistics and economic applied statistics |
HSc3 | Informatics and computer sciences | 1 full, 4 associates
HSc4 | Theoretical and experimental physics |

Notes: SoSc = social sciences; HSc = hard sciences

Table II. Organisational response: external pressures and features of PMS in the analysed case

External pressure: Research evaluation (VQR)
Use: ranking of universities and departments in each discipline (since 2014)
Metrics: research performance score; journal ranking
Feature of PMS at the university: VQR research performance score used for internal ranking of departments (since 2014)

External pressure: PBF
Use: allocation of block grant to universities (since 2014); allocation of FTE faculty to universities (since 2014); ranking of universities (formula-based)
Metrics: VQR research performance score; efficiency of teaching indicators (regular student rate, internationalisation rate)
Feature of PMS at the university: PBF metrics used for FTE faculty allocation to departments (formula-based) (since 2014)

External pressure: Professorship qualification
Use: assessment of individual research performance (since 2013)
Metrics: peer-review opinion; bibliometric indicators; journal ranking
Feature of PMS at the university: professorship qualification metrics used for recruitment (since 2014)

External pressure: QA
Use: quality assessment of universities (since 2013)
Metrics: quality standards; output of teaching (number of regular students, completion rate, etc.); student satisfaction rate; third-mission outputs
Features of PMS at the university: student satisfaction rate used as a threshold for salary improvements (since 2017); monitoring of departmental and degree programme metrics (since 2013)

Themes, categories and example interview quotes

Theme: Organisational response: features of PMS at the university level

University strategy: “Our PMS is designed along the metrics used in PBF. Part of the state grant achieved on an FTE cost per student basis is useful to fund operations related to the number of students, but it is not performance-based. We are a research university, so we can’t allocate FTE faculty hiring according to the number of students in charge of each department. Our teaching activity is driven by research” (Rector)

Research performance metrics: “It is not an easy task to identify reliable metrics for comparing the quality of very different scientific outputs. It is necessary to refine the metrics” (male full professor – SoSc)

Internal ranking of departments: “We have learned the hard way that VQR’s metrics do matter. My department got a low performance score in the last VQR so that now we can’t hire new staff because of the University’s resource allocation system. I had to raise awareness of the importance of dealing with research performance metrics above all. Now professors have started to be aware, also because their results impact the entire department” (female department head – SoSc)

Theme: External pressures

Research assessment: “I focus more on ranked journals. This seems to me a nonsense. In the past, books were well considered. Now, a research monograph is valued less than an article published in a journal. Writing a good book takes a lot of time. In the past this effort was valued; now it is not” (male assistant professor – SoSc)

Career rules and performance metrics: “The national qualification in my discipline is based on bibliometric indicators, so scholars have been forced to produce a few fashionable and important research projects to ‘slice up’ in several outputs, in which moreover you cite one another” (female assistant professor – HSc)

Journal ranking: “My publishing choices are influenced by journal ranking. Sometimes I choose well-known open access journals if I need to publish quickly” (male assistant professor – HSc)

Competitive funding: “We are urged to pay attention to bibliometric indexes for selecting the best journals for our research outputs in order to have a better chance to be rewarded and to get funds” (male associate professor – HSc)

Research competition: “We must also consider that it is very important to accumulate citations in a short time. The more people participate in the work, the more quotes you can get” (female assistant professor – HSc)

Theme: Individual academic responses to PMS

Detachment: “I think that managing operations as a coordinator of a degree programme, as well as finding solutions to everyday problems is for me more important than QA performance metrics per se” (female full professor – SoSc)

Business as usual: “It is important to identify the right outlet for scientific contributions, but it has always been important, it’s not new” (female associate professor – SoSc)

Epistemic re-orientation: “In the past I spent a lot of time supervising students, but now I have to convey my efforts on research … I had to study a lot to change my research subject, so I had to reduce teaching activities” (female assistant professor – HSc); “In my research group, we look for the most cited journals; it’s like a business. In the past, you got a research result and just wanted to publish it, now you start with a publication strategy” (male full professor – HSc); “[…] there has been an impoverishment of research activity useful to our industries, banks in the national context. We are focused only on international mainstream research topics” (female full professor – SoSc); “We do not publish any more research in Italian. We only publish for international English-language journals. Nothing else” (male associate professor – SoSc); “The existence of a research evaluation system pushes you to work harder. Since the VQR has been implemented, I have set quantitative targets that have led me to publish more” (male full professor – SoSc)

Appendix 1. Data sources

  • Allocation of annual FTE – Years 2008-2018, Ministry of Education, University and Research, www.miur.gov.it/;

  • University’s financial reports 2008-2018;

  • HE statistics: http://ustat.miur.it;

  • VQR research performance (VQR 2004-2010, 2011-14) https://www.anvur.it;

  • University administrative rules;

  • University strategic plan, Years 2015-2020;

  • University research output data, 2009-2017;

  • University annual allocation of FTE faculty; and

  • University research performance – internal department ranking, 2014-2018.

Appendix 2. Themes and emerging code categories from interviews

Table AI

References

Agyemang, G. and Broadbent, J. (2015), “Management control systems and research management in universities: an empirical and conceptual exploration”, Accounting, Auditing and Accountability Journal, Vol. 28 No. 7, pp. 1018-1046.

Ahrens, T. and Khalifa, R. (2015), “The impact of regulation on management control. Compliance as a strategic response to institutional logics of university accreditation”, Qualitative Research in Accounting and Management, Vol. 12 No. 2, pp. 106-126.

Alvesson, M. (2003), “Beyond neopositivists, romantics and localists: a reflexive approach to interviews in organizational research”, The Academy of Management Review, Vol. 28 No. 1, pp. 13-33.

Anderson, D., Johnson, R. and Saha, L. (2002), “Changes in academic work: implications for universities of the changing age distribution and work roles of academic staff”, Department of Education, Science and Training, available at: www.dest.gov.au/common_topics/publications_resources/All_Publications_AtoZ.htm (accessed 5 May 2008).

Anderson, G. (2006), “Carving out time and space in the managerial university”, Journal of Organizational Change Management, Vol. 19 No. 5, pp. 578-592.

Bastedo, M.N. (2009), “Convergent institutional logics in public higher education: state policy making and governing board activism”, The Review of Higher Education, Vol. 32 No. 2, pp. 209-234.

Bazeley, P. (2010), “Conceptualising research performance”, Studies in Higher Education, Vol. 35 No. 8, pp. 889-903.

Behn, D. and Kant, P.A. (1999), “Strategies for avoiding the pitfalls of performance contracting”, Public Productivity and Management Review, Vol. 22 No. 4, pp. 470-489.

Bellamy, S., Morley, C. and Watty, K. (2003), “Why business academics remain in Australian universities despite deteriorating working conditions and reduced job satisfaction: an intellectual puzzle”, Journal of Higher Education Policy and Management, Vol. 25 No. 1, pp. 13-28.

Borlaug, S.B. and Langfeldt, L. (2019), “One model fits all? How centres of excellence affect research organisation and practices in the humanities”, Studies in Higher Education, doi: 10.1080/03075079.2019.1615044

Brunsson, N. and Sahlin-Andersson, K. (2000), “Constructing organizations: the example of public sector reform”, Organization Studies, Vol. 21 No. 4, pp. 721-746.

Burns, J. and Scapens, R.W. (2000), “Conceptualizing management accounting change: an institutional framework”, Management Accounting Research, Vol. 11 No. 1, pp. 3-25.

Carmona, S., Ezzamel, M. and Gutierrez, F. (1998), “Towards an institutional analysis of accounting change in the Royal Tobacco Factory of Seville”, Accounting Historians Journal, Vol. 25 No. 1, pp. 115-147.

Carruthers, B.G. (1995), “Accounting, ambiguity, and the new institutionalism”, Accounting, Organizations and Society, Vol. 20 No. 4, pp. 313-328.

Chandler, J. (2008), “Academics as professionals or managers? A textual analysis of interview data”, Qualitative Research in Accounting and Management, Vol. 5 No. 1, pp. 48-63.

Christopher, J. (2012), “Governance paradigms of public universities: an international comparative study”, Tertiary Education and Management, Vol. 18 No. 4, pp. 335-351.

Claeys-Kulik, A.L. and Estermann, T. (2015), DEFINE Thematic Report: Performance-Based Funding of Universities in Europe, European University Association, Brussels, Belgium.

Conrath-Hargreaves, A. and Wustemann, S. (2019), “Multiple institutional logics and their impact on accounting in higher education. The case of a German foundation university”, Accounting, Auditing and Accountability Journal, Vol. 32 No. 3, pp. 782-810.

Creamer, E.G. (1998), “Assessing faculty publication productivity: issues of equity”, ASHE-ERIC higher education report 26 No. 2, Association of the studies of Higher Education, Washington, DC.

Creswell, J.W. (1985), “Faculty research performance: lessons from the sciences and the social sciences”, ASHE-ERIC higher education report No. 4, Association of the studies of Higher Education, Washington, DC.

Creswell, J.W. (2014), Research Design, 4th ed., Sage, Thousand Oaks CA.

Czarniawska, B. and Genell, K. (2002), “Gone shopping? Universities on their way to the market”, Scandinavian Journal of Management, Vol. 18 No. 4, pp. 455-474.

De Bruijn, H. (2008), “Managing performance in the public sector”, Public Administration, Vol. 86 No. 3, pp. 863-865.

Dent, J.F. (1991), “Accounting and organizational cultures: a field study of the emergence of a new organizational reality”, Accounting, Organizations and Society, Vol. 16 No. 8, pp. 705-732.

DiMaggio, P.J. and Powell, W.W. (1983), “The iron cage revisited: institutional isomorphism in organizational fields”, American Sociological Review, Vol. 48 No. 2, pp. 147-160.

Dobija, D., Gorska, A.M., Grossi, G. and Strzelczyk, W. (2019), “Rational and symbolic uses of performance measurement: experiences from Polish universities”, Accounting, Auditing and Accountability Journal, doi: 10.1108/AAAJ-08-2017-3106.

Dougherty, K.J., Jones, S.M., Lahr, H., Natow, R.S., Pheatt, L. and Reddy, V. (2016), Performance Funding for Higher Education, Johns Hopkins University Press, Baltimore.

Edgar, F. and Geare, A. (2013), “Factors influencing university research performance”, Studies in Higher Education, Vol. 38 No. 5, pp. 774-792.

Ferreira, A. and Otley, D. (2009), “The design and use of performance management systems: an extended framework for analysis”, Management Accounting Research, Vol. 20 No. 4, pp. 263-282.

Fligstein, N. (1987), “The intraorganizational power struggle: rise of finance personnel to top leadership in large corporations, 1919-1979”, American Sociological Review, Vol. 52 No. 1, pp. 44-58.

Fogarty, T.J. (1996), “The imagery and reality of peer review in the US: insights from institutional theory”, Accounting, Organizations and Society, Vol. 21 Nos 2/3, pp. 243-267.

Francesconi, A. and Guarini, E. (2018), “Performance-based funding and internal resource allocation: the case of Italian universities”, in Anessi Pessina, E., Bianchi, C. and Borgonovi, E. (Eds), Outcome-Based Performance Management in the Public Sector, Springer International Publishing, Berlin, pp. 289-306.

Friedland, R. and Alford, R.R. (1991), “Bringing society back in: symbols, practices, and institutional contradiction”, in Powell, W.W. and DiMaggio, P.J. (Eds), The New Institutionalism in Organizational Analysis, University of Chicago Press, Chicago, IL.

Gendron, Y. (2008), “Constituting the academic performer: the spectre of superficiality and stagnation in academia”, European Accounting Review, Vol. 17 No. 1, pp. 97-127.

Gray, R., Guthrie, J. and Parker, L. (2002), “Rites of passage and the self-immolation of academic accounting labour: an essay exploring exclusivity versus mutuality in accounting”, Accounting Forum, Vol. 26 No. 1, pp. 1-30.

Greenwood, R., Díaz, A.M., Li, S.X. and Lorente, J.C. (2010), “The multiplicity of institutional logics and the heterogeneity of organizational responses”, Organization Science, Vol. 21 No. 2, pp. 521-539.

Greenwood, R., Oliver, C., Suddaby, R. and Sahlin, K. (2008), The SAGE Handbook of Organizational Institutionalism, SAGE Publications, London.

Grossi, G., Dobija, D. and Strzelczyk, W. (2019a), “The impact of competing institutional pressures and logics on the use of performance measurement in hybrid universities”, Public Performance and Management Review, doi: 10.1080/15309576.2019.1684328.

Grossi, G., Kallio, K.M., Sargiacomo, M. and Skoog, M. (2019b), “Accounting, performance management systems and accountability changes in knowledge-intensive public organizations. A literature review and research agenda”, Accounting, Auditing and Accountability Journal, doi: 10.1108/AAAJ-02-2019-3869.

Hammarfelt, B. and Rushforth, A.D. (2017), “Indicators as judgement devices: an empirical study of citizen bibliometrics in research evaluation”, Research Evaluation, Vol. 26 No. 3, pp. 169-180.

Hines, R.D. (1988), “Financial accounting: in communicating reality, we construct reality”, Accounting, Organizations and Society, Vol. 13 No. 3, pp. 251-261.

Hoffman, A.J. (2011), “Talking past each other? Cultural framing of skeptical and convinced logics in the climate change debate”, Organization and Environment, Vol. 24 No. 1, pp. 3-33.

Hopwood, A.G. (1987), “The archaeology of accounting systems”, Accounting, Organizations and Society, Vol. 12 No. 3, pp. 207-234.

Horta, H. and Santos, J.M. (2019), “Organisational factors and academic research agendas: an analysis of academics in the social sciences”, Studies in Higher Education, doi: 10.1080/03075079.2019.1612351.

Hyvönen, T., Järvinen, J., Pellinen, J. and Rahko, T. (2009), “Institutional logics, ICT and stability of management accounting”, European Accounting Review, Vol. 18 No. 2, pp. 241-275.

Järvinen, J. (2006), “Institutional pressures for adopting new cost accounting systems in Finnish hospitals: two longitudinal case studies”, Financial Accountability and Management, Vol. 22 No. 1, pp. 21-46.

Johanson, U., Almqvist, R. and Skoog, M. (2019), “A conceptual framework for integrated performance management systems”, Journal of Public Budgeting, Accounting and Financial Management, Vol. 31 No. 3, pp. 309-324, doi: 10.1108/JPBAFM-01-2019-0007.

Kallio, K.M. and Kallio, T.J. (2014), “Management-by-results and performance measurement in universities – implications for work motivation”, Studies in Higher Education, Vol. 39 No. 4, pp. 574-589.

Kallio, K.M., Kallio, T.J. and Grossi, G. (2017), “Performance measurement in universities: ambiguities in the use of quality versus quantity in performance indicators”, Public Money and Management, Vol. 37 No. 4, pp. 293-300.

Kallio, K.M., Kallio, T.J. and Tienari, J. (2016), “Ethos at stake. Performance management and academic work in universities”, Human Relations, Vol. 69 No. 3, pp. 685-709.

Kilfoyle, E. and Richardson, A.J. (2011), “Agency and structure in budgeting: thesis, antithesis and synthesis”, Critical Perspectives on Accounting, Vol. 22 No. 2, pp. 183-199.

Kitchener, M. (2002), “Mobilizing the logic of managerialism in professional fields: the case of academic health centre mergers”, Organization Studies, Vol. 23 No. 3, pp. 391-420.

Kurunmäki, L. (2004), “A hybrid profession – the acquisition of management accounting expertise by medical professionals”, Accounting, Organizations and Society, Vol. 29 Nos 3/4, pp. 327-347.

Leech, N.L., Haug, C.A., Iceman-Sands, D. and Moryarty, J. (2015), “Change in classification level and the effects on research productivity and merit scores for faculty in a school of education”, Studies in Higher Education, Vol. 40 No. 6, pp. 1030-1045.

Lewis, J.M. (2014), “Research productivity and research system attitudes”, Public Money and Management, Vol. 34 No. 6, pp. 417-424.

Lounsbury, M. (2007), “A tale of two cities: competing logics and practice variation in the professionalizing of mutual funds”, Academy of Management Journal, Vol. 50 No. 2, pp. 289-307.

Macdonald, S. and Kam, J. (2007), “Ring a ring o’ roses: quality journals and gamesmanship in management studies”, Journal of Management Studies, Vol. 44 No. 4, pp. 640-655.

Martin, B. and Whitley, R. (2010), “The UK research assessment exercise: a case of regulatory capture?”, in Whitley, R., Gläser, J. and Engwall, L. (Eds), Reconfiguring Knowledge Production: changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation, Oxford University Press, Oxford, pp. 51-80.

Martin, B.R. (2011), “The research excellence framework and the ‘impact agenda’: are we creating a Frankenstein monster?”, Research Evaluation, Vol. 20 No. 3, pp. 247-254.

Martin-Sardesai, A. and Guthrie, J. (2018), “Human capital loss in an academic performance measurement system”, Journal of Intellectual Capital, Vol. 19 No. 1, pp. 53-70, doi: 10.1108/JIC-06-2017-0085.

Melo, I., Sarrico, C.S. and Radnor, Z. (2010), “The influence of performance management systems on key actors in universities. The case of an English university”, Public Management Review, Vol. 12 No. 2, pp. 233-254.

Merton, R.K. (1973), The Sociology of Science: Theoretical and Empirical Investigations, University of Chicago Press, Chicago.

Meyer, J.W. and Rowan, B. (1977), “Institutionalized organizations: formal structure as myth and ceremony”, American Journal of Sociology, Vol. 83 No. 2, pp. 440-463.

Modell, S. (2001), “Performance measurement and institutional processes: a study of managerial responses to public sector reform”, Management Accounting Research, Vol. 12 No. 4, pp. 437-464.

Modell, S. (2003), “Goals versus institutions: the development of performance measurement in the Swedish university sector”, Management Accounting Research, Vol. 14 No. 4, pp. 333-359.

Moll, J. and Hoque, Z. (2011), “Budgeting for legitimacy: the case of an Australian university”, Accounting, Organizations and Society, Vol. 36 No. 2, pp. 86-101.

Morrissey, J. (2013), “Governing the academic subject: Foucault, governmentality and the performing university”, Oxford Review of Education, Vol. 39 No. 6, pp. 797-810.

Narayan, A.K., Northcott, D. and Parker, L. (2017), “Managing the accountability-autonomy of universities”, Financial Accountability and Management, Vol. 33 No. 4, pp. 335-355.

Neumann, R. and Guthrie, J. (2002), “The corporatization of research in Australian higher education”, Critical Perspectives on Accounting, Vol. 13 Nos 5/6, pp. 721-741.

Oliver, C. (1991), “Strategic responses to institutional processes”, The Academy of Management Review, Vol. 16 No. 1, pp. 145-179.

Opstrup, N. (2017), “When and why do university managers use publication incentive payments”, Journal of Higher Education Policy and Management, Vol. 39 No. 5, pp. 524-539.

Organisation for Economic Cooperation and Development (2005), University Research Management: Developing Research in New Institutions, OECD, Paris.

Orton, J.D. and Weick, K.E. (1990), “Loosely coupled systems: a reconceptualization”, Academy of Management Review, Vol. 15 No. 2, pp. 203-223.

Parker, L. (2002), “It’s been a pleasure doing business with you: a strategic analysis and critique of university management”, Critical Perspectives on Accounting, Vol. 13 Nos 5/6, pp. 603-619.

Parker, L. (2011), “University corporatisation: driving redefinition”, Critical Perspectives on Accounting, Vol. 22 No. 4, pp. 434-450.

Parker, L. (2012), “From privatised to hybrid corporatised higher education: a global financial management discourse”, Financial Accountability and Management, Vol. 28 No. 3, pp. 247-268.

Parker, L. and Guthrie, J. (2005), “Welcome to ‘the rough and tumble’: managing accounting research in a corporatised university world”, Accounting, Auditing and Accountability Journal, Vol. 18 No. 1, pp. 5-13.

Parker, M. and Jary, D. (1995), “The McUniversity: organization, management and academic subjectivity”, Organization, Vol. 2 No. 2, pp. 319-338.

Parker, L., Guthrie, J. and Gray, R. (1998), “Accounting and management research: passwords from the gatekeepers”, Accounting, Auditing and Accountability Journal, Vol. 11 No. 4, pp. 371-402.

Pop-Vasileva, A., Baird, K. and Blair, B. (2011), “University corporatisation: the effect on academic work-related attitudes”, Accounting, Auditing and Accountability Journal, Vol. 24 No. 4, pp. 408-439.

Prichard, C. and Willmott, H. (1997), “Just how managed is the McUniversity?”, Organization Studies, Vol. 18 No. 2, pp. 287-316.

Qu, S.Q. and Dumay, J. (2011), “The qualitative research interview”, Qualitative Research in Accounting and Management, Vol. 8 No. 3, pp. 238-264.

Reale, E. and Seeber, M. (2011), “Organisation response to institutional pressures in higher education: the important role of the disciplines”, Higher Education, Vol. 61 No. 1, pp. 1-22.

Rebora, G. and Turri, M. (2013), “The UK and Italian research assessment exercises face to face”, Research Policy, Vol. 42 No. 9, pp. 1657-1666.

Rüegg, W. (2009), A History of the University in Europe, Vol. 1, Cambridge University Press, Cambridge.

Saravanamuthu, K. and Tinker, T. (2002), “The university in the new corporate world”, Critical Perspectives on Accounting, Vol. 13 Nos 5/6, pp. 545-554.

Shils, E. (1983), The Academic Ethic, University of Chicago Press, Chicago.

Stake, R.E. (1995), The Art of Case Study Research, Sage, Thousand Oaks CA.

Strauss, A.L. and Corbin, J.M. (1998), Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 2nd ed., Sage, London.

Sutton, N.C. and Brown, D.A. (2016), “The illusion of no control: management control systems facilitating autonomous motivation in university research”, Accounting and Finance, Vol. 56 No. 2, pp. 577-604.

Taylor, T., Gough, J., Bundrock, V. and Winter, R. (1998), “A bleak outlook: academic staff perceptions of changes in core activities in Australian higher education, 1991-96”, Studies in Higher Education, Vol. 23 No. 3, pp. 255-268.

Ter Bogt, H. and Scapens, R. (2012), “Performance management in universities: effects of the transition to more quantitative measurement systems”, European Accounting Review, Vol. 21 No. 3, pp. 451-497.

Thornton, P.H. and Ocasio, W. (2008), “Institutional logics”, in Greenwood, R., Oliver, C., Suddaby, R. and Sahlin-Andersson, K. (Eds), The Sage Handbook of Organizational Institutionalism, Sage, London, pp. 99-129.

Thorsen, E.J. (1996), “Stress in academe: what bothers professors?”, Higher Education, Vol. 31 No. 4, pp. 471-489.

Tolbert, P.S. and Zucker, L.G. (1983), “Institutional sources of change in the formal structure of organizations: the diffusion of civil service reform, 1880-1935”, Administrative Science Quarterly, Vol. 28 No. 1, pp. 22-39.

Tucker, P. (2016), “Reputation and accountability: incentives and reputation as a harness”, Public Administration Review, Vol. 77 No. 1, pp. 101-102.

Tytherleigh, M.Y., Webb, C., Cooper, C.L. and Ricketts, C. (2005), “Occupational stress in UK higher education institutions: a comparative study of all staff categories”, Higher Education Research and Development, Vol. 24 No. 1, pp. 41-61.

van Helden, J. and Reichard, C. (2019), “Making sense of the users of public sector accounting information and their needs”, Journal of Public Budgeting, Accounting and Financial Management, Vol. 31 No. 4, pp. 478-495, doi: 10.1108/JPBAFM-10-2018-0124.

Wedlin, L. (2008), “University marketization: the process and its limits”, The University in the Market, Vol. 84, pp. 143-153.

Wheeldon, J. and Ahlberg, M.K. (2012), Visualizing Social Science Research: Maps, Methods, and Meaning, Sage, Thousand Oaks CA.

Winter, R., Taylor, T. and Sarros, J. (2000), “Trouble at mill: quality of academic worklife issues within a comprehensive Australian university”, Studies in Higher Education, Vol. 25 No. 3, pp. 279-294.

Yin, R.K. (2014), Case Study Research: Design and Methods, 5th ed., Sage, Thousand Oaks CA.

Acknowledgements

The authors are grateful to the two anonymous reviewers for their thoughtful and constructive feedback on the manuscript, and to Bligh Grant of the University of Technology Sydney for his valuable comments on an earlier version of the article. Any remaining shortcomings are the authors' own responsibility.

Corresponding author

Enrico Guarini can be contacted at: enrico.guarini@unimib.it
