1 Introduction
As businesses adopt Artificial Intelligence (AI), they are faced with new value propositions, but they also have to deal with new challenges, such as reducing the gap between intent and action (Amershi et al., 2019; Enholm et al., 2021; Mishra & Pani, 2020). Artificial intelligence has been perceived as a tool onto which many different functions can be layered, or as a solution to problems that are beyond the ability of traditional applications to solve (Smuha, 2019).
In order to gain a competitive advantage (Raisch & Krakowski, 2021), businesses have implemented and deployed AI solutions to automate their processes, increase efficiency and reduce costs (Frank et al., 2019; Gregory et al., 2020). To achieve these goals, AI governance is essential. According to Butcher and Beridze (2019), AI governance “can be characterized as a variety of tools, solutions, and levers that influence AI development and applications”. Yet, further research is needed to determine how AI governance can be introduced into a company and whether it can assist a company in achieving its objectives.
While AI has the potential to generate business value in terms of performance, productivity and effectiveness, it is not autonomous, as it works in concert with human capabilities (Zhang et al., 2021). Accordingly, organizational capabilities are the result of combining and deploying multiple complementary resources within a firm to achieve competitive advantage (Mikalef & Gupta, 2021). When a firm optimizes its firm-level resources and adopts AI technological innovations, it can enhance the business value of its transformed projects, which in turn impacts performance (Wamba-Taguimdje et al., 2020). At the same time, AI algorithms can be considered performative in the sense that they assist in decision-making, shape organizational processes, or even take autonomous decisions (Faraj et al., 2018; Grønsund & Aanestad, 2020), leading to new organizational capabilities. The use of AI, for instance, could create more substantial customer acquisition, higher customer lifetime value, lower operating costs or reduced credit risk.
The main goal of this work is to analyze AI governance when designing and implementing AI applications in order to achieve organizational goals. In particular, this study examines how AI governance helps top-level managers achieve their goals by introducing robust systems that automate processes and enhance tasks that were traditionally done by intuition or simple data analysis, without negatively impacting employees. The main challenge for adopting AI in organizational operations is that AI technologies vary in scope and complexity, hindering familiarity, especially for non-technical employees (Holmstrom, 2021). Hence, it is crucial to define actions for overcoming barriers and challenges (technical and non-technical) to align AI applications with the organization’s objectives. As an example, employees might resist new technologies due to fears of being replaced by AI. Based on the results, companies will be able to gain a better understanding of how AI technologies are used, identifying focal points and mechanisms of value generation (e.g., augmentation or automation of decision-making or processes) and what challenges AI technologies present to organizations. Hence, we argue that AI value realization is not yet fully understood, and that specific governance practices may help in achieving it. This study, therefore, builds on the following research questions:
To answer the research questions, we collected data through a multi-case study, conducting interviews with multiple respondents within three companies in the energy sector. The interview questions focused on the methodologies companies currently use, the mechanisms and processes used in AI application development, the collection of data, and the consequences of AI applications in decision making (AI risk). During this multi-case study, employees from various departments were interviewed, primarily the business and IT departments, since these two departments play a crucial role when developing an AI application. We also built on secondary data sources, such as reports and internal documents, which helped us explore AI governance dimensions and practices as well as compare, triangulate and verify results. Among the outcomes of the study, AI was found to be most helpful for (1) reducing maintenance costs, (2) increasing flexibility and robustness of the development process, (3) improving confidence in the results and final products, and (4) gaining a competitive edge. Lastly, we propose a model that discusses challenges, recommended actions, and desired outcomes.
The rest of the paper is structured as follows. The subsequent section presents the background of this study and the relevant work in the domain of technology governance, and then specifically focuses on AI governance practices. Section 3 details the methodology that is applied for gathering and analyzing the data. In Sect. 4, we present each case separately, followed by a cross-case analysis. The paper concludes in Sect. 5, where we interpret the findings and discuss the study's limitations.
3 Methodology
AI governance, in both the public and private sectors, is a set of practices that has not yet been consolidated. The inadequate empirical data on the mechanisms and procedures that firms deploy led us to approach this research using an exploratory, comparative case study design, which boosts generalizability while at the same time giving room for extending theory via cross-case analyses (Ramesh et al., 2017). As AI will receive more attention in the coming years because of the numerous challenges it poses, we sought revelatory cases that shed light on the phenomenon for the purpose of gaining a better understanding of it (Lewis et al., 2011). In addition, there is no established framework or theoretical model that is commonly accepted by the industry and describes in detail the overall governance firms should adopt. For carrying out our multiple case studies, we followed established guidelines for case study research as illustrated by Baskarada (2014), Stewart (2012) and Eisenhardt (1989). We also make use of the information value chain schema proposed by Abbasi et al. (2016) to capture the interplay between people, processes and technologies over the information value chain.
Trustworthiness in the evaluation process and the findings themselves was of the utmost importance; thus, we strengthened the research methodology in terms of credibility, dependability, transferability and confirmability (Korstjens & Moser, 2018; Sikolia et al., 2013). To ensure the validity of our findings, we used triangulation across multiple sources and methods through the convergence of information. In terms of transferability, the firms have common traits and operations, but they have some key differences in their business strategy. Dependability was achieved by being consistent in the analysis process and in line with accepted standards. Finally, confirmability was achieved by conducting interviews with different employees in the same firm who hold key positions and belong to the same or different departments. What is more, the data were analyzed and coded independently by three authors, bringing various insights and points of view so that the authors could identify similarities and differences in their results, creating a comprehensible and coherent framework. Hence, in order to develop a theory based on empirical data, we carried out three iterations of data analysis.
3.1 Case Selection
The cases were selected based on common characteristics with respect to industry, use of AI systems, size of development teams and cultural environment. All firms operate in the same industry and have similar capabilities in terms of collecting, analyzing and interpreting data for making business decisions. The most common perspective among the selected firms is that AI must be developed, expanded and adopted in the following years, as it will be crucial for gaining or maintaining a competitive advantage over rivals and new market entrants. Also, the nature of the AI projects undertaken by the firms indicates that they face similar challenges, so they require similar solutions. Comparing the selected companies is fair because (1) they are all located in Norway, (2) they have similar AI teams in terms of size and experience, although the size of the companies varies, and (3) their cultural differences are limited. Therefore, choosing these three firms from the industry allows us to compare the cases for commonalities and key differences and to observe how AI governance has been implemented. Also, a generalized and standardized framework would assist companies and the state in adopting AI and planning ahead for the required resources, infrastructure and processes. Table 1 presents the cases with an overview of their size, revenue and the AI strategy that they follow or plan to follow in the upcoming years.
Table 1
Overview of companies
Company | A | B | C |
Country | Norway | Norway | Norway |
Sector | Energy | Energy | Energy |
Employees | 200 | 530 | 100 |
Turnover 2020 | 180 million dollars | 260 million dollars | 23 million dollars |
AI Vision | Use AI to become one of the top players in the market | Use AI to increase flexibility and business capabilities | Create AI products that are customer oriented and boost customer value |
AI Technologies | Both cloud and local ML pipelines combined with intelligence dashboards – Python, Grafana | ML pipelines combined with intelligence dashboards – Python, Grafana, Power BI | ML pipelines combined with intelligence dashboards – Python, Grafana, Tableau |
3.2 Data Collection
Conducting interviews is an excellent mechanism for gathering information, especially when the researcher does not have an a priori guiding theory or assumptions (Qu & Dumay, 2011). Interviews can also be used to refine a theory or understand a phenomenon (Tallon et al., 2013a, b). As shown in the background section, previous researchers decompose information governance into a range of structural, procedural, and relational practices, which we use as part of our baseline to understand how to build practices that enable AI governance. A case study approach was chosen because it allows for in-depth analysis, with interviews as the main method of collecting data. By exploring these data, new knowledge can be generated, allowing for meaningful insights that explain similar situations (Oates, 2005). The research is qualitative, as it involves the use of qualitative data that can be used to understand and explain the research question (Michael, 1997), drawing on the experiences, beliefs, and attitudes of the key respondents through semi-structured interviews (Wynn & Williams, 2012).
Every case was initiated by contacting the human resources department or those who should be able to handle this type of communication, for instance, managers. A brief introduction was sent via email to establish an understanding of the purpose of this research project, and in some cases quick telephone calls were made in order to provide some extra information. We described ideal interview candidates as employees who (1) hold a key position in the firm, for example, managers and leading developers, (2) have a good understanding of AI technologies and (3) have contributed to the overall development of AI either through their domain knowledge or their software development skills. A total of 15 individuals were interviewed, including both domain and technical experts who have worked in their current positions for at least one year and have relevant experience of at least five years. This means they are experienced and have gained a solid understanding of AI development over time. Furthermore, participants shared how they understand specific issues, according to their own thoughts and in their own words (Pessoa et al., 2019), as members of either the business department or the IT department, as input from both departments is needed in order to understand how AI governance is designed. Table 2 shows information about the interviews, such as the firm, the respondents’ numbers and their current positions.
Table 2
Respondents’ roles and length of interviews

Firm | No. | Role | Years in position | Duration |
A | 1 | Chief AI officer | 3 | 90 min |
2 | AI Software Developer | 3 | 55 min |
3 | Machine Learning Engineer | 3 | 45 min |
4 | AI Software Developer | 3 | 43 min |
5 | Project Manager | 4 | 49 min |
6 | Machine Learning Engineer | 3 | 35 min |
7 | Machine Learning Engineer | 3 | 45 min |
B | 1 | Data Analyst | 9 | 49 min |
2 | Head of AI department | 1 | 25 min |
3 | Head of Data Analytics department | 4.5 | 59 min |
4 | Digitalization Engineer | 10 | 55 min |
5 | Head of Digitalization department | 2 | 43 min |
C | 1 | Data Scientist | 2 | 65 min |
2 | Head of Analytics department | 3 | 60 min |
3 | Operation Manager | 3 | 60 min |
The interviews were built on open-ended questions that led to interesting conversations, where the interviewers had the opportunity to adapt their questions based on the answers, and the interviewees could even ask questions that were not part of the interview guidelines. Before each interview, we explained to each interviewee individually what we hoped to accomplish through the interviews and what we expected to be the outcome of our research, while at the same time we encouraged them to add anything they believed was relevant or that we had missed. The questions were split into three categories:
1. The business value and the organizational context, where we examine how AI use grew over time.
2. Data management, where the interviewees explained how their firm deals with data services and governance practices.
3. The control and technical aspects, focused on the control processes and mechanisms that ensure AI systems act upon the set goals.
Each interview lasted approximately 55 min on average, with a range between 25 and 90 min. Interviews were conducted via Zoom, which was used to record each session, and the audio was then transcribed using Otter AI. The audio files were transcribed verbatim so that the text remained identical to the audio, meaning that all raw data are transparent and the findings and results can be reproduced and tracked down rigorously. As part of the process, we manually checked the full text against the audio, since this was the only way to guarantee that the transcripts matched the recordings.
In addition, we used related data publicly available on the company’s site (e.g., annual reports, vision and firm structure) because we consider them to have merit in our research. These documents served both as validations for our findings as well as information that we did not have prior to the interviews, assisting us to obtain a better understanding of the vision, objectives and regulations of each company.
3.3 Data Analysis and Theory Building
A narrative analysis was followed for analyzing the content from the interviews, as the stories and experiences shared by employees are used to answer the research questions.
As a first step, we went through the interview transcripts and commented on our initial thoughts by writing memos. Although memos are usually used at the beginning of a text analysis, we continued to use them for updating our thoughts and interpretations or even adding new ideas. The generated transcripts were imported into the software NVivo, where open and axial coding were applied and categories were formed based on the notation process (coding). NVivo has an add-on module called “NVivo Collaboration Cloud”, which allows teams to collaborate by storing projects securely in the cloud. Two of the authors had an “administrator” role while the rest had a “workspace owner” role, making it convenient to store, upload and update our project files. Each author was responsible for uploading their content to the cloud, and the administrators reviewed the changes, but not the content, in case something went entirely wrong, for example, the unintentional deletion of a file. If the administrators were satisfied, a merge was performed and everybody could work on the updated version of the project. Backup files were part of the process in case we lost our work or needed to go back to a previous version; at the end of each week, a backup process was run and the files were stored independently of NVivo.
In the first iteration, we tried to identify all the concepts related to AI governance and the practices adopted by the firms. Initially, there were 200 descriptive codes, such as “working with domain experts” and “domain experts lead projects”, but after an iteration the number was reduced to 95, since many codes were merged into a more appropriate coding name, such as “domain experts take the lead of a project to ensure the quality of the final product”, where the combined codes became more abstract.
The next logical step was to apply axial coding, where the main nodes that were coded are procedural, relational, structural, AI development and AI challenges. In addition, comments and observations from different transcripts were combined to identify commonalities and patterns in the processes used when creating and deploying AI systems that help firms minimize AI risks. Grouping the comments and observations, known as axial coding (Charmaz, 2014), allowed for better interpretations, since employees could refer to the same concept with similar terminology, depending on their technical skills, knowledge, experience and position in the firm. In order to obtain a high level of confidence, the researchers validated the findings by examining reports, public information and presentations related to this research, focusing on the AI aspects (Table 3).
Table 3
Nodes and possible items under each node
Procedural | Practices associated with data migration, system messages, documentation and processes for expansion, dynamic model selection, pipeline evaluation, human and AI interaction, data quality sources |
Relational | Practices that deal with employees and communicating goals, domain experts, AI education for employees |
Structural | Practices associated with IT, optimization and automation, AI automation, ML pipelines, data access |
AI culture | Understanding of AI capabilities, AI-phobia, Trust issues against AI |
AI architecture | Development best practices, cloud infrastructure, unified tools |
Legal regulations | GDPR, legal constraints of AI use |
Domain challenges | Data challenges, domain knowledge, external challenges |
Adoption problems | Fear of losing position |
Competitive Advantage | Developing a unique AI strategy, keeping AI knowledge in house |
Flexibility | Cloud services boost flexibility |
Cost maintenance | Minimize costs from various operations |
Scaling up | AI assists in scaling up without needing more resources |
Superior AI results | Internal AI teams can give high value through solutions that are targeted at a specific problem and not generalized |
Once all cases had been adequately analyzed and the researchers had reached a consensus, a cross-case analysis was performed. In the course of the discussion, we identified a number of patterns that were either similar or different across cases and explored the reasons behind them through open discussion. We tried to establish consistency and cohesion, arguing about which interpretation seemed most reasonable for our goal, how AI governance is created among these cases, and which practices companies should adopt or introduce.
5 Discussion
In this study, we set out to explore the underlying activities that comprise an organization’s AI governance. Specifically, we built on the prior distinction between structural, relational, and procedural dimensions of governance in order to understand how organizations are planning around their AI deployments. Through a multi-case study of three organizations that have been using AI for several years, we conducted a series of interviews with key respondents and identified a set of activities that were relevant under each of the three dimensions, as well as the challenges the organizations faced during deployments of AI and how they managed to overcome them. Our analysis essentially points out the various obstacles that AI governance is oriented towards overcoming, and the mechanisms employed to do so.
Specifically, we find that the obstacles identified during the process of deploying AI are observable at different phases and concern different job roles. When it comes to difficult management responsibilities, AI solutions can provide a variety of responses and probabilities for each of the alternatives. However, AI lacks the ability to make decisions in specific contexts. To make the ultimate decision, a business owner or manager must employ intuition to reconcile the choices provided by AI (Kar & Kushwaha, 2021). In addition, the obstacles span various levels of analysis, from personal ones, such as fear of AI and the reluctance of employees to adopt it, to organizational-level ones, such as organizational directives on how to comply with laws and regulations. What is more, the study reveals not only that AI governance is a multi-faceted issue for organizations but also that it spans multiple levels, therefore requiring a structured approach when it is deployed. In addition, different concerns emerge at different phases of AI projects, so AI governance also encapsulates a temporal angle in its formation and deployment.
The significance of governing AI can be critical in attaining digital innovation. The firms we studied were leveraging AI to help them reinvent their operations. Instead of taking an information collection approach, these firms followed an information analysis approach. Information analysis refers to the opportunity of developing unbiased approaches for evidence-based data analysis (Trocin et al., 2021a), where AI can foster digital process and service innovation, as the companies in this study did. AI also has the potential to foster the digital innovation process by enabling new and evidence-based approaches for data collection (Mariani & Nambisan, 2021). First, it enables organizations to modify particular parameters to appeal to a wider audience when content is released online, and second, it allows them to gather online behavioral data and store it for a set period of time (e.g., one year) in accordance with GDPR regulations (Trocin et al., 2021a). It is worth mentioning that emotional intelligence is not part of these systems, although understanding how people deal with emotional challenges is crucial for AI systems to emulate human reasoning (Luong et al., 2021). Finally, the COVID-19 pandemic has introduced new challenges and opportunities for digital transformation and innovation. For example, the United Kingdom intends to employ health information technology and execute proposals for a national learning health and care system as a result of a serious public health shock. Hence, each UK country's digital health and care strategy should be re-evaluated in light of the pandemic's lessons (Sheikh et al., 2021).
5.1 Research Implications
This study contributes to the IS literature. Despite the considerable debate in the scientific community about what is considered AI and how companies should incorporate AI in their everyday operations, we tried to understand the processes firms use to govern AI. Not all companies have managed to build AI solutions that have had significant organizational effects and resulted in added business value. In this article, it is argued that although it is important to adopt AI, it is equally vital to create the necessary processes and mechanisms for developing and aligning AI applications with the requirements of the business environment. One of the main challenges we identify is that AI governance requires continuous adaptation and modification as new data emerge or conditions change, for instance in how employees perceive AI. Thus, there is a form of ephemerality, which places an increased focus on establishing processes, mechanisms, and structures to ensure that AI is functioning as required and aligns well with the goals of the organization.
Furthermore, there is a multitude of angles from which a firm can approach AI governance; for instance, the companies in this study tried to create ML pipelines and interactive dashboards, but not all of them had a real focus on the explainability of the results, since they are still in the early stages and focus on the parts they believe are more urgent. In the industry, a recent article by Microsoft focuses primarily on the technical aspects of workflow implementation, outlining the key phases in the lifecycle of machine learning applications (Amershi et al., 2019). This research, in contrast, concentrates on the development challenges and the practical solutions a firm could follow to build AI applications through solid and effective organizational practices. In this sense, AI governance in this article is not seen as a process but as a set of important aspects that need to be considered when designing and deploying practices and mechanisms, in order to ensure that the main challenges are overcome successfully and that AI applications operate as planned. Our proposed model suggests that, despite inhibitors, barriers and the different ways of approaching AI governance, it offers positive outcomes if best practices are followed, and this study identified specific procedural, structural and relational components that are necessary for achieving this.
Our exploratory work opens up a discussion about what AI governance comprises and how it can be dimensionalized. Furthermore, it explores the link between the challenges such governance practices help overcome and the actors and practices they involve. This stream of research is particularly important for the value generation of AI-based applications, as it paints a more detailed picture of how the relevant resources are leveraged in the quest for business value (Mikalef et al., 2019). In addition, the work sheds some light on the process view of AI deployments by opening up a dialogue about the different phases of AI deployments and the unique challenges faced within each of them.
5.2 Practical Implications
Based on the findings, a firm needs to incorporate new procedures when adopting AI in order to maintain an advantage over the competition and boost efficiency. A unified system for building AI pipelines is required, consistent with the tools that developers use; this makes the overall system more robust, as it becomes easier to maintain and improve its different components. In addition, managers should create procedures that employees are aware of and follow, and give clear guidelines; otherwise, time and resources might be wasted that could instead be invested in other projects that would add more business value.
Firms should use AI for automating repetitive tasks, which is appreciated by employees, since they do not want to do monotonous work; at the same time, managers should have extended conversations with employees of other departments, reassuring them that AI will not replace them (AI education). This can be crucial for the company’s internal stability, as people might otherwise lose trust in the leadership, leave the company and take their expertise with them, or resist using new technologies and try to undermine the value of AI.
Lastly, firms can use dashboards as an effective way to enable communication between humans and machines. Dashboards are a great information management tool for tracking KPIs, metrics, and other essential data points relevant to a business. In this way, the black-box nature of models and AI in general becomes less problematic, because the use of data visualizations simplifies complex data sets and provides end-users with useful information that can affect business performance. In other words, humans will be able to evaluate results and detect any outliers or anomalies in processed data. This in turn facilitates greater transparency and a more direct way of revising the models used to analyze data.
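To illustrate the kind of outlier flagging that such dashboards surface, consider the following minimal sketch. It is a hypothetical example: the data, the z-score threshold and the function name are our own illustration and are not taken from the studied firms' systems.

```python
# Hypothetical sketch of dashboard-style outlier flagging.
# Data, threshold and function name are illustrative only.
from statistics import mean, stdev

def flag_outliers(readings, z_threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) / sigma > z_threshold]

# Hypothetical hourly load measurements containing one anomalous spike
load = [50.1, 49.8, 50.3, 50.0, 49.9, 95.0, 50.2, 49.7]
print(flag_outliers(load, z_threshold=2.0))  # → [(5, 95.0)]
```

A dashboard such as those the firms built in Grafana or Power BI would render these flags visually, for example as highlighted points on a time series, letting domain experts judge whether a flagged value is a data-quality issue or a genuine event before acting on the model's output.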
5.3 Limitations and Future Research
In the current work, we investigated how to govern AI, which practices should be adopted and how to minimize AI risks. However, certain limitations characterize this research. First, the data were collected through interviews with companies that do not require extensive use of sensitive data; thus, there might be bias in our data, or they might provide an incomplete picture of the full range of challenges around the relevant practices. Second, while we conducted several interviews with key employees within the organizations, our data collection was based on a snapshot in time and may not accurately reflect the complete breadth of practices. Lastly, all cases are from the same sector; hence, generalizability could be an issue that should be taken into consideration.
For future research, it would be interesting to gather more empirical data through interviews with firms from different sectors, and to theorize the notion of AI governance from a positivist perspective, which could then be tested with empirical data on its antecedents and their effects. It would also be beneficial for the field to know which resources firms deploy most in order to achieve their organizational goals, how they govern these resources to boost their performance, and how AI governance practices impact specific types of resources.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.