
Open Access 24.05.2023 | Catchword

Algorithmic Accountability

Authors: David Horneber, Sven Laumer

Published in: Business & Information Systems Engineering | Issue 6/2023

Notes
Accepted after two revisions by Susanne Strahringer.

1 Introduction

Advancements in technology have led to the widespread adoption of machine learning (ML) algorithms in almost all areas of society (e.g., shaping customer experiences through recommendations and supporting organizational activities by automating tasks). Despite their potential benefits (e.g., personalizing experiences, enhancing productivity, improving decision-making), various examples have shown that systems based on such algorithms can lead to negative consequences. For instance, use cases in healthcare (Obermeyer et al. 2019) or finance (Blattner and Nelson 2021) have demonstrated how ML systems can undermine fairness and discriminate against minorities by generating statistical biases or reproducing societal ones (Mitchell et al. 2021). Other examples in marketing have illustrated how individuals’ privacy can be compromised by inferring intimate knowledge about individuals and using it for targeted advertising (Hill 2012; Mattu and Hill 2022). With the development from simple rule-based to complex probabilistic algorithms, the potential for harmful effects becomes even more pressing, as the operation of these systems is increasingly opaque and automated. While it has already been widely investigated how such issues can be addressed (e.g., Liu et al. 2022; Mehrabi et al. 2021), many organizations still fail to mitigate the often unintended, negative outcomes of the ML systems they develop, provide, and use. Thus, academics (e.g., Novelli et al. 2023; Wieringa 2020) and policymakers (e.g., Mökander et al. 2022; Smuha 2021) have put increasing emphasis on the topic of algorithmic accountability to ensure the ethical development and use of such systems (Donia 2022).
Although algorithmic accountability is mentioned as an important principle in almost all guidelines and regulatory documents for the ethical development of ML systems, there is still considerable uncertainty about what constitutes it and how it can be accomplished (Jobin et al. 2019). This is due to the fact that algorithmic accountability is an ambiguous concept that deals with many different questions (Bovens 2010; Poechhacker and Kacianka 2021; Wieringa 2020). Many of these questions arise from the existence of different accountability types. To shed light on these accountability types and to bring together the different questions around algorithmic accountability, our article aims to introduce the topic to the BISE community. Future work on algorithmic accountability promises to build the foundation for organizations and developers to effectively implement and manage accountability measures and to inform policymakers on how to appropriately regulate ML systems. Furthermore, it could provide insights into how individuals deal with accountability-related concerns and how these affect their interaction with ML systems.

2 Conceptual Foundations

Algorithmic accountability focuses on the question of who takes the obligation to justify the design, use, and outcomes of machine learning systems and who assumes responsibility for the negative consequences of these systems (Bovens 2007; Wieringa 2020). By assigning responsibility and demanding justifications, one can oversee and restrain the behavior of others. Hence, demanding algorithmic accountability can be understood as a governance function that either proactively avoids the negative impacts of the provision and use of ML systems or reactively sanctions accountable actors if the systems have caused adverse effects (Novelli et al. 2023). While previous research has extensively studied the requirements that are necessary to achieve the ethical development and use of ML systems (e.g., fairness (Feuerriegel et al. 2020), interpretability (Lipton 2018), privacy (Liu et al. 2022), robustness (Tocchetti et al. 2022)), algorithmic accountability is focused on the demands that shape these requirements and the governance measures that responsible actors can take to fulfill them.
When answering the question of who has to take responsibility for the actions and outcomes of ML systems, the term algorithmic accountability can be misleading. ML systems as technical artifacts cannot be held accountable (Bryson et al. 2017; Martin 2022). However, the organizations and users that develop or use these systems can (Martin 2022). As Martin (2019) argued, ML systems are not value-neutral; instead, they integrate the norms and intentions of those developing, operating, and using them. While organizations can design ML systems in a way that gives full control to the users and thus shifts the responsibility to them, firms typically remain in control of the systems they provide. Thus, the normative obligation to be accountable for these systems resides with them and with their employees who develop and operate ML systems. Concerning the motivation of organizations to act accountably, the accountability concept implies that stakeholders (e.g., regulators) who demand accountability can impose consequences (e.g., economic damage) on accountable actors if they fail to provide justifications and do not assume responsibility (Bovens 2007; Wieringa 2020). Therefore, if stakeholders can successfully enforce accountability demands (e.g., through regulations), firms should have an incentive to avoid the negative effects of their systems and to adequately justify their design and use.
Regarding the question of which stakeholders demand accountability, two types of external stakeholders can typically be distinguished (Bovens 2007; Wieringa 2020). First, there are users and other individuals that are affected by ML systems (e.g., applicants). These can exercise a form of social accountability (e.g., public protest), individually or collectively, to express their concerns and demand justifications and consequences. Second, there are regulators. These are administrative or legal institutions that can implement institutional control mechanisms (e.g., legislation) to require explanations and enforce sanctions.
These accountability demands and mechanisms typically set requirements that firms have to fulfill to avoid negative consequences. Organizational and technical measures can be used to accomplish this. From an organizational perspective, companies can implement different governance instruments (e.g., establishing and enforcing development guidelines) to hold the employees who deal with ML systems accountable (Donia 2022; Schneider et al. 2022). As such, algorithmic accountability can be part of a corporate digital responsibility strategy that is focused on taking responsibility for the ML systems that are developed or used in an organization (Lobschat et al. 2021; Mueller 2022). In addition, developers need to consider certain technical requirements and properties of ML systems that must be managed and addressed technically to work toward algorithmic accountability by design (i.e., the fulfillment of all requirements that are set by the different accountability demands).
Figure 1 illustrates the different stakeholders as well as the external accountability demands (institutional and social accountability) and internal accountability measures (organizational and technical accountability) that exist around an ML system.
In Fig. 1, we distinguish between provider and operator organizations, since the firm that develops and deploys an ML system is not necessarily the same organization that runs and employs it. Similarly, there are typically different types of practitioners within an organization (e.g., developers, product owners, quality managers, specialists who use ML systems) who deal with ML systems and are in control of their design and use. Whenever this is the case, companies and practitioners hold distributed accountabilities among each other for the consequences of the systems they develop, operate, and employ. In the following sections, we discuss the different accountability demands and measures in detail.

2.1 Social Accountability

Machine learning systems pose several risks to users and other individuals who are affected by these systems. Enabling this group to effectively demand accountability whenever they have used or been influenced by an algorithmic decision is vital to ensuring that the power of these systems can be controlled. Furthermore, these stakeholders need to be enabled to proactively assess the actions and impact of ML systems.
Research has shown that individuals’ perception that providers of ML systems have implemented accountability measures (perceived system accountability) can positively influence users’ trust and satisfaction (Shin 2021; Shin and Park 2019). In addition, it can strengthen the willingness of users to accept and follow algorithmic advice (Adam 2022). The perception of system accountability is thereby influenced by the interpretability of the systems (e.g., providing evidence that the system is regularly reviewed) (Adam 2022; Shin 2021). Corporations consequently risk losing trust and stimulating reputational concerns if they fail to provide explainable and transparent systems (Buhmann et al. 2020).
Users have different possibilities to claim and enforce accountability depending on the context in which they use ML systems: as consumers or as professionals. In the professional context, users can perform individual (e.g., noncooperation) and collective (e.g., striking) user resistance behaviors to create awareness of negative effects and to push companies to adapt their business practices (Kellogg et al. 2020; Möhlmann et al. 2021). Consumers, on the other hand, can make use of different demand-based (e.g., discontinuing ML services), rating-based (e.g., negative ratings), or discourse-based (e.g., negative word-of-mouth) mechanisms to denounce misconduct and impose consequences on organizations (Grégoire and Fisher 2008; Labrecque et al. 2013). In both contexts, consumers and professionals can also express their concerns to policymakers to seek support and regulation from legal or administrative institutions. Journalists can support such efforts by providing algorithmic accountability reports in public outlets and by creating and framing a public discourse about algorithmic issues (Diakopoulos 2015; Kellogg et al. 2020).
While initial results have shown that individuals can effectively trigger regulatory efforts or discipline organizations through such methods (e.g., Benson et al. 2020), several challenges can prevent consumers and professionals from enforcing accountability. For example, many collective approaches involve mobilizing a large number of users to accomplish an effect. This in turn requires that a considerable number of individuals are affected and that they use the same set of standards to evaluate the conduct of the accountable actor. In addition, firms can take countermeasures (e.g., framing a public discourse) to mitigate the negative reactions of users. In the subsequent section, we explain the role of legal and administrative institutions in safeguarding individuals and supporting them in enforcing their accountability demands.

2.2 Institutional Accountability

Calls for the regulation and governance of ML systems by legal institutions are becoming increasingly intense (Donia 2022; Mökander et al. 2022; Smuha 2021). While existing laws such as the GDPR regulate ML practices to some extent, it can be expected that more extensive legislation will be implemented in the future (Koniakou 2022). Hence, organizations need to be prepared for stronger liability requirements. The term liability is often mentioned alongside accountability and deals with justification when confronted with legal institutions and possible juridical sanctions (Slota et al. 2021). Legal accountability controls organizations’ ML activities and ensures that interventions can be enforced (Stahl 2021). For organizations, two goals are relevant when it comes to legal accountability. First, they need to be prepared to provide information about their ML systems whenever legal institutions request it. Second, legal compliance needs to be accomplished from a management as well as from a technical perspective. Establishing appropriate governance measures and considering legal requirements in the design of ML systems can be a means to accomplish these goals.
Besides legal accountability, firms can expect that the accountability demands raised by administrative institutions will also play a more important role in the future (Matus and Veale 2022). Private or public regulators such as auditors or certification bodies can implement different mechanisms to hold organizations accountable. The most frequently discussed mechanism is the algorithm audit (e.g., Brown et al. 2021; Raji et al. 2022). Algorithm audits can focus either on technical aspects such as reliability and robustness or on problematic behaviors and the societal consequences of ML systems (Raji et al. 2020). For auditing societal effects, some type of impact assessment is often proposed to assess the potential negative outcomes of ML systems (Metcalf et al. 2021). Similar to algorithm audits, certifications can be used to assess the ML practices of an organization and to ensure that they meet certain requirements (Matus and Veale 2022). Another frequently suggested accountability mechanism that can be implemented via independent external stakeholders is the collection and reporting of critical incidents (Brundage et al. 2020). The anonymized disclosure of unexpected algorithmic behaviors can help researchers and practitioners identify shareable lessons to avoid such behaviors in the future. It can also support auditors in guiding their inspections toward specific, repeatedly occurring problems (Raji et al. 2022).
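As a minimal illustration of what such an incident record could contain, the following Python sketch defines a hypothetical data structure for anonymized incident reports; the field names and example values are illustrative assumptions rather than a standardized reporting schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AlgorithmicIncident:
    """Hypothetical, anonymized record of an unexpected ML system behavior."""
    incident_id: str       # internal identifier, no personal data
    reported_on: date      # date the incident was reported
    system_domain: str     # e.g., "credit scoring", "recommendation"
    harm_category: str     # e.g., "bias", "privacy", "robustness"
    description: str       # free-text summary of the observed behavior
    affected_groups: List[str] = field(default_factory=list)
    mitigation: str = ""   # corrective action taken / lessons learned

# Example report that could be shared with auditors or an incident database
incident = AlgorithmicIncident(
    incident_id="INC-2023-017",
    reported_on=date(2023, 3, 1),
    system_domain="credit scoring",
    harm_category="bias",
    description="Approval rates differed markedly between demographic groups.",
    affected_groups=["applicants under 25"],
    mitigation="Retrained with rebalanced data; added fairness check to testing.",
)
print(incident.harm_category, "->", incident.mitigation)
```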
While in the case of legal accountability organizations face solely negative outcomes if they fail to provide the right information and justifications, successfully fulfilling administrative accountability requirements could enable provider and operator organizations to achieve legitimacy and trust in their ML systems (Adam 2022; Kordzadeh and Ghasemaghaei 2022; Martin and Waldman 2022). Therefore, if such mechanisms are implemented correctly, companies can have an incentive to voluntarily undergo third-party inspections and adhere to the accountability requirements set by such stakeholders. Next, we elaborate on the measures that organizations can take to proactively counter accountability claims.

2.3 Organizational Accountability

Several companies have already taken steps to implement a form of organizational accountability. Organizational accountability can be understood as the internal governance measures of an organization to align its ML practices with organizational values and external requirements (Mäntymäki et al. 2022). As part of this, companies can hold the employees who deal with ML systems accountable in order to achieve internal control (Donia 2022). Implementing accountability among practitioners developing, operating, and managing ML systems presents organizations with the challenge of monitoring responsibilities across departments and roles (Feuerriegel et al. 2022). While the purpose of accountability demands by users and regulators is to proactively prevent as well as reactively sanction misbehavior, the goal of organizational accountability is mainly proactive. Firms can implement internal accountability measures to promote awareness of potential harms within their organization and to continuously monitor and improve their ML systems in line with internal and external policies.
Currently, most organizations rely on a principle-based approach to govern their ML systems (Jobin et al. 2019; Mittelstadt 2019). This approach has often been criticized as insufficient, due to the issues that arise when trying to translate ethical principles and guidelines into practice (Mittelstadt 2019). Nevertheless, defining and implementing internal policies can be considered a strategic first step toward successful governance of ML systems (Schneider et al. 2022; Seppälä et al. 2021). Building upon these guidelines, companies can specify and apply standardized processes and rules to develop, test, deploy, and operate their ML systems (e.g., coding, documentation, testing, and architectural guidelines) (Schneider et al. 2022). These can then be used in internal audits to assess whether algorithmic practices adhere to these rules (Raji et al. 2020). Furthermore, internal impact assessments can be conducted to identify risks and manage them proactively. Similar to incident systems set up by external stakeholders, firms can implement such a system internally to ensure that problems are resolved promptly and that lessons can be learned (Schneider et al. 2022). Establishing such accountability measures allows organizations to be aware of and mitigate the potential negative consequences of their ML systems. Moreover, it enables them to proactively create accountable ML systems and achieve legal compliance. Whenever corporations are faced with external accountability demands, the information gathered through their internal governance measures supports them in fulfilling these demands.
In addition to this regulation-oriented approach, companies can also attempt to empower their employees through training or communication to address their responsibilities when working with ML systems (Ryan et al. 2022; Schneider et al. 2022). Making employees aware of their duty to act accountably is an essential first step to ensuring that ethical guidelines and procedures are translated into practice. Subsequently, we describe the technical requirements that developers face in the course of accountability demands.

2.4 Technical Accountability

An important technical requirement that developers need to consider to work toward accountability by design is the interpretability of ML systems. The need for interpretability arises from the obligation to explain and justify the design and outcomes of ML systems (Wieringa 2020). What information needs to be interpretable always depends on the stakeholder and context of the accountability demand (Berente et al. 2021). In general, developers and providers should reflect on the following interpretability objectives to make their ML systems accountable: design transparency (Kroll 2021; Loi et al. 2021) and auditability (Felzmann et al. 2020; Kroll 2021). Design transparency requires developers and providers to disclose information about the design goals, input data, and mechanisms of construction and operation of their ML systems (Kroll 2021; Loi et al. 2021). Auditability enables third parties to probe, understand, and review ML systems before deployment and during operation (Felzmann et al. 2020; Kroll 2021). Achieving these two interpretability goals should enable different stakeholders to assess and understand the actions and intentions of the systems.
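One way to make design transparency and auditability tangible, sketched here under the assumption that no particular documentation standard is mandated, is to maintain a machine-readable documentation record in the spirit of model cards; all field names and values below are illustrative.

```python
import json

# Illustrative, machine-readable documentation record: it discloses design
# goals, input data, and construction choices so that auditors and affected
# stakeholders can probe and review the system.
model_documentation = {
    "system_name": "loan_default_classifier",  # hypothetical system
    "design_goal": "Rank applications by estimated default risk.",
    "intended_use": "Decision support for credit officers; no automated rejections.",
    "training_data": {
        "source": "internal loan book 2015-2022",
        "known_limitations": ["underrepresents first-time applicants"],
    },
    "model_type": "gradient-boosted trees",
    "evaluation": {"metric": "AUC", "value": 0.81, "test_period": "2022"},
    "human_oversight": "Credit officer reviews recommendations above the risk threshold.",
    "last_audit": "2023-01-15",
}

# Persisting the record alongside the deployed model keeps it available for
# internal audits and external accountability requests.
with open("model_documentation.json", "w") as f:
    json.dump(model_documentation, f, indent=2)
```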
Another technology-related factor that must be considered by designers and developers is the ability to monitor and control ML systems. While it may not always be necessary or reasonable for humans to oversee the actions and outcomes of a system, firms and practitioners must be aware that there are high-risk applications (e.g., medical treatment recommendations) that pose controllability requirements that must be met (Methnani et al. 2021). Accordingly, designers and developers need to carefully assess the impact and risk of their systems to decide on the right level of human oversight. Keeping humans in the loop is often seen as insurance for organizations against negative consequences. However, companies need to ensure that the individuals involved are informed about their role and are supported by the systems (e.g., through interpretability mechanisms) so that they can act in accordance with the relevant accountability standards.
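A minimal sketch of such an oversight mechanism, assuming the system produces a calibrated risk score and that the escalation threshold is application-specific, could route high-impact cases to a human reviewer:

```python
from typing import Callable

REVIEW_THRESHOLD = 0.7  # illustrative threshold; would be set per application risk level

def decide_with_oversight(risk_score: float,
                          automated_decision: str,
                          human_review: Callable[[float, str], str]) -> str:
    """Return the automated decision for low-risk cases and escalate
    high-risk cases to a human reviewer (human-in-the-loop)."""
    if risk_score >= REVIEW_THRESHOLD:
        # High-impact case: an informed practitioner sees the score and the
        # system's recommendation and takes the final, accountable decision.
        return human_review(risk_score, automated_decision)
    return automated_decision

# Example with a stub reviewer; in practice the reviewer would be supported
# by interpretability mechanisms as discussed above.
decision = decide_with_oversight(0.85, "reject",
                                 human_review=lambda score, rec: "manual review")
print(decision)  # -> manual review
```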
Finally, firms and developers should aim to technically address the well-known sources of negative outcomes of ML systems, such as a lack of robustness and safety, bias, and privacy violations. For each of these issues, researchers and practitioners have already started to propose technical measures to test algorithmic systems and protect them against these problems (e.g., Liu et al. 2022; Mehrabi et al. 2021; Tocchetti et al. 2022). Developers should make use of these measures to mitigate the unintended detrimental effects of ML systems and to avoid unnecessary accountability demands.
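As one concrete, deliberately simplified example of such a technical measure, a bias test could compare positive-outcome rates across groups (a demographic parity check); the data, group labels, and tolerance threshold below are assumptions for illustration only.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates
    across groups (0 = parity, larger values = more disparity)."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative check that could run as part of testing or an internal audit
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
assert gap <= 0.5, f"Demographic parity gap {gap:.2f} exceeds the tolerated threshold"
print(f"Demographic parity difference: {gap:.2f}")
```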

3 Implications for BISE Research and Future Work

Despite its increasing relevance, the topic of algorithmic accountability has not received much attention in the BISE community so far. The different accountability demands and measures motivate several empirical and design-oriented research opportunities. Table 1 gives an overview of potential areas for future research organized around the different accountability types introduced in the previous sections.
Table 1 Overview of future research opportunities

Social accountability (sample research questions):
- What individual, organizational, and environmental characteristics shape users' perceptions of corporate/personal accountability when interacting with ML systems?
- How do perceptions of corporate/personal accountability influence individual behavior when interacting with ML systems?
- What procedures can individuals and society use to hold providers of ML systems accountable?

Institutional accountability (sample research questions):
- What policies can effectively prevent the negative societal ramifications of ML systems?
- How can algorithmic incidents be publicly tracked and reported?
- How can policymakers incentivize the implementation of accountability measures?

Organizational accountability (sample research questions):
- What governance measures can be effectively used by organizations to identify and mitigate the negative consequences of their ML systems?
- How does the perceived accountability of practitioners dealing with algorithmic decisions influence their business activities?
- What are the drivers of and challenges for the adoption of accountability measures?

Technical accountability (sample research questions):
- What are the design principles for developing accountable ML systems?
- How can interpretable ML systems be developed and provided that allow accountability demands to be addressed both proactively and reactively?
- How does the design of ML systems influence corporate/personal accountability perceptions?
As explained in Sect. 2.1, social accountability is about users and individuals affected by ML systems holding the providers or operators of the systems they use accountable. Thus, research in this area should focus on understanding how different perceptions of accountability arise and how they influence the interaction with and usage of ML systems. Thereby, different perceptions of accountability can be distinguished: individuals can either perceive that the provider or operator organization is willing or unwilling to be accountable for the system it offers, or they can perceive that they themselves are accountable for certain outcomes or actions when interacting with the system. In both cases, scholars can investigate which individual, organizational, and environmental characteristics shape such perceptions. Furthermore, it can be examined how these perceptions influence diverse dependent variables such as algorithmic acceptance, intention to delegate tasks to an ML system, or organizational trust in the provider and operator. Additionally, further work is required that focuses on the strategies that consumers and professionals use to demand accountability. An important issue is thereby the effectiveness of different strategies and the challenges that consumers and professionals face when demanding accountability. Results in this domain can help to better understand how accountability claims are raised and how they are determined by distinct individual and environmental factors. Moreover, such work sheds light on the effects of algorithmic accountability on human behavior.
Current legislation on ML systems is relatively weak (Mittelstadt 2019). BISE researchers can support regulators in specifying algorithmic policies in several ways. First, scholars can theorize about the long-term consequences of ML systems to advise legal and administrative authorities about upcoming implications. Additionally, it can be systematically examined which values individuals hold that should guide algorithmic accountability policies (Mason 2021). Furthermore, empirical work can be conducted to analyze the effectiveness of existing and future regulations. Focusing on the impact of algorithmic regulations on organizations, future work can investigate how firms adopt certain legal and administrative requirements and how this affects the development and provision of ML systems. How practitioners act on regulations and inspections, and whether they see them as a threat or an opportunity, are additional questions that can be studied. While policy-related research may not be the core of the BISE community, analyzing such issues through a sociotechnical lens can help to inform regulators and practitioners dealing with algorithmic accountability.
Managing algorithmic accountability within an organization presents the challenge of translating abstract ethical principles into actionable requirements and practical governance measures (Mäntymäki et al. 2022; Mittelstadt 2019). Scholars can support practitioners in tackling this challenge by examining how to successfully establish an ethics-aware culture within a company. Moreover, it can be studied which governance instruments enhance practitioners' perceptions of being accountable for their work and how this translates into technical decisions and business activities. By taking away employees' skepticism and equipping them with the necessary resources and capabilities to develop ethically aligned ML systems, many knowledge- and awareness-related barriers to more accountable organizational practices can be removed (Tomilova 2021). Additionally, researchers can analyze which governance measures are most effective in mitigating risks and how they can create actual business value for corporations. Furthermore, the effectiveness of organizational response strategies to accountability claims can be studied to provide guidelines for practitioners on how to mitigate the economic and reputational damage that often follows accountability demands. Considering the fact that typically multiple actors are involved in creating and providing ML systems, future work can investigate how distributed accountabilities are organized between different actors and how this influences their perceived accountability. In this vein, it can also be analyzed how different roles (e.g., management vs. developers) are affected by and deal with (distributed) accountability demands. Generating insights on how firms can implement and manage accountability measures is crucial to support practitioners in managing the potential negative consequences of ML systems.
Although there can be many reasons for algorithmic accountability claims, the technical design is often one of the main factors contributing to the adverse outcomes of ML systems. Researchers have already started to discuss, at a conceptual level, the interpretability requirements that are necessary to develop accountable ML systems (e.g., Felzmann et al. 2020; Kroll 2021). However, it has not yet been defined and evaluated how these requirements can be implemented so that different stakeholders can assess the business practices of organizations and practitioners. Taking a design-oriented lens, scholars can build and assess explainability mechanisms and interaction interfaces that fulfill the interpretability needs of distinct institutional or social accountability demands. Focusing on the controllability of ML systems, scholars can examine and propose accountability design principles for different levels of system autonomy. Moreover, additional work on the different sources of negative consequences of ML systems and the development of technical measures to address them is necessary. This can help to tackle the robustness, fairness, and privacy challenges that exist today. Another interesting area for future research is to analyze the development practices around the implementation of interpretability and controllability methods. Understanding the challenges of the technical implementation can help to develop better tools and methods for working toward accountability by design.

4 Conclusion

Overall, algorithmic accountability is much more than the mere question of who takes responsibility for the impacts of ML systems. Instead, it is about how institutions, organizations, and individuals can govern ML systems and how developers and providers of ML systems can fulfill their accountability obligations. While researchers have already started to propose measures and define requirements to achieve interpretable and controllable ML systems, their concrete implementation and the effects on the ecosystem around them have not yet gained much attention. By defining algorithmic accountability as a governance issue and introducing it from different perspectives, we provide a foundation for future research on the topic.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Adam M (2022) Accountability-based user interface design artifacts and their implications for user acceptance of AI-enabled services. In: European conference on information systems, Timisoara
Berente N, Gu B, Recker J, Santhanam R (2021) Managing artificial intelligence. MIS Q 45(3):1433–1450
Metcalf J, Moss E, Watkins EA, Singh R, Elish MC (2021) Algorithmic impact assessments and accountability: the co-construction of impacts. In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp 735–746. https://doi.org/10.1145/3442188.3445935
Raji ID, Smart A, White RN, Mitchell M, Gebru T, Hutchinson B, Smith-Loud J, Theron D, Barnes P (2020) Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the conference on fairness, accountability, and transparency, pp 33–44. https://doi.org/10.1145/3351095.3372873
Seppälä A, Birkstedt T, Mäntymäki M (2021) From ethical AI principles to governed AI. In: Proceedings of the international conference on information systems, Austin
Tomilova A (2021) Barriers to improving algorithmic accountability: an elaborated action design research. In: Proceedings of the Pacific Asia conference on information systems, Dubai
Metadata
Title: Algorithmic Accountability
Authors: David Horneber, Sven Laumer
Publication date: 24.05.2023
Publisher: Springer Fachmedien Wiesbaden
Published in: Business & Information Systems Engineering, Issue 6/2023
Print ISSN: 2363-7005 | Electronic ISSN: 1867-0202
DOI: https://doi.org/10.1007/s12599-023-00817-8
