2023 | Book

Responsible Artificial Intelligence

Challenges for Sustainable Management

About this book

Artificial intelligence and social responsibility: two topics at the top of the business agenda.

This book discusses, in theory and practice, how the two topics influence each other. In addition to perspectives from the current, often controversial scientific debate, it presents case studies from companies dealing with the specific challenges of artificial intelligence.

Particular emphasis is placed on the opportunities that artificial intelligence (AI) offers companies from different industries. The book shows how dealing with the tension between AI and the new challenges of corporate social responsibility creates both strategic and innovation opportunities. It highlights the active involvement of stakeholders in the design process, which is meant to build trust among customers and the public and thus contribute to the innovation and acceptance of artificial intelligence.

The book is aimed at researchers and practitioners in the fields of corporate social responsibility as well as artificial intelligence and digitalization.

The chapter "Exploring AI with purpose" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of Contents

Frontmatter
Artificial Intelligence: Management Challenges and Responsibility
Abstract
AI will have a significant impact on all areas of our lives in the future and already influences numerous areas of daily life today. AI presents new challenges not only for data scientists but also for managers at all levels. On the one hand, there are new requirements that companies have to meet in the competitive environment; on the other hand, there are a number of new questions associated with the use of AI. The development of use cases and the creation and design of data-based ecosystems are key challenges for future competition in every industry. The forms of communication and interaction within and between organizations will change significantly and therefore require a timely examination of the consequences and requirements of AI.
Reinhard Altenburger
Artificial Intelligence: Companion to a New Human “Measure”?
Abstract
Current articles on artificial intelligence present various images of man and society, from the total surveillance of society and the economy to the further development of human thinking in the form of new creativity, supported by the computing power of artificial intelligence. The legitimate question arises: Is artificial intelligence an opportunity for human liberation or a new golden cage that limits or even restricts human action? New digital products, such as the metaverse, are merging the real and digital worlds and deepening the bonds between humans and technology. Today’s decisions on shaping the relationship between man and machine will also be relevant for our children and grandchildren. This makes it all the more important to discuss the relationship between man and machine in the broader triangle of “man-machine-economy.” It is often the economic fields of application that influence the further development of technologies and their acceptance in society.
René Schmidpeter, Christophe Funk
AI Governance for a Prosperous Future
Abstract
Artificial intelligence is humanity’s biggest invention and the quintessence of the fourth industrial revolution. It is the result of our age-old dream of having a loyal, yet very capable and obedient servant, an equal, at times even a superior intelligence that works, protects and inspires, and that we still control to secure and advance our own welfare.
And therein lies the seed of contention. Something that is superior to us ultimately cannot be controlled by us but could rather perceive us as its resource. That would be a truly dystopian future from a human perspective. Even though this future might be decades or a century away, with AIs still in their infancy, they have nevertheless already demonstrated their transformative power, changing our workplaces, our cars and our homes, selecting our partners and shaping our perception of reality.
And as any tool is also a weapon, we must ensure that AI becomes more tool than weapon and works for humanity and not the other way round.
Corporate AI governance is key to safeguarding the transition to a society in which AI is omnipresent and a blessing, not a curse. It must ensure that a company’s intelligized products and services behave in an ethically responsible manner. It must consider the well-being of its employees, customers and business partners wherever AI is deployed.
This is a very tall order, all the more so as with AI we are venturing into the unknown. In June 2022, Google suspended one of its software developers who claimed that one of the company’s most advanced AIs, called LaMDA, had developed consciousness (https://www.gizchina.com/2022/06/14/google-employee-suspended-after-saying-that-ai-has-become-conscious/). AI is a highly sensitive topic.
Therefore, transparency and well-structured AI governance are imperative to building trust and ensuring that we do not provoke a major backlash in this domain by being careless. Such a backlash could cost us dearly, as AI is the key to a better future, one with fewer diseases, less suffering and more wealth for all people on the planet.
This article reflects on the many aspects of AI governance and proposes a way to structure it methodically so that it can be applied to work processes, products and services, always considering the dichotomy between the benefits and risks of AI.
Alexander Vocelka
Governance of Collaborative AI Development Strategies
Abstract
The chapter presents a structured overview of inter-organisational, collaborative forms of AI development. Rising competitive pressure to adopt AI pushes companies to address common barriers in AI development, yet challenges such as the lack of data sets extensive enough to train one's own AI model, or restricted access to human resources, can often hardly be solved by a single organisation. This contribution therefore suggests that companies engage in collaborative forms of AI development and jointly develop suitable solutions. To this end, the contribution is structured along common AI lifecycle phases. It discusses the opportunities and risks of collaborative AI development at each development stage before presenting the resulting governance tasks. In doing so, it offers scholars and practitioners alike a structured overview for practice and contributes to closing a research gap in academia. The chapter closes with implications for research and practice and an outlook on avenues for further research.
Sabine Wiesmüller, Mathias Bauer
Responsible AI Adoption Through Private-Sector Governance
Abstract
This contribution examines responsible artificial intelligence (AI) adoption in organisations from a private-sector AI governance perspective. As an increasing number of organisations adopt AI, society is exposed to and interacts with these technologies more frequently. Consequently, companies are confronted with society’s demand to integrate ethical reflection and the perspectives of diverse stakeholders into their decision-making processes, and the need for responsible AI adoption rises with it. Yet neither existing innovation processes nor AI development models address iterative ethical reflection across the adoption and development phases of the AI lifecycle from a management perspective. To contribute to filling this research gap, this chapter first highlights the need for, and the current lack of, systematically integrated ethical reflection in AI adoption processes. Second, it proposes a governance model as a starting point for developing an instrument for responsible AI adoption in organisations, supporting corporate social responsibility in this regard.
Sabine Wiesmüller, Nele Fischer, Wenzel Mehnert, Sabine Ammon
Mastering Trustful Artificial Intelligence
Abstract
As a counter-thesis to the naive general narrative that artificial intelligence (AI) is a hyped super-technology which can solve all problems and even surpass human intelligence, this article discusses five essential problem areas associated with AI technology: modeling ability (how do we derive models of the real world from data, and how do we create a model without prejudices and errors?); verifiability (how do we verify AI algorithms?); explainability (how can we understand the decision-making process of AI systems?); ethics (how do we guarantee compliance with ethical principles and values?); and finally, responsibility (who is responsible for the decisions made by an AI system?). We also discuss fundamental threat scenarios in the context of our information society, as well as the limits of AI technology compared with human intelligence. The article highlights that the development of AI technology, as well as related policies and regulations, must be organized to ensure its socially acceptable use and rule out any improper use. Furthermore, the article provides a philosophical discussion of the limits of AI and the diversity of life, and shows that we bear the ultimate responsibility for what machines do. The paper concludes by arguing that even the most sophisticated machine will probably never be able to match humans in their multi-dimensionality of cognition, emotion, and physicality and in their sensual perception of the world.
Helmut Leopold
Technology Serves People: Democratising Analytics and AI in the BMW Production System
Abstract
Individualisation and an increasing number of variants characterise the production of premium vehicles in particular. This implies an increase in complexity in manufacturing. The principles of lean production form the basis for the continuous improvement of processes. Further optimisation of production can be achieved with data analytics and artificial intelligence (AI) methods.
These innovations raise the issue of corporate social responsibility (CSR). This article describes the BMW Group’s approach to CSR and the sustainable development of digitalisation in production. In addition to the technical aspects of data analytics and AI, the organisational implications are highlighted. For the BMW Group, the claim that ‘technology serves people’ means that production employees must understand the quality figures in their area and be able to carry out a root cause analysis independently in the event of an error. AI systems must be designed to be intuitive so that employees can tailor them to their specific application in self-service.
The BMW Group places people—in the production system, especially the direct production employees—at the centre. Data analytics and AI must contribute to making work in the BMW production system more pleasant and even more attractive. The goal is a strength-based division of labour between humans and IT systems.
Matthias Schindler, Frederik Schmihing
Sustainability and Artificial Intelligence in the Context of a Corporate Startup Program
Abstract
“If we do not work with startups, we will have no future” is the guiding idea behind the startup program that Deutsche Telekom developed in 2017.
Frank Barz, Hans Elstner, Benedict Ilg

Open Access

Exploring AI with Purpose
Abstract
Never get complacent: Developing AI solutions doesn’t just take expertise. It also means fostering an intrapreneurial work culture while keeping in mind the greater good our work serves. That’s what we do at the Siemens AI Lab.
Benno Blumoser
Developing Responsible AI Business Model
Abstract
In the age of mobile apps, the Internet of Things, and connected devices, we are on track to create more data by 2025 than was generated cumulatively during 2011–2020. Data is one of the most precious resources of our time, and its importance cannot be underestimated (Antonio Neri, March 2020). Building and developing data, artificial intelligence, and machine learning businesses on Responsible Artificial Intelligence (AI) business models is critical not just to enable more sustainable businesses but also to be valued by stakeholders at large. This chapter covers the approach towards Responsible AI business models.
Sundaraparipurnan Narayanan
ESG Fingerprint: How Big Data and Artificial Intelligence Can Support Investors, Companies, and Stakeholders?
Abstract
Current research is investigating the extent to which measurements of corporate sustainability through environmental, social, and governance (ESG) controversies have an impact on a company’s valuation. Investors, stakeholders, and companies are already using the generated ESG data and ratings to inform their investment or strategic decisions in companies. On the other hand, it is apparent that measurements are often based on static indicators collected annually. Furthermore, analysis has shown that mainstream ESG ratings lack a consistent ESG framework. Similarly, ESG rating indicators are often predefined and do not provide users with sufficient transparency to integrate them into their daily business processes. This is where this chapter comes in: it develops an ESG taxonomy based on historical ESG events from which risk patterns, the so-called ESG fingerprint, are automatically extracted. These help to reduce complexity and enable the design of artificial intelligence-based ESG information systems that map the risk management process across its phases.
Pajam Hassan, Frank Passing, Jorge Marx Gómez
It’s Only a Bot! How Adversarial Chatbots can be a Vehicle to Teach Responsible AI
Abstract
We are currently witnessing an ever-growing entanglement of intelligent technology with people in their everyday lives, creating intersections with ethics, trust, and responsibility. Understanding, implementing, and designing human interactions with these technologies is central to many advanced uses of intelligent and distributed systems and is related to contested concepts, such as various forms of agency, shared decision-making, and situational awareness. Numerous guidelines have been proposed to outline points of concern when building ethically acceptable artificial intelligence (AI) systems. However, these guidelines are usually presented as general policies, and how we can teach computer science students the needed critical and reflective thinking on the social implications of future intelligent technologies is not obvious. This chapter presents how we used adversarial chatbots to expose computer science students to the importance of ethics and responsible design of AI technologies. We focus on the pedagogical goals, strategy, and course layout and reflect on how this can serve as a blueprint for other educators in broader responsible innovation contexts, e.g., non-chat AI technologies, robotics, and other human-computer interaction (HCI) themes.
Astrid Weiss, Rafael Vrecar, Joanna Zamiechowska, Peter Purgathofer
Concerted Actions to Integrate Corporate Social Responsibility with AI in Business: Two Recommendations on Leadership and Public Policy
Abstract
Businesses are increasingly adopting AI solutions, while governments, investors and consumers increasingly focus on businesses’ accountability for the environmental and social impact of their activities. To address this challenge, corporate social responsibility should be integrated with AI in business by design and by default. This chapter attempts to contribute to this goal by providing two recommendations addressing leadership and public policy. Firstly, leaders can adopt a three-level mindset framework that embeds ethical considerations and the Sustainable Development Goals as a benchmark for impact assessments across the whole lifecycle of AI. Secondly, AI regulation and policy harmonisation can facilitate the adoption of such a framework by businesses and, consequently, the maximisation of the positive externalities of AI in business. The two recommendations are contextualised with insights from a dialogue with four projects in Latin America using AI for the Sustainable Development Goals.
Francesca Mazzi
AI and Leadership: Automation and the Change of Management Tasks and Processes
Abstract
Until now, executives have mainly focused on the management of “human intelligence” (HI versus AI) in the company. However, executives increasingly need to shift towards automating processes and routines using AI. A step-by-step approach includes identifying challenges and potential gains in effectiveness and efficiency within the company and searching for solutions; introducing AI solutions with a sustainable impact on employees’ daily work and the flow of business processes; and developing the employees and the organisation further as the use of AI within the organisation progresses. These extensive and often far-reaching changes call for new management skills and know-how, both professionally and personally. A case study in the area of continuous environment analysis for companies illustrates a concrete application, showing that AI and human intelligence complement each other very well and open up new possibilities for the effectiveness and efficiency of leadership, which should ultimately always be geared towards maintaining and creating competitive advantages and future viability.
Isabell Claus, Matthias Szupories
Achieving CSR with Artificially Intelligent Nudging
Abstract
No longer limited to the factory hall, automation and digitization increasingly change, complement, and replace the human workplace, now also in the sphere of knowledge work. Technology offers the possibility of creating economically rational, autonomously acting software: the machina economica. This complements human beings, who are far from being a rational homo economicus and whose behavior is biased and prone to errors, including behavior that lacks responsibility and sustainability. Insights from behavioral economics suggest that in the modern workplace, humans who team up with a variety of digital assistants can improve their decision-making to achieve more corporate social responsibility. Equipped with artificial intelligence (AI), the machina economica can nudge human behavior towards more desirable outcomes. Following the idea of augmented human-centered management (AHCM), this chapter outlines the underlying mechanisms, opportunities, and threats of AI-based digital nudging.
Dirk Nicolas Wagner
Metadata
Title
Responsible Artificial Intelligence
Edited by
René Schmidpeter
Reinhard Altenburger
Copyright Year
2023
Electronic ISBN
978-3-031-09245-9
Print ISBN
978-3-031-09244-2
DOI
https://doi.org/10.1007/978-3-031-09245-9
