
Work and AI 2030

Challenges and Strategies for Tomorrow's Work

  • 2023
  • Book

About this Book

In ten years, we will regard working with artificial intelligence (AI) as more natural than using mobile phones today. 78 recognized experts from practice and research offer deep insights and outlooks into the influence of artificial intelligence on everyday working life in 2030 and explain, with practical tips, how to prepare for this development. The 41 concise contributions each cover a broad spectrum within the area under examination. Thanks to a standardized structure, they contain a summary of the status quo, concrete examples, future expectations, an overview of challenges and possible solutions, as well as practical tips. The volume opens with societal and ethical questions before discussing legal considerations for employers and HR managers as well as for the administration of justice. The remaining chapters examine the impact of artificial intelligence on the world of work in 2030 in the sectors of business, industry, mobility and logistics, medicine and pharma, as well as (continuing) education.

Table of Contents

  1. Frontmatter

  2. Social and ethical aspects of AI in the world of work

    1. Frontmatter

    2. The Ghost of German Angst: Are We Too Skeptical for AI Development?

The Invented Figure of the Fearful Technophobe and the Courage for Critical Optimism Kai Arne Gondlach, Michaela Regneri
      Abstract
Artificial intelligence (AI) is one of the most powerful future technologies of our time. Many AI debates about ethics and the pace of innovation rest on the assumption that fear of overpowering AI in general, and of mass unemployment in particular, paralyzes AI development. In this article, we argue that this “German Angst” is largely a construct at home in fictional portrayals of AI, but absent from social reality. People do not fear AI itself, but the power of those who could abuse it. Organizations and policymakers should therefore take the fundamental openness of workers into account and create framework conditions that bring advanced digitization and humans into an innovation-friendly, productive interplay.
    3. Practical Guide AI = All Together and Interdisciplinary

      Responsible Innovation for the Integration of Artificial Intelligence Applications into the World of Work Aljoscha Burchardt, Doris Aschenbrenner
      Abstract
      This article shows common problems in the development of AI systems (artificial intelligence), such as fundamental misunderstandings between the user side and the AI development side. These arise from deficits in interdisciplinary collaboration and communication and can lead to undesirable consequences for employees or society as a whole in the long term. As a strategy to avoid these and other misunderstandings and to implement AI in the sense of a responsible innovation process, a process model is proposed, which can serve as a practical guide for using AI in the world of work.
    4. Future Collaboration between Humans and AI

      “I Strive to Make You Feel Good,” Says My AI Colleague in 2030 Frank Fischer
      Abstract
By 2030, almost all humans and machines will work in teams. These teams will have new characteristics, for example regarding the roles AI and humans can play in the team structure. The objective of using AI in teams will be team success and sustaining the team’s performance. Furthermore, employees will typically have a personal AI supporting their well-being and performance.
    5. AI, Innovation and Start-ups

      High-Tech Start-Ups as Drivers of AI Ecosystems Annette Miller
      Abstract
AI innovations that stem from scientific research can have high economic potential. By using policy to support start-ups in exploiting innovations with great socio-economic and socio-ecological impact, Germany has the opportunity to translate its well-positioned scientific research into value creation. Even if only a small fraction of these ventures succeed in the market, the support is essential, as its effect radiates across the entire ecosystem: together with research institutions, deep-tech start-ups are drivers of innovation ecosystems. They not only attract established companies, investors, and top talent, but are also a source of new start-ups and promote the broad diffusion of AI.
    6. AI Demands Corporate Digital Responsibility (CDR)

      Aligning the Moral Compass for Workers in AI-Enabled Workplaces Saskia Dörr
      Abstract
      The use of artificial intelligence creates risks for equality, fairness, dignity, personal protection and privacy for employees and companies, which contradict today’s corporate values. Corporate Digital Responsibility (CDR) offers solutions to enable trust in corporate action when using AI in the workplace. The article argues that establishing an AI ethics or AI governance is not enough, but rather a framework is needed that leads to competitive advantages. To realign the “moral compass” in the “algorithmic new territory”, companies are advised to implement CDR in the organization.
    7. AI Ethics and Neuroethics Promote Relational AI Discourse

      A Combined Embodiment Approach Strengthens the Socially Integrated Regulation of AI Technology in the World of Work Ludwig Weh, Magdalena Soetebeer
      Abstract
Based on mathematical models of biological learning processes, computational algorithms form the basis of ‘machine learning’ or ‘artificial intelligence’ (AI). Their technological translation offers a variety of applications and promises immense transformative potential for sectors such as the economy, technology, and society. Approaches in AI ethics discuss the influence and desirability of such changes, for example for work processes in affected industries. However, a discourse on the social side effects of technology driven purely from a technological perspective neglects the life-science and human-science aspects of its origin, as well as its complex impact on psychological, social, and cultural systems. An embodiment approach from neuroethics can strengthen these reflexive elements in the AI debate and improve social discourse and agency regarding technology-induced transformations in the world of work.
  3. Legal aspects of AI in the world of work

    1. Frontmatter

    2. Digital Product Monitoring Obligations for Smart Products

      Opportunities and Risks of Digital Product Monitoring for IoT Products Volker Hartmann
      Abstract
      Intelligent products are becoming more networked and autonomous. Product liability and safety are of central importance for such products, as new technologies bring new risks, but also new possibilities of hazard control. The article deals with the question to what extent a digital product monitoring obligation for smart products can be expected in 2030 or to what extent such an obligation can already be derived from existing regulations or regulatory trends.
    3. The Use of AI-Based Speech Analysis in the Application Process

      Patricia Jares, Tobias Vogt
      Abstract
      One of the most common areas of application of AI in the world of work is likely to be the recruitment process. The use of AI-based language analysis can facilitate the tedious screening and filtering of suitable applications by staff (Wherever the grammatically masculine form is used for personal designations, persons of any gender identity are meant.) of the human resources department. This article shows the data protection and anti-discrimination law risks, but also possible solutions for a legally secure use and highlights why the use of AI can even offer an opportunity to reduce discrimination in the application process.
    4. Individual Labour Law Issues in the Use of AI

      Can Kömek
      Abstract
      AI systems will increasingly be used in the employment relationship in the foreseeable future and will take over the employer’s selection and consideration decisions. German labour law generally allows such use. It is up to the legislator and the judiciary to ensure compatibility with European data protection law. Employers must ensure that the respective AI system takes into account existing legal requirements and can transparently reconstruct the criteria of its decision in the event of a legal dispute. For employees, the use of AI does not only entail a danger, but also the opportunity for more objective and qualitatively better decisions.
    5. AI in the Company: Is the Employer or the AI as an e-Person Liable?

      Michael Zeck
      Abstract
      When using artificial intelligence (AI) in companies, there is no room for the construct of the electronic person (e-Person) as a liable legal entity. To avoid their liability, employers must exercise the utmost care when selecting and using AI. If the currently weak AI is replaced by a strong AI in the coming years, it is necessary to discuss new legal concepts early on.
    6. The Co-Determination Right of the Works Council According to § 87 Para. 1 No. 6 BetrVG in the Use of AI Systems in the Company

      An Overview of the Development of Case Law and Challenges in Practice Gerlind Wisskirchen, Marcel Heinen
      Abstract
The scope of the co-determination right pursuant to § 87 para. 1 no. 6 BetrVG (Works Constitution Act) has been interpreted very broadly by the courts in the past. These decisions make the introduction of modern AI systems in companies considerably more difficult. This article critically examines the issue and presents possible solutions for a practical and up-to-date co-determination right.
    7. Data Protection Assessment of Predictive Policing in the Employment Context

      Legal Basis and Its Limits Inka Knappertsbusch, Luise Kronenberger
      Abstract
      This article deals with predictive policing as a possibility to create a forecast regarding the probability of committing a crime or breaching duties by a certain employee. Based on the knowledge gained in this way, the employer can take measures that are suitable to prevent or at least minimise the realisation of the predicted risk. However, it is particularly necessary to examine the legal basis on which the employer can rely when using predictive policing. This article examines the general clause of § 26 para. 1 sentence 1 BDSG (German Data Protection Act) and consent pursuant to § 26 para. 2 BDSG as possible legal bases.
    8. Legal Requirements for AI Decisions in Administration and Justice

      Johannes Schmees, Stephan Dreyer
      Abstract
      In view of increasing social complexity and the associated, necessary modernisation and digitalisation of administration and justice, an increased use of artificial intelligence (AI) systems is being considered. Especially in this area, there are distinct limiting requirements or even hurdles for the implementation of AI systems. The article identifies these, describes further challenges and solutions, and ventures a cautious look into future developments of the use of AI by the state.
Title
Work and AI 2030
Edited by
Inka Knappertsbusch
Kai Gondlach
Copyright Year
2023
Electronic ISBN
978-3-658-40232-7
Print ISBN
978-3-658-40231-0
DOI
https://doi.org/10.1007/978-3-658-40232-7

