
Companies Fail at Responsible AI


An international study reveals serious shortcomings in the responsible use of enterprise AI. Almost every company has experienced incidents with AI systems, and very few meet the necessary security standards. This has far-reaching consequences.

The number of AI applications is exploding. These tools provide important support to sales staff, but they also demand an increasing level of oversight.


Artificial intelligence has proven to be a double-edged sword for most companies. Following the boom in generative AI (Gen AI), agentic AI systems now herald the next phase: autonomous decision-making in business processes. AI agents decide independently and act autonomously to achieve goals. But what happens when they are given free rein?

On the one hand, companies recognize the achievable business benefits and are willing to implement Gen AI and Agentic AI in business processes. On the other hand, these projects can backfire enormously. With their black-box properties, generative and agentic AI models push advanced applications to the limits of explainability and bring additional risks and compliance challenges for companies. So the question when using AI is currently not whether something can go wrong, but how serious the consequences will be.

Around 95 percent of all top executives have had to deal with negative consequences from the use of AI in the past two years. The damage ranges from data breaches and inaccurate predictions to bias or non-compliance with legal regulations. This is reported in the “Responsible Enterprise AI in the Agentic Era” report by the Infosys Knowledge Institute, for which more than 1,500 executives in Germany, France, the UK, the US, and Australia provided information.

Reputational Damage Weighs Heavier Than Financial Losses

The high probability of an AI incident should also be a wake-up call for companies in view of the EU AI Act, which has been in force since August 2024 and brings stricter obligations in stages, with further requirements applying from August 2025. Fines of up to 35 million euros or seven percent of group-wide annual revenue, whichever is higher, may be imposed. The EU AI Act requires systematic risk management, comprehensive documentation, and reporting obligations for AI incidents. The problem areas most frequently identified in the study and the resulting consequences are:
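The fine ceiling described above follows a simple rule: the higher of a fixed cap and a revenue share applies. A minimal sketch of that calculation (function name and example revenue figures are illustrative, not from the article):

```python
def max_eu_ai_act_fine(annual_revenue_eur: float) -> float:
    """Upper bound of the fine for the most serious EU AI Act violations:
    EUR 35 million or 7% of group-wide annual revenue, whichever is higher."""
    fixed_cap_eur = 35_000_000.0          # fixed ceiling: EUR 35 million
    revenue_share = annual_revenue_eur * 7 / 100   # 7 percent of revenue
    return max(fixed_cap_eur, revenue_share)

# A company with EUR 200 million revenue: 7% is EUR 14 million,
# so the fixed EUR 35 million cap is the binding ceiling.
print(max_eu_ai_act_fine(200e6))   # 35000000.0

# A company with EUR 1 billion revenue: 7% is EUR 70 million,
# which exceeds the fixed cap.
print(max_eu_ai_act_fine(1e9))     # 70000000.0
```

The "whichever is higher" construction means large groups cannot treat the fixed cap as a predictable cost of doing business.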

  • 95 percent of C-suite and executive management have experienced negative consequences when using enterprise AI.
  • Almost three-quarters rated the damage as “significant.”
  • 39 percent report “severe” or “extremely severe” incidents.
  • The most common incidents are data breaches (33 percent), system failures (33 percent), and inaccurate or harmful predictions (32 percent).
  • 77 percent of the damage involved financial loss.
  • The average loss amounts to $800,000 over two years.
  • 53 percent suffered reputational damage and 46 percent faced legal consequences.
  • Reputational damage is perceived as more threatening than financial damage.

Autonomous AI Systems Require Advanced RAI Strategies

AI agents exacerbate the legal situation. Around 86 percent of executives fear additional risks and compliance challenges when using agentic AI. Responsible AI (RAI) refers to the practice of developing and deploying enterprise AI in an ethical, secure, and transparent manner. An effective and proactive RAI approach is essential not only to minimize damage but also to enable business growth. As a result, RAI has become a critical business capability.

Although most of those affected welcome existing regulations, and 78 percent even described RAI as a growth driver, only two percent of the companies surveyed meet the full standards and can be considered RAI leaders. Another 15 percent meet about three-quarters of the standards (followers), while the vast majority (83 percent) are beginners who apply RAI only in a fragmented manner.

Strategies of Successful Responsible AI Pioneers

RAI expenditures account for an average of 25 percent of total AI costs. Interestingly, RAI leaders spend a smaller share, 21 percent, than followers and beginners do. Most executives nevertheless believe that RAI funding should be increased by an average of 30 percent, which suggests underinvestment despite the already significant share of the total AI budget.

The few RAI pioneers are showing how it's done. They report 39 percent lower costs per AI incident and 18 percent lower damage severity. The study proposes four specific steps to evolve responsible AI from a pure compliance function to a growth driver in the era of agentic AI.

From Compliance to Growth in Four Steps

  1. Learn from RAI leaders who have already experienced a wide range of AI incidents.
  2. Combine product and platform operating models to support collaboration, control, and risk management for responsible agentic AI development.
  3. Implement RAI guardrails in a platform with AI agent development tools, library, and secure data access environment to maximize value and reduce failures.
  4. Establish a proactive RAI office to identify risks, protect models, and provide advisory services.

This is a partly automated translation of a German article.

