Companies Fail at Responsible AI
- 15-12-2025
- Risk Management
- In the Spotlight
- Article
An international study reveals serious shortcomings in the responsible use of enterprise AI. Almost every company has experienced incidents with AI systems, and very few meet the necessary security standards. This has far-reaching consequences.
The number of AI applications is exploding. These tools provide valuable support to sales staff, but they also demand an increasing level of oversight.
Artificial intelligence has proven to be a double-edged sword for most companies. Following the boom in generative AI (Gen AI), agentic AI systems are now heralding the next phase of autonomous decision-making in business processes. AI agents make decisions independently and act autonomously to achieve goals. But what happens when they are let loose?
AI Projects Become a Legal Risk
On the one hand, companies recognize the achievable business benefits and are willing to implement Gen AI and Agentic AI in business processes. On the other hand, these projects can backfire enormously. With their black-box properties, generative and agentic AI models push advanced applications to the limits of explainability and bring additional risks and compliance challenges for companies. So the question when using AI is currently not whether something can go wrong, but how serious the consequences will be.
Around 95 percent of all top executives have had to deal with negative consequences from the use of AI in the past two years. The damage ranges from data breaches and inaccurate predictions to bias or non-compliance with legal regulations. This is reported in the “Responsible Enterprise AI in the Agentic Era” report by the Infosys Knowledge Institute, for which more than 1,500 executives in Germany, France, the UK, the US, and Australia provided information.
Reputational Damage Weighs Heavier Than Financial Losses
The high probability of an AI incident should also be a wake-up call for companies in view of the EU AI Act, which has been in force since August 2024 and will bring stricter regulations starting this August. In the future, fines of up to 35 million euros or seven percent of group-wide annual revenue could be imposed. The EU AI Act requires systematic risk management, comprehensive documentation, and reporting obligations for AI incidents. The problem areas most frequently identified in the study and the resulting consequences are:
- 95 percent of C-suite and executive management have experienced negative consequences when using enterprise AI.
- Almost three-quarters rated the damage as “significant.”
- Thirty-nine percent report “severe” or “extremely severe” incidents.
- The most common incidents are: data breaches (33 percent), system failures (33 percent), and inaccurate or harmful predictions (32 percent).
- Seventy-seven percent of the damage was financial loss.
- The average loss amounts to $800,000 over two years.
- Fifty-three percent suffered reputational damage and 46 percent faced legal consequences.
- Reputational damage is perceived as more threatening than financial damage.
Autonomous AI Systems Require Advanced RAI Strategies
AI agents exacerbate the legal situation. Around 86 percent of executives fear additional risks and compliance challenges when using agentic AI. Responsible AI (RAI) refers to the practice of developing and deploying enterprise AI in an ethical, secure, and transparent manner. An effective and proactive RAI approach is essential not only to minimize damage but also to enable business growth. As a result, RAI has become a critical business capability.
Although existing regulations are welcomed by most of those affected, and 78 percent even described RAI as a growth driver, only two percent of all companies surveyed meet the full standards and can be considered RAI leaders. Another 15 percent meet about three-quarters of the standards (followers), while the vast majority (83 percent) are beginners who apply RAI only in a fragmented manner.
Strategies of Successful Responsible AI Pioneers
RAI expenditures account for an average of 25 percent of total AI costs. Interestingly, at 21 percent, the share spent by RAI leaders is lower than that of followers and beginners. However, most executives believe that RAI funding should be increased by an average of 30 percent. This suggests underinvestment, despite the already significant share of the total AI budget.
The few RAI pioneers are showing how it's done. They report 39 percent lower costs per AI incident and 18 percent lower damage severity. The study proposes four specific steps to evolve responsible AI from a pure compliance function to a growth driver in the era of agentic AI.
From Compliance to Growth in Four Steps
This is a partly automated translation of a German article.