One in two companies halt AI projects: CoreView puts the global figure at 51 percent. Studies show why security, governance, and data issues are slowing down AI initiatives.
One in Two Companies Halt AI Projects Due to Security and Governance Concerns
One in two companies halt AI projects: that is the short version of a development that has already become practical reality for many IT and security teams. CoreView reports that 51 percent of surveyed organizations worldwide and 46 percent in Germany have rolled back AI-driven changes because of security or governance concerns. This refers to changes in live production environments, not a blanket end to all AI initiatives. What makes the report relevant is that it shows AI projects are judged not only by their promised benefits, but also by controllability, permissions, approvals, and traceability.
The survey is based on responses from 500 IT and security leaders at organizations with more than 1,000 Microsoft 365 users each. CoreView also references tenant data from environments covering more than 1.6 million users in total. In the same context, the report provides additional figures that help explain why AI-driven changes are being stopped or reversed. Eighty-two percent of respondents describe platform operations as a significant operational burden. Forty-three percent report delayed or failed audits due to slow, incomplete, or manual reporting processes. There is also management resistance to AI adoption where security concerns remain unresolved and control mechanisms are missing.
Why One in Two Companies Halt AI Projects
According to CoreView, three operational questions sit at the center of the issue. In the key takeaways from the 2026 State of AI in Microsoft 365 report, the company lays them out clearly: Who reviews AI-driven changes before they affect production processes? How far do the permissions of the systems preparing or executing those changes extend? And can individual actions be clearly traced afterward and rolled back if necessary? When companies cannot answer these questions reliably, the governance burden increases. In environments with many identities, permissions, policies, and configuration changes, AI becomes a governance and operating-model issue, not just an automation issue.
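What answering these three questions can look like in practice is easiest to show with a small example. The following Python sketch is purely illustrative: every name in it (ProposedChange, AuditLog, ALLOWED_SCOPES, apply_change) is invented for this article and does not reference CoreView, Microsoft 365, or any real product API. It shows one possible shape of a gate that checks an AI-proposed change against a permitted scope, requires a named human approver, and logs every decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical permission scopes the AI agent is allowed to touch.
ALLOWED_SCOPES = {"mailbox.settings", "teams.membership"}

@dataclass
class ProposedChange:
    actor: str          # which system or agent prepared the change
    scope: str          # the permission scope the change would touch
    description: str    # human-readable summary for the reviewer
    approved_by: str | None = None

@dataclass
class AuditLog:
    entries: list[dict] = field(default_factory=list)

    def record(self, change: ProposedChange, status: str) -> None:
        # Every decision is logged so individual actions stay traceable.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "actor": change.actor,
            "scope": change.scope,
            "status": status,
            "approved_by": change.approved_by,
        })

def apply_change(change: ProposedChange, log: AuditLog) -> bool:
    # Gate 1: does the change stay inside the agent's permission scope?
    if change.scope not in ALLOWED_SCOPES:
        log.record(change, "rejected: outside permitted scope")
        return False
    # Gate 2: has a named human reviewed it before it hits production?
    if change.approved_by is None:
        log.record(change, "held: awaiting human review")
        return False
    log.record(change, "applied")
    return True
```

In this sketch, a change only reaches production if it passes both gates, and every decision, including rejections, lands in the audit log. The pattern itself is generic; the governance work lies in defining the scopes and the approver roles.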
Research from S&P Global shows that this pattern is not limited to a single report. It states that the share of companies abandoning most of their AI initiatives before production rose from 17 to 42 percent within a year. On average, 46 percent of projects are discarded between proof of concept and broad deployment. According to the study, organizations with lower abandonment rates are more likely to prioritize projects based on compliance, risk, and data availability. That puts the same point front and center as the CoreView report: whether an AI project can be deployed productively and create measurable value depends not only on model performance or speed, but on data maturity, governance, and operational control.
RAND describes additional structural causes in its own analysis of failed AI projects. The study highlights unclear problem definitions, missing or unsuitable data, too much focus on the technology rather than the actual use case, inadequate infrastructure for data management and deployment, and attempts to apply AI to problems that the chosen approach can only solve to a limited extent. For companies, this points to a recurring pattern:
AI projects are often halted where security, data quality, processes, and responsibilities were not planned together with the use case.
One in Two Companies Halt AI Projects Where Controllability Is Missing
The current trend does not point to fading interest in AI. CoreView also reports that 70 percent of IT decision-makers still consider AI-driven administration valuable. The dividing line therefore runs not between adoption and rejection, but between uncontrolled and controlled adoption. Companies want to use AI, but they slow down where the impact on permissions, configurations, content, or security-relevant processes is not sufficiently safeguarded.
This dividing line matters in day-to-day operations. AI systems do not touch just one isolated process inside a company. They analyze content, prioritize information, support administrative decisions, generate recommendations, or trigger further steps autonomously. That shifts responsibility away from the question of whether a model works in principle and toward the question of how its outputs are handled operationally. Without clear ownership, defined approvals, logged changes, and traceable rollback mechanisms, an efficiency project quickly becomes a governance issue.
The Most Common Reasons Behind Halted AI Projects
Five causes emerge especially clearly from the available reports.
- First, in many cases there is no precisely defined use case with a traceable business benefit.
- Second, the data foundation is often incomplete, inconsistent, or not sufficiently controlled for productive use.
- Third, permissions, human oversight, and technical approvals are not always planned with the same priority as the AI function itself.
- Fourth, problems emerge in live operations when reporting, auditability, and reviewability do not keep pace with the degree of automation.
- Fifth, projects are more likely to stall where companies apply AI to processes that are not yet standardized enough organizationally.
This last point in particular explains why AI projects in companies are often not terminated outright, but paused, scaled back, or re-scoped. In many cases, it is not the technology alone that fails, but its integration into real operating processes. As soon as a pilot moves to production scale, the requirements for control, documentation, risk management, and accountability increase significantly. That is where it becomes clear whether AI truly functions as a productivity tool or whether it creates additional overhead for security, compliance, and operations.
What Companies Can Do to Ensure AI Projects Deliver Value
From the problems described in the studies, several measures can be derived for expanding productive AI use. AI projects should only move into production once the use case, data foundation, and success criteria have been clearly defined in advance. Equally important are a tightly scoped rights and roles model, binding approval processes, complete logging, and the ability to trace and roll back changes later in a controlled way. These measures address the very issues repeatedly identified in the reports: unclear responsibilities, overly broad permissions, lack of transparency, and insufficient reviewability.
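What "traceable and reversible" can mean concretely is again best shown with a small, purely hypothetical sketch. All names here (settings, apply_setting, rollback, change_history) are invented for illustration and do not correspond to any real admin API; the point is only the pattern of recording the prior state before every write, so that each change stays attributable and can be undone through the same logged path.

```python
from datetime import datetime, timezone

# Hypothetical configuration store that an AI assistant might modify.
settings = {"external_sharing": "disabled", "retention_days": 365}

# Append-only history: the basis for both auditability and rollback.
change_history: list[dict] = []

def apply_setting(key: str, new_value, changed_by: str) -> None:
    # Record the prior value before writing, so every change remains
    # individually attributable and reversible.
    change_history.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "key": key,
        "old": settings[key],
        "new": new_value,
        "changed_by": changed_by,
    })
    settings[key] = new_value

def rollback(entry_index: int, rolled_back_by: str) -> None:
    # Restore the recorded prior state via the same code path, so the
    # rollback itself is logged and the audit trail stays complete.
    entry = change_history[entry_index]
    apply_setting(entry["key"], entry["old"], changed_by=rolled_back_by)

apply_setting("external_sharing", "enabled", changed_by="ai-assistant")
rollback(0, rolled_back_by="it-admin")  # "external_sharing" is "disabled" again
```

Because the rollback reuses the normal write path, undoing a change produces its own history entry rather than erasing the original one, which is exactly the kind of reviewability the reports describe as missing.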
At the organizational level, a formal AI governance model is a useful foundation. ISO/IEC 42001 sets out the requirements for an AI management system and provides a framework for responsibilities, policies, controls, monitoring, and continual improvement. ISO/IEC 23894 complements this with structured guidance on AI risk management and helps organizations integrate AI-related risks into existing governance and control processes. ISO/IEC 38507 is aimed at boards and senior management and describes how AI use can be governed so that it remains effective, efficient, and acceptable within the organization. For operational implementation, the NIST AI Risk Management Framework provides a practical model for identifying, assessing, managing, and continuously reviewing risks across the lifecycle of AI systems.
Taken together, these approaches help organizations build more resilient AI projects: with clear accountability, documented decisions, traceable controls, and a risk picture that is not created only after rollout. For companies, that means treating governance not as a downstream compliance task, but as a prerequisite for productive and reliable AI use.