
As adoption of artificial intelligence accelerates, organizations are also realizing that without strong governance, resilient data strategies, and a commitment to quality, AI can become a liability as easily as it can be a competitive advantage.
AvePoint recently unveiled the results of its annual survey, The State of AI in 2025: Go Beyond the Hype to Navigate Trust, Security, and Value, which revealed a striking disconnect between AI ambitions and execution. According to the findings, even as organizations race to deploy AI at scale, more than 75 percent of them have experienced AI-related security breaches, and security concerns are forcing deployment delays of up to 12 months.
AvePoint’s research also reveals how enterprises are maturing their AI strategies to build trust, drive value, and lead responsibly in the age of intelligent automation. This year’s research confirms what many organizations are beginning to realize: the real differentiator in AI success isn’t speed—it’s stewardship. The organizations seeing the greatest returns from AI are not those who adopted first, but those who governed best.
According to the research, organizations are experiencing implementation challenges that are stalling AI's progress, including:
- AI deployment delays averaging nearly six months, with some organizations facing rollouts stalled for up to 12 months due to data quality and security issues.
- Inaccurate AI output (68.7 percent) and data security concerns (68.5 percent) top the list of reasons organizations are slowing the rollout of generative AI assistants.
- 32.5 percent identify AI hallucinations as the most extreme threat from generative AI assistants.
- 64.2 percent report employees’ "lack of perceived value" as a major rollout barrier, underscoring the difficulty of clearly articulating the value AI creates and the need for stronger AI enablement programs.
A number of leading cybersecurity experts have provided their thoughts on the state of AI in cybersecurity today:
Diana Kelley, Chief Information Security Officer at Noma Security
"AI risks have rapidly moved from a watch list item to a front-line security concern, especially when it comes to data security and misuse. To manage this emerging threat landscape, security teams need a mature, continuous security approach, which includes blue team programs, starting with a full inventory of all AI systems, including agentic components as a baseline for governance and risk management.
"As vulnerabilities increase, the adoption of an AI Bill of Materials (AIBOM) is the foundation for effective supply chain security and AI vulnerability management. Robust red team and pre-deployment testing remain vital as does runtime monitoring and logging which round out the approach by providing the visibility to detect and in some cases even block, attacks during use."
Nicole Carignan, SVP of Security & AI Strategy, Field CISO at Darktrace
"Before organizations can think meaningfully about AI governance, they need to lay the groundwork with strong data science principles. That means understanding how data is sourced, structured, classified, and secured—because AI systems are only as reliable as the data they’re built on.
"Solid data foundations are essential to ensuring accuracy, accountability, and safety throughout the AI lifecycle.
"As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. However, there is no one-size-fits-all approach. Each organization must tailor its AI policies based on its unique risk profile, use cases, and regulatory requirements.
"That’s why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions.
"Effective AI governance requires deep cross-functional collaboration. Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation. In the absence of mature regulatory frameworks, industry collaboration is equally critical. Sharing successful governance models and operational insights will help raise the bar for secure AI adoption across sectors.
"The integration of AI into core business operations also has implications for the workforce. Security practitioners—and teams in legal, compliance, and risk—must upskill in AI technologies and data governance. Understanding system architectures, communication pathways, and agent behaviors will be essential to managing risk.
"As these systems evolve, so must governance strategies. Static policies won’t be enough, AI governance must be dynamic, real-time, and embedded from the start. Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly.
John Watters, CEO and Managing Partner of iCOUNTER
"Traditional security approaches of updating defenses to combat general threat tactics are no longer sufficient to protect sensitive information and systems. To effectively defend against AI-driven rapid developments in targeted attacks, organizations need more than mere actionable intelligence—they need AI-powered analysis of attack innovations and insights into their own specific weaknesses which can be exploited by external parties."
Randolph Barr, CISO at Cequence Security
"We’re seeing AI rapidly evolve from simple automation to deeply personalized, context-aware assistance—and it’s heading toward an Agentic AI future where tasks are orchestrated across domains with minimal human input.
"Before we even get to AI-specific risks, we have to get the fundamentals right. In the haste to bring AI to market quickly, engineering and product teams often cut corners to meet aggressive launch timelines. When that happens, basic security controls get skipped, and those shortcuts make their way into production.
"So, while organizations are absolutely starting to think about model protections, prompt injection, data leakage, and anomaly detection, those efforts mean little if you haven’t locked down identity, access, and configuration at a foundational level. Security needs to be part of the development lifecycle from day one, not simply an add-on at launch."
Ishpreet Singh, Chief Information Officer at Black Duck
"The rapid development of AI technologies, such as deepfakes, generative AI, and automated bots, enables malicious actors to create highly realistic and targeted false narratives at unprecedented scale and speed.
"These sophisticated disinformation campaigns can quickly influence public perception, distort market realities, and undermine organizational credibility, directly threatening brand value and long-term stakeholder trust."
Kris Bondi, CEO and Co-Founder of Mimoto
"Utilizing AI for the sake of using AI is destined to fail. Even if it gets fully implemented, if it isn't serving an established need, it will lose support when budgets are eventually cut or reappropriated. Any company considering utilizing AI should consider what problems or challenges they have where if AI is applied, will improve or solve the problem.
"Well training and monitored AI agents can help in the response of security threats. While AI agents have a limited scope in where they can be used effectively, their use will still help reduce the volume of potential security threats that a security team will need to address themselves. In theory, this would enable the security pro to have more time to analyze more complex threats."