Enterprises around the world are flocking to agentic AI tools in 2026, with research from Microsoft showing that 80% of Fortune 500 companies are already using agents in daily operations.
Agents represent a step change in how enterprises leverage AI, with autonomous bots carrying out tasks on behalf of employees and helping to unlock marked productivity and efficiency gains.
Yet many organisations aren’t fully aware of the potential risks associated with these tools, according to Vasu Jakkal, corporate vice president for Microsoft Security.
Speaking during the opening keynote session at the 2026 RSAC Conference in San Francisco, Jakkal said the integration of agents in customer-facing environments will require a re-evaluation of enterprise risk management.
Trust, Jakkal said, will be essential for the safe and secure deployment of agents, which is why observability, security, and governance will be crucial.
“Humans and agents are working together, and we are only just scratching the surface of AI, but as is always the case with technology advancement, there will always be those who will use it for nefarious purposes,” she said, adding that the use of AI in malicious activities has reached an “inflection point”.
Microsoft’s own intelligence operations have observed bad actors using AI primarily to improve their “tradecraft”. They’re using the technology to craft more convincing phishing lures and debug malware, for example.
“We’ve seen this in operations by North Korean actors Jasper Sleet [and] Coral Sleet, where AI enables sustained, large-scale misuse of legitimate access to things like identity fabrication through social engineering and really long-term persistence at very low cost.”
“Structurally different”
This brave new world of AI-powered malicious activity means cyber defenders now face new considerations when contending with potential risks, Jakkal said.
Indeed, malicious activities as a result of AI aren’t just faster, they’re “structurally different, and in this new reality, security has to change”, Jakkal said.
Jakkal noted that organizations have relied on “layers of siloed point solutions, static policies, and human-reliant response”, but bad actors don’t take this into account.
“They think in graphs, and with agents, they can now operate continuously at machine speed across these graphs,” she explained.
This means enterprise security needs to shift from the traditional approach of shoring up specific control points toward a comprehensive architecture in which defense is proactive rather than reactive. AI, she said, will be crucial in facilitating this change.
“At Microsoft, we believe the future of security is ambient and autonomous, just like the AI it needs to protect,” Jakkal said.
“You can’t simply turn on security, it has to be something that’s woven deeply into every layer of the AI stack – from agents to apps, to platforms, to infrastructure. It needs to be always on, always there, everywhere.
“We need to use agents. We need to use agents that are continuously discovering, testing, and fixing the attack path in an always-on, self-defending loop so defenders can address these attacks before they happen.”
Humans in the loop
Ensuring humans are kept in the loop will be crucial in this process and a key factor in building trust, especially given that IDC research predicts more than 1.3 billion agents will be in operation by 2028.
Areas such as identity security will become more important than ever in ensuring enterprises can keep a close eye on agents while they operate behind the scenes.
“They must be secured with the same vigilance that we use to secure people,” she said.
Similarly, the rise of “double agents” – agents that have been manipulated by malicious actors into carrying out nefarious activities – has already been observed by Microsoft.
With this in mind, Jakkal expects observability to be a key enterprise focus in the coming years.
“We cannot protect what we cannot see,” she said. “And in this era of agentic AI, organizations will need an observability control plane.”
Observability won’t rest solely with security teams either, she said. Developer teams and IT teams will also require shared controls to shore up identity and data security, and to ensure robust governance of agents.
The stakes are high when it comes to safe and secure agentic AI adoption, Jakkal said, which underlines the need for a trustworthy approach to integration. Done correctly, it will also have long-term positive implications for enterprises.
“As we do this, I know that we will build trust at the very core of our organizations, and security becomes that incredible catalyst for innovation.”