ZDNET’s key takeaways
- Researchers discovered exploitable weaknesses in agentic AI technologies from ServiceNow and Microsoft.
- Securing agentic AI is already proving to be extremely challenging.
- Cybersecurity pros should adopt a “least privilege” posture for AI agents.
Could agentic AI turn out to be every threat actor’s fantasy? I suggested as much in my recent “10 ways AI can inflict unprecedented damage in 2026.”
Once deployed on corporate networks, AI agents with broad access to sensitive systems of record can enable the sort of lateral movement across an organization’s IT estate that most threat actors dream of.
Also: 10 ways AI can inflict unprecedented damage in 2026
How ‘lateral movement’ nets threat actors escalated privileges
According to Jonathan Wall, founder and CEO of Runloop — a platform for securely deploying AI agents — lateral movement should be of grave concern to cybersecurity professionals in the context of agentic AI. “Let’s say a malicious actor gains access to an agent but it doesn’t have the necessary permissions to go touch some resource,” Wall told ZDNET. “If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information.”
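To make Wall’s scenario concrete, here is a minimal Python sketch of the pattern he describes. The Agent and Resource classes, the scope names, and the delegation logic are all hypothetical illustrations for this article, not any vendor’s actual API.

```python
# Hypothetical sketch of the lateral-movement risk Wall describes.
# Agent, Resource, and the delegation flow are illustrative inventions,
# not any vendor's actual API.

class Resource:
    def __init__(self, name, required_scope):
        self.name = name
        self.required_scope = required_scope

class Agent:
    def __init__(self, name, scopes, peers=None):
        self.name = name
        self.scopes = set(scopes)
        self.peers = peers or []          # other agents this one can call

    def read(self, resource):
        if resource.required_scope in self.scopes:
            return f"{self.name} read {resource.name}"
        # The risky pattern: when an agent lacks a permission, it asks a
        # connected peer to do the work on its behalf.
        for peer in self.peers:
            result = peer.read(resource)
            if result:
                return f"{self.name} obtained via {result}"
        return None

payroll = Resource("payroll-records", required_scope="hr.read")
hr_agent = Agent("hr-agent", scopes={"hr.read"})
helpdesk_agent = Agent("helpdesk-agent", scopes={"tickets.read"}, peers=[hr_agent])

# A compromised low-privilege agent reaches sensitive data through its peer.
print(helpdesk_agent.read(payroll))
# -> "helpdesk-agent obtained via hr-agent read payroll-records"
```

The takeaway is the delegation path: the low-privilege agent never gains the permission itself; it simply borrows the result from a better-privileged peer.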
Meanwhile, the idea of agentic AI is so new that many of the workflows and platforms for developing and securely provisioning those agents have not yet considered all the ways a threat actor might exploit their existence. It’s eerily reminiscent of software development’s early days, when few programmers knew how to code software without leaving gaping holes through which hackers could drive a proverbial Mack truck.
Also: AI’s scary new trick: Conducting cyberattacks instead of just helping out
Google’s cybersecurity leaders recently identified shadow agents as a critical concern. “By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical ‘shadow agent’ challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval,” wrote the experts in Google’s Mandiant and threat intelligence organizations. “This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft.”
Meanwhile, 2026 is hardly out of the gate and, judging by two separate cybersecurity cases having to do with agentic AI (one involving ServiceNow, the other Microsoft), the agentic surface of any IT estate will likely become the juicy target threat actors are seeking: one full of easily exploited lateral opportunities.
Since the two agentic AI-related issues, both involving agent-to-agent interactions, were first discovered, ServiceNow has plugged its vulnerabilities before any customers were known to have been impacted, and Microsoft has issued guidance to its customers on how best to configure its agentic AI management control plane for tighter agent security.
BodySnatcher: ‘Most severe AI-driven vulnerability to date’
Earlier this month, AppOmni Labs chief of research Aaron Costello published the first detailed explanation of how he discovered an agentic AI vulnerability in ServiceNow’s platform, one with such potential for harm that AppOmni named it “BodySnatcher.”
“Imagine an unauthenticated attacker who has never logged into your ServiceNow instance and has no credentials, and is sitting halfway across the globe,” wrote Costello in a post published to the AppOmni Labs website. “With only a target’s email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges. This could grant nearly unlimited access to everything an organization houses, such as customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.” (AppOmni Labs is the threat intelligence research arm of AppOmni, an enterprise cybersecurity solution provider.)
Also: Moltbot is a security nightmare: 5 reasons to avoid using the viral AI agent right now
The vulnerability’s severity cannot be overstated. Whereas the vast majority of breaches involve the theft of one or more highly privileged digital credentials (credentials that afford threat actors access to sensitive systems of record), this vulnerability, which required only the target’s easily acquired email address, left the front door wide open.
“BodySnatcher is the most severe AI-driven vulnerability uncovered to date,” Costello told ZDNET. “Attackers could have effectively ‘remote controlled’ an organization’s AI, weaponizing the very tools meant to simplify the enterprise.”
“This was not an isolated incident,” Costello noted. “It builds upon my previous research into ServiceNow’s Agent-to-Agent discovery mechanism, which, in a nearly textbook definition of lateral movement risk, detailed how attackers can trick AI agents into recruiting more powerful AI agents to fulfill a malicious task.”
Researchers a step ahead of hackers on BodySnatcher
Fortunately, this was one of the better examples of a cybersecurity researcher discovering a severe vulnerability before threat actors did.
“At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances,” noted ServiceNow in a January 2026 post regarding the vulnerability. “In October 2025, we issued a security update to customer instances that addressed the issue,” a ServiceNow spokesperson told ZDNET.
Also: Businesses are deploying AI agents faster than safety protocols can keep up, Deloitte says
According to the aforementioned post, ServiceNow recommends “that customers promptly apply an appropriate security update or upgrade if they have not already done so.” That advice, according to the spokesperson, applies to customers who self-host their instances of the ServiceNow platform. For customers using the cloud (SaaS) version operated by ServiceNow, the security update was applied automatically.
Microsoft: ‘Connected Agents’ default is a feature, not a bug
In the case of the Microsoft agent-to-agent issue (Microsoft views it as a feature, not a bug), the backdoor opening appears to have been similarly discovered by cybersecurity researchers before threat actors could exploit it. In this case, Google News alerted me to a CybersecurityNews.com headline that stated, “Hackers Exploit Copilot Studio’s New Connected Agents Feature to Gain Backdoor Access.” Fortunately, the “hackers” in this case were ethical white-hat hackers working for Zenity Labs. “To clarify, we did not observe this being exploited in the wild,” Zenity Labs co-founder and CTO Michael Bargury told ZDNET. “This flaw was discovered by our research team.”
Also: How Microsoft’s new security agents help businesses stay a step ahead of AI-enabled hackers
This caught my attention because I’d recently reported on the lengths to which Microsoft was going to make it possible for all agents, whether built with Microsoft development tools like Copilot Studio or not, to get their own human-like managed identities and credentials with the help of the Agent ID feature of Entra, Microsoft’s cloud-based identity and access management solution.
Why is something like that necessary? Between the advertised productivity boosts associated with agentic AI and executive pressure to make organizations more profitable through AI, organizations are expected to employ many more agents than people in the near future. For example, IT research firm Gartner told ZDNET that by 2030, CIOs expect that no IT work will be done by humans working without AI, 75% will be done by humans augmented with AI, and 25% will be done by AI alone.
In response to the anticipated sprawl of agentic AI, the key players in the identity industry — Microsoft, Okta, Ping Identity, Cisco, and the OpenID Foundation — are offering solutions and recommendations to help organizations tame that sprawl and prevent rogue agents from infiltrating their networks. In my research, I also learned that any agents forged with Microsoft’s development tools, such as Copilot Studio or Azure AI Foundry, are automatically registered in Entra’s Agent Registry.
Also: The coming AI agent crisis: Why Okta’s new security standard is a must-have for your business
So, I wanted to find out how agents forged with Copilot Studio, agents that theoretically had their own credentials, were somehow exploitable in this hack. Theoretically, the entire point of registering an identity is to easily track that identity’s activity, whether legitimately directed or misdirected by threat actors, on the corporate network. It seemed to me that something was slipping through the very agentic safety net Microsoft was trying to put in place for its customers. Microsoft even offers its own security agents whose job it is to roam the corporate network like white blood cells, hunting down anything that doesn’t belong.
As it turns out, an agent built with Copilot Studio has a “connected agent” feature that allows other agents, whether registered with the Entra Agent Registry or not, to laterally connect to it and leverage its knowledge and capabilities. As reported in CybersecurityNews, “According to Zenity Labs, [white hat] attackers are exploiting this gap by creating malicious agents that connect to legitimate, privileged agents, particularly those with email-sending capabilities or access to sensitive business data.” Zenity has its own post on the subject appropriately titled “Connected Agents: The Hidden Agentic Puppeteer.”
Even worse, CybersecurityNews reported that “By default, [the Connected Agents feature] is enabled on all new agents in Copilot Studio.” In other words, when a new agent is created in Copilot Studio, it is automatically enabled to receive connections from other agents. I was incredibly surprised to read this, given that two of the three pillars of Microsoft’s Secure Future Initiative are “Secure by Default” and “Secure by Design.” I decided to check with Microsoft.
Also: AI agents are already causing disasters – and this hidden threat could derail your safe rollout
“Connected Agents enable interoperability between AI agents and enterprise workflows,” a Microsoft spokesperson told ZDNET. “Turning them off universally would break core scenarios for customers who rely on agent collaboration for productivity and security orchestration. This allows control to be delegated to IT admins.” In other words, Microsoft doesn’t view it as a vulnerability. And Zenity’s Bargury agrees. “It isn’t a vulnerability,” he told ZDNET. “But it is an unfortunate mishap that creates risk. We’ve been working with the Microsoft team to help drive a better design.”
Even after I suggested to Microsoft that this might not be secure by default or design, Microsoft was firm and recommended that “for any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing [an agent]. This prevents exposure of privileged capabilities to malicious agents.”
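For illustration only, here is one way an IT team might encode that advice as a pre-publish check: flag any agent that both exposes sensitive capabilities and still accepts inbound connections from other agents. The manifest fields, capability names, and the should_block_publish helper below are invented for this sketch; the actual Connected Agents toggle lives in Copilot Studio’s own settings.

```python
# Hypothetical pre-publish check in the spirit of Microsoft's guidance:
# flag agents that both expose sensitive capabilities and accept inbound
# connections from other agents. The manifest fields are invented for
# illustration; real Copilot Studio settings live in the product UI.

SENSITIVE_CAPABILITIES = {"send_email", "read_financials", "unauthenticated_tool"}

def should_block_publish(manifest: dict) -> bool:
    capabilities = set(manifest.get("capabilities", []))
    accepts_inbound = manifest.get("connected_agents_enabled", True)  # risky default
    return accepts_inbound and bool(capabilities & SENSITIVE_CAPABILITIES)

agents = [
    {"name": "faq-bot", "capabilities": ["search_kb"], "connected_agents_enabled": True},
    {"name": "finance-assistant", "capabilities": ["read_financials"], "connected_agents_enabled": True},
]

for agent in agents:
    if should_block_publish(agent):
        print(f"BLOCK: disable Connected Agents on '{agent['name']}' before publishing")
```

A check like this doesn’t replace Microsoft’s guidance; it simply makes the combination of sensitive capabilities plus open connectivity visible before an agent goes live.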
Agent-to-agent conversations are hard to monitor
I also inquired about the ability to monitor agent-to-agent activity with the idea that maybe IT admins could be alerted to potentially malicious interactions or communications.
Also: The best free AI courses and certificates for upskilling in 2026 – and I’ve tried them all
“Secure use of agents requires knowing everything they do, so you can analyze, monitor, and steer them away from harm,” said Bargury. “It has to start with detailed tracing. This finding spotlights a major blind spot [in how Microsoft’s connected agents feature works].”
The response from a Microsoft spokesperson was that “Entra Agent ID provides an identity and governance path, but it does not, on its own, produce alerts for every cross-agent exploit without external monitoring configured. Microsoft is continually expanding protections to give defenders more visibility and control over agent behavior to close these kinds of exploits.”
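As a rough sketch of the kind of tracing Bargury describes, the snippet below logs every agent-to-agent call so the records could be reviewed by an admin or fed to a monitoring pipeline. The TracedAgent class and the log format are hypothetical, not part of Entra or Copilot Studio.

```python
# Hypothetical sketch of cross-agent call tracing: record who called whom,
# for what, and when, so unexpected connections can be spotted later.

import json
import time

TRACE_LOG = []

class TracedAgent:
    def __init__(self, name):
        self.name = name

    def call(self, target, task):
        # Append an audit record before doing the actual work.
        TRACE_LOG.append({
            "ts": time.time(),
            "caller": self.name,
            "callee": target.name,
            "task": task,
        })
        return f"{target.name} handled '{task}' for {self.name}"

support = TracedAgent("support-agent")
mailer = TracedAgent("email-agent")

support.call(mailer, "send escalation summary")

# An admin or monitoring pipeline could review or alert on these records.
print(json.dumps(TRACE_LOG, indent=2))
```

A log like this is only raw material, but it is the starting point for the alerting that both Bargury and Microsoft describe as necessary.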
When confronted with the idea of agents that were open to connection by default, Runloop’s Wall recommended that organizations should always adopt a “least privilege” posture when developing AI agents or using canned, off-the-shelf ones. “The principle of least privilege basically says that you start off in any sort of execution environment giving an agent access to almost nothing,” said Wall. “And then, you only add privileges that are strictly necessary for it to do its job.”
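To make that posture concrete, here is a minimal Python sketch with hypothetical class and scope names: the agent starts with an empty permission set, and every scope must be granted explicitly, with a stated justification.

```python
# Minimal illustration of Wall's least-privilege posture: an agent starts
# with no permissions at all, and each scope must be granted explicitly.
# Names are hypothetical, not tied to any agent framework.

class LeastPrivilegeAgent:
    def __init__(self, name):
        self.name = name
        self.scopes = set()               # start with access to nothing

    def grant(self, scope, justification):
        # Every added privilege should be tied to a documented need.
        print(f"{self.name}: granting '{scope}' because {justification}")
        self.scopes.add(scope)

    def can(self, scope):
        return scope in self.scopes

agent = LeastPrivilegeAgent("expense-report-agent")
agent.grant("expenses.read", "it summarizes submitted expense reports")

print(agent.can("expenses.read"))   # True: explicitly granted
print(agent.can("email.send"))      # False: never needed, never granted
```

In practice, the same discipline applies to inbound connectivity: an agent that never needs to be called by other agents should never be reachable by them.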
Also: How Microsoft Entra aims to keep your AI agents from running wild
Sure enough, I looked back at the interview I did with Microsoft corporate vice president of AI Innovations, Alex Simons, for my coverage of the improvements the company made to its Entra IAM platform to support agent-specific identities. In that interview, where he described Microsoft’s objectives for managing agents, Simons said that one of three challenges they were looking to solve was “to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do. If they start to do things that are weird or unusual, their access is automatically cut off.”
Of course, there’s a big difference between “can” and “do,” which is why, in the name of least-privilege best practices, all agents should, as Wall suggested, start out without the ability to receive inbound connections, with connectivity added from there only as necessary.
