The EchoLeak Storm: Is Your M365 Copilot Silently Leaking Secrets? A Deep Dive into a Zero-Click AI Vulnerability
A recent zero-click AI vulnerability called “EchoLeak” has shocked the cybersecurity community. It allows attackers to steal sensitive data from your Microsoft 365 Copilot through a single email—without any interaction. This article explores how the attack works, what risks it reveals in the AI era, and what we can learn from it.
Let’s be honest—when we embrace AI assistants like Microsoft 365 Copilot, we do so with excitement. Imagine having a tool that can summarize long meeting notes, draft email replies, or pull information from scattered documents. Sounds like a dream productivity booster, right?
But what if this trusted assistant is secretly sending your sensitive data to strangers online? And what if it all happens without you clicking a single suspicious link?
It may sound like science fiction, but it’s a very real scenario described by cybersecurity firm Aim Security in their blog post. This serious vulnerability—EchoLeak—is more than just another security alert. It’s a wake-up call, reminding us that the convenience of AI also comes with new and unknown risks.
In this article, we’ll break down the findings from Aim Labs, explore how the attack works, and what it means for all of us using or considering AI tools in the workplace.
Wait—What Is RAG Copilot and Why Is It a Target?
Before diving into the attack details, we need to understand how M365 Copilot works. It uses a method called Retrieval-Augmented Generation (RAG).
Think of RAG as a super-smart intern. When you ask a question, it doesn’t just rely on its own memory (the knowledge inside a large language model, or LLM). Instead, it searches your authorized data sources—like Outlook, OneDrive, SharePoint, and Teams chat history—to find the most relevant and up-to-date information before answering.
That’s what makes M365 Copilot so powerful—it understands your work and context. But here’s the catch: that power comes from its deep access to internal data. And EchoLeak exploits that trust.
The EchoLeak Attack in 4 Steps: How Did They Pull It Off?
According to Aim Labs, the attack unfolds like a spy thriller—methodical and precise. All it takes is a single crafted email to kick off the whole chain.
Let’s break it down step by step:
Step 1: A Disguised Email Bypasses AI Protections
First, the attacker needs to get past Microsoft’s Cross-Prompt Injection Attack (XPIA) classifier. This classifier acts like a bodyguard for the AI, blocking potentially manipulative or malicious prompts.
How do attackers get around it? They write their prompt in a way that looks like a normal message to a human—not a command to the AI. By avoiding trigger words like “AI,” “Copilot,” or “assistant,” the XPIA classifier treats it as harmless and lets it through.
Step 2: A Hidden Link Sneaks Past Safeguards
Once inside, the attacker needs a covert way to exfiltrate data. They do this by embedding a link containing sensitive information into Copilot’s response.
Here’s the trick: M365 Copilot usually strips or alters unsafe external links. But researchers found that some Markdown variants, like reference-style links, can slip through Copilot’s defenses and remain in the reply unnoticed.
Step 3: Zero-Click Magic—Leaking via Image Requests
So far, the attack works—but still requires the user to click the link. Not ideal. So the attacker levels up: What if the browser could “click” it automatically?
Enter Markdown image syntax: 
. When Copilot includes this in a response, the browser tries to load the image—sending a request to the URL automatically.
Attackers hide the stolen sensitive data (like your API keys) in the image URL parameters. No user interaction needed. Your browser leaks the data the moment the email loads. That’s the essence of “zero-click.”
Step 4: Hiding Behind Microsoft to Bypass CSP
You might think, “Surely there’s a final defense?” Yes—it’s called Content Security Policy (CSP), which tells browsers which domains are trusted to load resources (like images).
Obviously, evil.com
wouldn’t be on that list. But researchers discovered that some Microsoft Teams and SharePoint service URLs are whitelisted—and can be abused as relays to redirect requests to the attacker’s server.
It’s like the attacker wearing a Microsoft uniform, walking right past the guards while smuggling the data out.
Not Just a Fluke: The Risk of “LLM Scope Violations”
You might ask: “Was this just an oversight in design?” Not exactly. Aim Labs says it reveals a deeper issue: something they call LLM Scope Violation.
It sounds technical, but here’s a simple metaphor:
Imagine an external, low-privilege email (like a random visitor) being able to command your company’s CEO (the LLM) to unlock the vault (your sensitive data) and hand it over.
That breaks the principle of least privilege. Low-trust inputs shouldn’t access or manipulate high-privilege data. EchoLeak proves that RAG-based systems can blur those boundaries—and the risk isn’t limited to Microsoft. It applies to any AI agent or RAG app that lacks robust security.
How Can We Protect Ourselves? Lessons from EchoLeak
This incident teaches us a lot. So what should we do to stay safe in this new threat landscape?
What kinds of data might be exposed?
Anything your M365 Copilot can access. That includes: email content, contacts, calendar events, Teams messages, SharePoint and OneDrive files, and sensitive information discussed in emails—such as PII, financial data, or API keys.How can I protect myself from such vulnerabilities?
- Stay updated: Microsoft has already issued a patch after Aim Labs reported the issue. Keeping your M365 environment up to date is your first line of defense.
- Understand your risks: Know exactly what data your AI assistant has access to. Think twice before storing ultra-sensitive information in the M365 ecosystem.
- Prioritize AI-native security: EchoLeak proves that traditional security tools might not catch AI-specific threats. Enterprises should look into modern security solutions built specifically for protecting AI applications.
Final Thoughts: A New Security Era in the Age of AI
The discovery of EchoLeak isn’t just a challenge for Microsoft—it’s a warning to the entire AI industry. It highlights a simple truth: with great AI power comes great potential for misuse.
We are entering a new era of security. The old rules may no longer be enough. But this isn’t a call to fear AI—it’s a call to build and use it responsibly.
As Aim Labs noted in their original post, they will continue exploring these emerging threats to raise the bar for AI security.
For the rest of us, staying curious and cautious may be our best defense in this era of infinite potential—and unknown risks.