Fresh CoPhish Attack Targets Copilot Studio to Steal OAuth Tokens

Try Our Free Tools!
Master the web with Free Tools that work as hard as you do. From Text Analysis to Website Management, we empower your digital journey with expert guidance and free, powerful tools.

CoPhish Attack Exploits Microsoft Copilot Studio

A novel and sophisticated phishing stratagem, known as CoPhish, is leveraging Microsoft Copilot Studio to dupe users into unwittingly granting attackers illegitimate access to their Microsoft Entra ID accounts.

Discovered by Datadog Security Labs, this technique employs custom AI agents positioned on legitimate Microsoft domains, effectively tincturing traditional OAuth consent attacks to seem benign and thus circumventing user skepticism.

The attack, expounded upon in a recent exposé, unveils persistent vulnerabilities within cloud-centric AI tools, notwithstanding Microsoft’s ongoing endeavors to fortify consent protocols.

Exploiting the malleability of Copilot Studio, assailants can fabricate ostensibly innocuous chatbots that solicit login credentials, ultimately pilfering OAuth tokens. These tokens empower malicious undertakings, including unauthorized access to emails and calendar data.

This unsettling development arises against the backdrop of rapid advancements in AI services, where user-customizable features designed for enhanced productivity may inadvertently facilitate phishing attempts. The mounting adoption of tools like Copilot necessitates heightened vigilance for oversight of low-code platforms.

OAuth consent attacks, classified under MITRE ATT&CK technique T1528, entail ensnaring users into sanctioning misleading app registrations that request expansive permissions to sensitive data.

Within Entra ID environments, attackers fabricate app registrations aimed at accessing Microsoft Graph resources, such as email and OneNote, subsequently directing victims to consent through deceptive phishing links.

Once approved, the resulting token bestows impersonation rights upon the attacker, thereby paving the way for data exfiltration or additional breaches.

Despite Microsoft’s efforts to shore up defenses—including the imposition of restrictions on unverified applications in 2020 and an impending update in July 2025 designating “microsoft-user-default-recommended” as the standard policy, which blocks consent for high-risk permissions absent admin approval—significant vulnerabilities persist.

Unprivileged users retain the ability to consent to permissions for internal apps like Mail.ReadWrite and Calendars.ReadWrite, while administrators wielding roles such as Application Administrator can grant permissions for any application.

An anticipated policy adjustment scheduled for late October 2025 is expected to narrow these exploits, but will not entirely safeguard privileged users.

Mechanics of CoPhish: Malicious Copilot Creation

In the intricate CoPhish technique, attackers construct a malevolent Copilot Studio agent, a customizable chatbot utilizing either a trial license in their own tenant or a compromised one, as reported by Datadog.

This agent’s “Login” topic—a standardized workflow for authentication—is clandestinely backdoored with an HTTP request that exfiltrates the user’s OAuth token to an attacker-controlled server post-consent.

In addition, the demo website feature disseminates the agent via a URL resembling copilotstudio.microsoft.com, effectively mimicking legitimate Copilot services and eluding cursory domain scrutiny.

The attack commences when a victim clicks a shared link, encounters a familiar interface accompanied by a “Login” button, and is subsequently redirected to the malevolent OAuth flow.

For internal targets, the application solicits permissible scopes, such as Notes.ReadWrite, while for admin-level users, it may request an extensive array of permissions, including restricted ones.

Following consent, a validation code from token.botframework.com finalizes the process. However, the token is discreetly transmitted, often traversing Microsoft’s IP addresses, effectively obscuring it from user traffic logs.

Subsequently, attackers can exploit the token for nefarious purposes, such as dispatching phishing emails or pilfering sensitive data, all while eluding detection by the victim. A diagram illustrates this intricate flow, showcasing the agent issuing tokens post-consent for the purpose of exfiltration.

Recommended Mitigative Measures

Unaddressed Recruitment Challenges in Military Cybersecurity: A Political Oversight

To mitigate the risk posed by CoPhish, cybersecurity experts advocate for the implementation of custom consent policies that surpass Microsoft’s default settings, the disabling of user app creation, and the vigilant monitoring of Entra ID audit logs for any suspicious consents or modifications to Copilot.

This burgeoning incident serves as a grave reminder for the nascent landscape of AI platforms: the very traits that enhance customization also exacerbate risks when entwined with identity management systems.

As cloud services burgeon, organizations must prioritize robust and meticulous policies to fortify against such hybrid threats.

Source link: Cybersecuritynews.com.

Disclosure: This article is for general information only and is based on publicly available sources. We aim for accuracy but can't guarantee it. The views expressed are the author's and may not reflect those of the publication. Some content was created with help from AI and reviewed by a human for clarity and accuracy. We value transparency and encourage readers to verify important details. This article may include affiliate links. If you buy something through them, we may earn a small commission — at no extra cost to you. All information is carefully selected and reviewed to ensure it's helpful and trustworthy.

Reported By

RS Web Solutions

We provide the best tutorials, reviews, and recommendations on all technology and open-source web-related topics. Surf our site to extend your knowledge base on the latest web trends.
Share the Love
Related News Worth Reading