The same brand impersonation playbook that targets banks and retailers has found a new category of victim: the AI platforms that hundreds of millions of people are learning to trust.
In early April 2026, researchers at Malwarebytes discovered a fake website impersonating Anthropic’s Claude AI platform. The site presented itself as an official download page for a “Pro” version of Claude, with branding that closely matched the legitimate site, and offered visitors a file called Claude-Pro-windows-x64.zip. Users who downloaded and installed the package received a working copy of the Claude application that launched normally. In the background, the installer deployed a PlugX remote access trojan that gave attackers persistent control of the compromised system.
The only visible indicator that something was wrong was a misspelled folder name, “Cluade” rather than “Claude,” buried in the program files directory where most users would never look.
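Artifacts like that misspelled folder are trivial to sweep for once they are known. The minimal Python sketch below walks common Windows program directories and flags any folder containing the “Cluade” transposition; the search roots are illustrative assumptions for this example, not paths published as indicators of compromise by Malwarebytes.

```python
import os
from pathlib import Path

# Common install locations on Windows. These roots are illustrative
# assumptions, not published indicators of compromise.
SEARCH_ROOTS = [
    Path(os.environ.get("ProgramFiles", r"C:\Program Files")),
    Path(os.environ.get("ProgramFiles(x86)", r"C:\Program Files (x86)")),
    Path(os.environ.get("LOCALAPPDATA", r"C:\Users\Default\AppData\Local")) / "Programs",
]

# The transposed folder name observed in the campaign.
MISSPELLING = "cluade"

def find_suspicious_dirs() -> list[Path]:
    hits = []
    for root in SEARCH_ROOTS:
        if not root.is_dir():
            continue  # Root absent on this machine; skip quietly.
        for child in root.iterdir():
            # Flag the known misspelling; a broader sweep might apply an
            # edit-distance check against the legitimate brand name instead.
            if child.is_dir() and MISSPELLING in child.name.lower():
                hits.append(child)
    return hits

if __name__ == "__main__":
    for path in find_suspicious_dirs():
        print(f"Suspicious folder: {path}")
```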
The campaign was not an isolated effort. Researchers documented at least two additional operations targeting Claude users through different delivery mechanisms in the same period, one via Google Ads and another through a trojanized VS Code extension. And Claude is not the only AI platform facing this problem.
How OpenAI and ChatGPT face the same impersonation threat
OpenAI’s ChatGPT has been the target of sustained impersonation campaigns across a range of vectors mirroring those used against financial institutions and retailers for years. Barracuda documented a large-scale phishing operation impersonating OpenAI with urgent emails requesting updated payment information, a technique indistinguishable from the fake billing alerts that have targeted bank customers for decades. Malwarebytes and Appknox found app stores flooded with fake ChatGPT clones, ranging from adware-laden DALL-E impersonators to full malware packages disguised as AI tools.
In January 2026, a more targeted campaign emerged. Nine days after OpenAI publicly announced plans to test advertising within ChatGPT, attackers launched a fraudulent “OpenAI Advertising GPT” via Apple’s TestFlight platform, specifically targeting digital marketers and growth leads. The timing was not accidental: the attackers watched a real corporate announcement and built a fake version of it within days, borrowing the credibility of a genuine business development that the target audience had every reason to expect. It is the same technique financial institution impersonators use when banks announce mergers, product launches, or policy changes.
Perhaps the most revealing example came from Huntress in December 2025. Researchers investigating an Atomic macOS Stealer (AMOS) infection traced the delivery mechanism to a real ChatGPT conversation hosted on OpenAI’s actual platform. Attackers had created legitimate content on the platform itself, then weaponized it through SEO manipulation to drive victims to the page. The interface looked authentic because it was authentic. The brand impersonation in this case did not involve replicating the brand’s digital presence. It involved using the brand’s own infrastructure as the delivery mechanism, a pattern consistent with the trusted platform abuse that has reshaped phishing infrastructure across every sector.
Why AI platforms became targets this fast
A year ago, most AI platforms were still novelties that users experimented with occasionally. Today, Claude alone receives roughly 290 million monthly web visits. The transition from curiosity to daily infrastructure happened faster than the equivalent shift for online banking or e-commerce, and that speed matters because brand impersonation succeeds when the attacker can borrow the credibility of an entity the victim has reason to trust. When a user receives an email offering a desktop installer for an AI tool they use every day, the social engineering does not need to create urgency or exploit fear. It needs only to look like a routine software update.
The targeting also exploits a gap in how users evaluate legitimacy. AI platforms are newer than banks or retailers, their distribution channels are less established in users’ mental models, and the distinction between a legitimate download from an official site and a third-party installer is less intuitive for a category of software that most people have used for less than two years. As the 7ai research team observed in their analysis of the Claude campaigns, the relevant question is no longer whether a domain looks legitimate but who placed the content there and why.
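For end users, one practical consequence is to verify the artifact rather than the page that served it. The sketch below assumes the vendor publishes a SHA-256 checksum for its installer on an official channel (the digest here is a placeholder, not a real Anthropic or OpenAI value) and compares a downloaded file against it before anything is executed.

```python
import hashlib
import sys
from pathlib import Path

# Placeholder digest. A real check would use the checksum the vendor
# publishes on its official site, obtained over a trusted channel.
PUBLISHED_SHA256 = "0" * 64

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in 1 MiB chunks so large installers never load fully into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    installer = Path(sys.argv[1])
    actual = sha256_of(installer)
    if actual == PUBLISHED_SHA256:
        print("Checksum matches the published value.")
    else:
        print(f"MISMATCH: computed {actual}")
        print("Do not run this installer.")
```

The obvious limitation is the one the 7ai observation points at: if the same fake page serves both the file and the checksum, the comparison proves nothing. The digest has to come from the genuine vendor’s own channel.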
The brand impersonation playbook behind the malware
The Claude and ChatGPT campaigns have been covered extensively as malware stories. But the mechanism that delivered the malware is worth examining separately, because it is not new at all. The PlugX payload deployed through the fake Claude installer has been in circulation since 2008. The AMOS stealer delivered through the weaponized ChatGPT conversation is a commodity tool available on criminal marketplaces. What made both campaigns effective was not the sophistication of the payload but the credibility of the brand being borrowed.
Check Point’s Q1 2026 data confirms that technology companies broadly now dominate global phishing volume, and AI companies are the newest entrants to that category. Banks, retailers, and government agencies have had years to develop detection and response capabilities against sustained impersonation campaigns. AI companies are encountering the same problem on a compressed timeline, and most are not yet equipped with the brand protection infrastructure that mature targets in other sectors have built. The question for any organization is whether brand protection capabilities develop on the same timeline as brand trust, or whether there is a gap between the two that attackers can exploit. The transition from novel tool to trusted infrastructure is the window in which that investment matters most.
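The entry-level version of that brand protection infrastructure is lookalike-domain monitoring. The deliberately small Python sketch below generates single-character transpositions and omissions of a brand name, the class of typo that produced “Cluade,” and reports which variants currently resolve in DNS. The brand, TLD list, and permutation rules are illustrative assumptions; commercial monitoring services go considerably further.

```python
import socket

BRAND = "claude"
TLDS = ["com", "ai", "net"]

def permutations(name: str):
    """Yield simple lookalikes: adjacent-letter swaps and single omissions."""
    for i in range(len(name) - 1):
        # Transposition, e.g. "claude" -> "cluade"
        yield name[:i] + name[i + 1] + name[i] + name[i + 2:]
    for i in range(len(name)):
        # Omission, e.g. "claude" -> "caude"
        yield name[:i] + name[i + 1:]

def resolving_lookalikes():
    hits = set()
    for variant in set(permutations(BRAND)) - {BRAND}:
        for tld in TLDS:
            domain = f"{variant}.{tld}"
            try:
                socket.getaddrinfo(domain, None)
                hits.add(domain)  # Resolves: registered by someone.
            except socket.gaierror:
                pass  # Does not resolve; nothing to review here.
    return sorted(hits)

if __name__ == "__main__":
    for domain in resolving_lookalikes():
        print(f"Resolves: {domain}")
```

A variant that resolves is not automatically malicious, but it is exactly the set a brand protection team would want to review and, where warranted, pursue takedowns against.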
The Bottom Line
AI platforms have crossed the threshold from novelty to trusted infrastructure, and the impersonation campaigns targeting them are following the same playbook that has targeted banks and retailers for years. Fake installers, fraudulent billing emails, cloned apps, and weaponized content hosted on the platforms themselves are not new kinds of attacks. They are familiar attacks aimed at a new kind of target, one that gained the trust of hundreds of millions of users faster than anyone built the brand protection infrastructure to defend it.
Key Takeaways
The impersonation wave covers multiple platforms and vectors. Malwarebytes documented a fake Claude AI installer distributing PlugX malware. Barracuda found large-scale phishing campaigns impersonating OpenAI. Huntress traced an AMOS stealer infection to a weaponized ChatGPT conversation hosted on OpenAI’s own platform. The pattern spans both major AI platforms.
AI platforms have rapidly become daily-use infrastructure for hundreds of millions of people, carrying implicit trust comparable to banks or email providers. The speed of this adoption compressed the timeline on which brand trust develops, creating a gap between how much users trust these platforms and how mature the brand protection infrastructure around them is.
Nine days after OpenAI announced plans to test advertising in ChatGPT, attackers launched a fraudulent “OpenAI Advertising GPT” app via Apple’s TestFlight, targeting digital marketers who had every reason to expect the real product. The technique mirrors how financial institution impersonators exploit bank mergers and product launches.
Attackers created a real conversation on OpenAI’s platform, then used SEO manipulation to drive victims to it. The interface looked authentic because it was hosted on the genuine platform, making it a form of trusted infrastructure abuse rather than traditional site cloning.
Any category of company gaining trust at scale is simultaneously building the asset attackers will borrow. The AI platform campaigns follow the same mechanics as bank and retail impersonation but target a category where brand protection capabilities have not yet caught up to the speed of brand trust development.



