What the OpenAI Threat Report reveals about how generative AI has transformed online scams
Somewhere in Southeast Asia, a scam operator is running quality assurance on phishing emails. Not writing them. Testing them. The AI already drafted the copy in four languages, matched the tone to regional dialects, and generated variations for A/B testing. The operator’s job is to review the output, remove any telltale punctuation that might flag the text as machine-generated, and push the campaign live.
This is what fraud looks like in 2025: not lone hackers hunched over keyboards, but production lines staffed by people whose primary skill is supervising automation. When OpenAI released its October 2025 threat intelligence report, the headlines focused on what the company didn’t find: no evidence that AI was enabling fundamentally new attack types. But that framing obscures a more unsettling reality. The technology has industrialized fraud at a scale that changes the economics of the entire threat landscape.
The fraud multiplier nobody asked for
OpenAI’s October 2025 report catalogued the disruption of more than 40 threat networks since 2024, and the pattern across them was consistent: none were inventing novel exploits. All were optimizing existing ones.
These operations used AI for rapid translation and localization of phishing content across Korean, Chinese, Japanese, and English, tailoring scams to regional expectations and cultural cues. The same tools automated post-compromise operations, generating code snippets for credential parsers and in-memory loaders. They collapsed the distance between concept and campaign, producing professional correspondence and entire websites with the kind of stylistic precision that used to require specialists. The result isn’t a new category of threat so much as an old category running at industrial velocity.
Deloitte’s Center for Financial Services projects that fraud losses facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027, while the FBI’s Internet Crime Complaint Center recorded $16.6 billion in cybercrime losses in 2024 alone, a 33 percent year-over-year increase. These numbers don’t reflect a shift in tactics so much as a shift in throughput: the same social engineering techniques that have worked for decades now execute faster, cleaner, and at volumes that overwhelm traditional detection.
Why content-based detection is losing
One detail from the October report should give pause to anyone building detection systems: threat actors are manually scrubbing em-dashes from AI-generated text because they’ve learned that security researchers use punctuation patterns as machine-authorship signals. The criminals are reading the same detection blogs that defenders read, studying the signatures, and adapting their output accordingly.
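The fragility of such signals is easy to demonstrate. Here is a minimal sketch, in Python, of the kind of punctuation-frequency heuristic the report describes being defeated. The character list and threshold are illustrative assumptions, not any published detector, and the countermeasure is a single substitution pass:

```python
import re

# Naive machine-authorship heuristic: count punctuation marks that
# appear disproportionately in model output (em-dashes, curly quotes).
# Character list and threshold are illustrative, not from any real tool.
SYNTHETIC_MARKS = re.compile(r"[\u2014\u201c\u201d]")
REPLACEMENTS = {"\u2014": "-", "\u201c": '"', "\u201d": '"'}

def looks_machine_written(text: str, per_100_words: float = 1.5) -> bool:
    words = max(len(text.split()), 1)
    hits = len(SYNTHETIC_MARKS.findall(text))
    return (hits / words) * 100 >= per_100_words

def scrub(text: str) -> str:
    # The adversary's countermeasure: one substitution pass.
    return SYNTHETIC_MARKS.sub(lambda m: REPLACEMENTS[m.group()], text)

sample = "Act now\u2014the \u201cacquisition\u201d closes tonight."
print(looks_machine_written(sample))         # True
print(looks_machine_written(scrub(sample)))  # False
```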
This cat-and-mouse dynamic has pushed adversaries into what OpenAI researchers call “gray zone” activity. The tasks themselves look benign (translation, text formatting, web copy generation) and none trigger model refusals or raise obvious flags. But when those outputs feed into a credential harvesting pipeline, the distinction between legitimate use and fraud becomes nearly invisible at the content layer.
For defenders, the implication is significant. Content-based detection is losing ground. The signals that matter now are behavioral: coordinated posting patterns, infrastructure reuse, timing anomalies that suggest automation rather than human operators. The report documents adversaries chaining multiple AI services together, using one model for text, another for images, and orchestrating the outputs through automation loops that humans merely supervise. This represents a shift from AI-assisted fraud to AI-orchestrated fraud, where human operators serve primarily as quality control rather than creative drivers.
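To make "timing anomalies" concrete, here is a hypothetical sketch (it assumes nothing about OpenAI's internal tooling) that scores posting-interval regularity, since scripted loops produce near-constant gaps that human activity rarely does:

```python
from statistics import mean, stdev

def automation_score(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-event gaps: human posting is
    bursty (high CV), scripted loops are metronomic (CV near zero)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too few events to judge
    return stdev(gaps) / mean(gaps)

# A bot posting every ~300 seconds with tiny jitter scores far lower
# than organic human activity spread across bursts and long silences.
scripted = [i * 300.0 + (i % 3) for i in range(20)]
human = [0, 40, 55, 600, 610, 3600, 3620, 3700, 9000, 9100]
print(automation_score(scripted) < 0.05)  # True: flags as automated
print(automation_score(human) > 0.5)      # True: looks organic
```

A production pipeline would fuse this with infrastructure-reuse and coordination features, but even this single signal survives content scrubbing entirely: it never looks at the text.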
When trusted platforms become attack infrastructure
The migration patterns in these operations follow a predictable logic: attackers are abandoning obviously malicious infrastructure for platforms that carry inherited trust. Cloud storage services like Google Drive and pCloud now host payload staging. GitHub repositories serve malicious scripts through raw content URLs, exploiting the platform’s reputation and flexible API access. The report documents adversaries building reverse proxy tunnels with secure WebSocket communication, blending command-and-control traffic with enterprise-grade TLS frontends that look indistinguishable from legitimate business activity.
This is the Living Off Trusted Sites model in practice. Phishing pages, rogue applications, and fake executive personas now nest behind complex, multi-layered infrastructure specifically designed to defeat surface-level scanning.
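What secondary screening for this model might look like, as a sketch only: a proxy-log filter that flags raw-content fetches from trusted hosts by non-browser clients. The host list and path heuristics here are assumptions for illustration, not rules drawn from the report:

```python
from urllib.parse import urlparse

# Trusted hosts documented as payload staging grounds. The matching
# rules are illustrative assumptions: raw-content endpoints fetched
# by scripted clients rather than real browsers.
STAGING_PATTERNS = {
    "raw.githubusercontent.com": lambda p: True,
    "drive.google.com": lambda p: "/uc" in p,
}

def flag_staging_fetch(url: str, user_agent: str) -> bool:
    parsed = urlparse(url)
    check = STAGING_PATTERNS.get(parsed.netloc)
    if check is None:
        return False
    # Payload pulls rarely come from real browsers; a curl or
    # PowerShell agent fetching raw trusted-site content merits review.
    scripted_client = not user_agent.startswith("Mozilla/")
    return check(parsed.path) and scripted_client

print(flag_staging_fetch(
    "https://raw.githubusercontent.com/acct/repo/main/loader.ps1",
    "WindowsPowerShell/5.1"))  # True: scripted pull of raw content
```

The point is not the specific hosts but the posture: trust the platform, still inspect the fetch pattern.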
Deepfake fraud: the call that almost worked
OpenAI’s threat reports document a growing ecosystem of AI-generated executive personas: fake biographies, synthetic endorsements, and long-form posts that mimic professional tone and brand voice. Across financial services, energy, and technology sectors, cloned LinkedIn and X profiles have delivered fraudulent investment pitches, fake procurement requests, and video introductions that exploit the implicit trust of seeing a familiar face.
In July 2024, an executive at Ferrari experienced this firsthand. A series of WhatsApp messages arrived from what appeared to be CEO Benedetto Vigna, discussing an imminent acquisition and urging immediate action on a non-disclosure agreement. The profile picture showed Vigna in his signature pose, arms folded in front of the prancing-horse logo. When a follow-up call came through, the voice on the other end replicated Vigna’s distinctive southern Italian accent with unsettling accuracy. But the executive grew suspicious and asked a simple verification question: what book had Vigna recently recommended? The line went dead. The deepfake couldn’t answer.
Not every company has been as fortunate. In March 2025, a finance director at a Singapore multinational authorized a $499,000 transfer during what appeared to be a Zoom call with the company’s CFO and senior leadership. Every face on the screen was synthetic. Executive impersonation has evolved from a tactical technique into a strategic capability, one that intersects directly with what Gartner has identified as disinformation security: the emerging discipline of defending organizational truth against synthetic manipulation.
The advantage defenders can exploit
Buried in the data is a counterweight worth noting: OpenAI found that its models are used to identify scams three times more often than to commit them. In documented cases, model responses helped users recognize fraudulent activity and suggested appropriate countermeasures before money moved.
This advantage suggests something important about where the technology lands when properly aligned. The same capabilities that enable attackers to generate convincing fraud at scale can enable defenders to detect it at scale: automated pattern recognition, real-time infrastructure correlation, and anomaly detection that would take human analysts weeks to replicate.
The challenge is that detection without response is observation without consequence. According to Sift’s Q2 2025 Digital Trust Index, 82 percent of phishing emails now involve AI assistance, and GenAI-enabled scams rose 456 percent between May 2024 and April 2025. When attackers launch campaigns in minutes, takedown operations measured in days leave the majority of victims exposed during the window when fraud is most active. Speed becomes the variable that determines whether detection translates into protection.
The case for sharing threat intelligence
OpenAI credits its disruption success to something that sounds almost quaint in a landscape defined by adversarial sophistication: sharing information with peers like Anthropic and Google, along with industry partners who can act on threat intelligence before infrastructure rebounds.
The same principle applies beyond AI labs. Information Sharing and Analysis Centers like FS-ISAC and RH-ISAC exist because collective visibility compounds faster than any single organization’s detection capability. When attackers automate campaigns across thousands of targets simultaneously, defenders benefit from automating intelligence distribution just as quickly.
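Automating that distribution usually means standardized indicator formats. Below is a minimal sketch of the STIX 2.1 indicator shape that ISAC feeds commonly exchange; the URL, naming, and omitted context fields are placeholders, and a production feed would publish via TAXII with confidence scores and relationship objects:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(url: str) -> dict:
    # Minimal STIX 2.1-style indicator for a phishing URL; field set
    # follows the spec's required properties, values are placeholders.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": "AI-generated phishing landing page",
        "pattern": f"[url:value = '{url}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

# Serialized and pushed to a shared collection, one flagged campaign
# becomes blockable intelligence for every peer within minutes.
print(json.dumps(make_indicator("https://example.com/invoice-login"), indent=2))
```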
The October 2025 report didn’t reveal that AI had transformed fraud into something unrecognizable. It revealed that fraud had absorbed AI into its existing operational playbook and emerged faster, cleaner, and harder to catch. The automation itself is neutral, but the intent documented across 40-plus disrupted networks is not: it’s to extract value from trust at industrial scale.
For every operator scrubbing em-dashes from AI-generated phishing templates, there’s an opportunity for defenders to build systems that don’t rely on punctuation patterns to identify threats. That’s the engineering challenge now: preserving authenticity at the speed the adversaries have set.
Key Takeaways
- OpenAI’s October 2025 report documents 40+ disrupted networks using AI to scale existing fraud playbooks rather than develop novel attack capabilities
- Threat actors actively adapt to detection research, including removing AI-generated punctuation patterns to evade content-based analysis
- “Gray zone” activity using AI for benign tasks within malicious workflows makes content-level detection increasingly ineffective
- Deloitte projects AI-facilitated fraud losses will reach $40 billion by 2027, up from $12.3 billion in 2023
- Defensive advantage exists: OpenAI’s models identify scams three times more often than they’re used to commit them