The Preventing Deep Fake Scams Act would create the first federal task force dedicated to AI fraud in financial services. The threat it addresses is already producing nine-figure losses per quarter.
In November 2024, the Financial Crimes Enforcement Network issued an alert about deepfake fraud that was striking for its specificity. Rather than warning about AI in the abstract, FinCEN described schemes already underway: criminals using generative AI to fabricate identity documents, bypass verification controls, and open fraudulent accounts. The agency went a step further and created a dedicated SAR filing keyword, FIN-2024-DEEPFAKEFRAUD, signaling that it considered deepfake fraud a distinct category worth tracking on its own.
This was a notable departure from standard protocol: a regulator responding to something it was already seeing, not preparing for something it expected to see eventually. It also highlighted a disconnect in how the federal government is handling AI-driven fraud. Individual regulators like FinCEN and the New York Department of Financial Services (NYDFS) have begun issuing operational guidance on their own, driven by what supervised institutions are reporting in real time. But there is no coordinated federal framework connecting those efforts, no shared definitions, and no unified strategy.
That is the gap the Preventing Deep Fake Scams Act is designed to address. First introduced in 2023 as H.R. 5808 and reintroduced in February 2025 as both a House and Senate bill, the legislation would establish a task force drawing representatives from seven federal agencies, including Treasury, the Federal Reserve, and FinCEN itself. The task force would have one year to produce a report on how AI both benefits and endangers financial institutions. It represents the first coordinated federal effort to address deepfake fraud across the financial system.
What the Preventing Deep Fake Scams Act does
Financial regulation in the United States is divided across agencies with overlapping but distinct mandates. That fragmentation matters here. A bank dealing with deepfake identity documents during account opening is examined through a different regulatory lens than a credit union whose members are being targeted by voice cloning scams, even though both problems stem from the same underlying technology. Without a common framework, each agency interprets the threat through its own supervisory perspective, and the institutions they regulate get fragmented guidance as a result.
The Preventing Deep Fake Scams Act targets that structural problem directly. The task force would produce standardized definitions for terms like “generative AI” and “deep fakes” as they apply to financial services, catalog how institutions are currently protecting themselves, identify risks to consumer data and identity, and recommend both regulatory and legislative responses. The bill also opens a path toward nonbinding guidance on deepfake-aware authentication and inter-agency data sharing, recommendations that could eventually inform updates to examination manuals and supervisory expectations.
How fast deepfake fraud is growing
The task force has a one-year study horizon. The threat is not waiting.
To understand why regulators are moving faster than legislation, it helps to look at the trajectory. Deepfake fraud isn’t growing steadily. It’s inflecting. Financial losses from deepfake-enabled fraud exceeded $200 million in the first quarter of 2025 alone, a figure that surpassed the combined losses from every year between 2019 and 2023. On that trajectory, the Deloitte Center for Financial Services projects that generative AI-driven fraud losses will reach $40 billion by 2027.
What’s driving the inflection is a collapse in technical barriers. The tools that produce convincing fakes used to require expertise and resources. They no longer do. Voice cloning now requires as little as three seconds of audio to produce an 85% voice match. That means the pool of potential attackers is expanding while the per-attack cost approaches zero.
The Arup case from early 2024 illustrates where this leads. A finance worker was deceived into transferring $25 million through a deepfake video conference impersonating company executives. The case made headlines because of the dollar amount. What received less attention is that similar attacks, smaller in scale but identical in method, had already become routine enough for FinCEN to build a reporting category around them.
What FinCEN's deepfake alert revealed
The FinCEN alert is worth examining because it shows what regulatory response looks like when it emerges from direct contact with the problem. The alert didn’t originate from a task force or a legislative mandate. It came from pattern recognition: FinCEN analysts noticed that suspicious activity reports from financial institutions were increasingly describing the same thing. Criminals were using AI-generated documents and media to circumvent identity verification controls.
The red flags FinCEN identified were granular and operational: photos inconsistent with a customer’s stated age, geographic or device data that didn’t match identity documents, newly opened accounts with sudden patterns of rapid transactions. This level of specificity reflects guidance shaped by institutions encountering the problem daily. New York’s Department of Financial Services followed a similar path in October 2024, issuing its own industry letter on GenAI cybersecurity risks and detailing what it expected from supervised institutions under existing regulations.
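To make indicators like these concrete, here is a minimal sketch of how a fraud team might encode them as rule checks during account review. The record fields and thresholds are illustrative assumptions, not part of FinCEN's alert, and a production program would tune and combine signals like these against its own SAR and loss data rather than rely on hard cutoffs.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical account-review record. Field names are illustrative,
# not a FinCEN-specified schema.
@dataclass
class AccountReview:
    stated_birth_date: date       # date of birth the applicant claims
    estimated_photo_age: int      # age estimate derived from the submitted ID photo
    document_country: str         # country of issue on the identity document
    ip_country: str               # geolocation of the device used to apply
    account_age_days: int         # how long the account has existed
    transactions_last_24h: int    # transaction count in the past day

def deepfake_red_flags(review: AccountReview) -> list[str]:
    """Return the FinCEN-style red flags triggered by a single review.

    Thresholds are placeholders for illustration; a real program would
    tune them against its own SAR and fraud-loss data.
    """
    flags = []

    # Photo inconsistent with the customer's stated age
    stated_age = (date.today() - review.stated_birth_date).days // 365
    if abs(stated_age - review.estimated_photo_age) > 15:
        flags.append("photo_age_mismatch")

    # Geographic or device data that doesn't match the identity document
    if review.document_country != review.ip_country:
        flags.append("geo_device_mismatch")

    # Newly opened account with a sudden pattern of rapid transactions
    if review.account_age_days < 30 and review.transactions_last_24h > 20:
        flags.append("new_account_rapid_activity")

    return flags
```

Rules like these only catch attempts at the point of review, which is exactly the limitation discussed in the next section.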
What these agencies share is direct supervisory contact with regulated financial institutions. That creates a feedback loop: institutions report suspicious activity, regulators interpret it, and guidance follows. The existing legal frameworks gave these agencies enough authority to act without waiting for new deepfake legislation. What those frameworks don’t provide is coordination across agencies, which is exactly what the Preventing Deep Fake Scams Act is designed to solve.
What banks need before deepfake legislation arrives
For CISOs and fraud leaders at banks and credit unions, the practical reality is that the task force’s report is at least a year away. Any regulatory changes it recommends will take longer still to implement. Deepfake fraud, meanwhile, is an operational problem today.
FinCEN’s alert provides a useful starting framework, but one oriented toward recognizing deepfake fraud after an attempt has been made. The red flags it describes help institutions catch suspicious activity during account review or transaction monitoring. What they don’t address is the upstream infrastructure that makes those attempts possible: the executive impersonation campaigns assembled on social platforms, the brand assets cloned into convincing phishing sites, the synthetic identities pressure-tested in lower-value transactions before being deployed against larger targets.
The institutions adapting most effectively have recognized that deepfake fraud is less a novel threat category than an acceleration of impersonation patterns that already existed. The underlying vectors are familiar to any security team that has dealt with brand abuse: cloned websites, spoofed communications, fraudulent identity documents, social engineering that exploits trust in recognized brands. What has changed is the speed, fidelity, and cost at which attackers can produce convincing imitations.
Defending against that shift requires treating the window between when an impersonation campaign is assembled and when it reaches its target as the critical interval. That is what disinformation security frameworks have been advocating, and it is what financial institutions need to operationalize now, before the legislation catches up.
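As a minimal illustration of that upstream interval, the sketch below flags newly observed domain registrations that closely resemble a protected brand name, the kind of cloned-asset signal that surfaces before a phishing site ever reaches a customer. The brand list, similarity threshold, and domain feed are assumptions for illustration; dedicated disinformation-security tooling layers in certificate transparency monitoring, visual comparison of cloned pages, and takedown workflows.

```python
from difflib import SequenceMatcher

# Hypothetical brand labels to protect; in practice this list would come
# from the institution's own trademarks and product names.
PROTECTED_BRANDS = ["examplebank", "examplebank-secure"]

def similarity(a: str, b: str) -> float:
    """Normalized string similarity (1.0 means identical)."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalike_domains(new_domains: list[str], threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Flag newly observed domains that closely resemble a protected brand.

    new_domains stands in for a feed of fresh registrations or certificate
    transparency entries; the threshold is a placeholder for illustration.
    """
    hits = []
    for domain in new_domains:
        label = domain.split(".")[0].lower()  # compare the registrable label only
        for brand in PROTECTED_BRANDS:
            score = similarity(label, brand)
            # Skip exact matches (the institution's own domains); flag near misses.
            if label != brand and score >= threshold:
                hits.append((domain, brand, round(score, 2)))
    return hits

# A cloned-brand registration surfaces before any customer ever sees the phishing page.
print(flag_lookalike_domains(["examp1ebank.com", "weather-news.net"]))
```

The value of a check like this is not the string comparison itself but where it sits in the timeline: the flag fires while the impersonation infrastructure is being assembled, not after a customer has already been targeted.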
The Bottom Line
The Preventing Deep Fake Scams Act addresses a genuine structural problem in how the United States regulates AI-driven fraud. The fragmentation it targets is real, and the task force model is a reasonable mechanism for building the shared understanding that financial regulators currently lack.
The context surrounding the bill is equally real. Deepfake fraud is producing hundreds of millions of dollars in losses per quarter, and the regulators closest to the problem have already begun issuing operational guidance because the threat data compelled them to act. For financial institutions, the task force’s eventual findings will be valuable. What those institutions need in the interim is the ability to defend against a threat that is already operational, at a pace the legislation was not designed to match.
Key Takeaways
The Preventing Deep Fake Scams Act (originally H.R. 5808, reintroduced as H.R. 1734 and S. 2117) establishes a Task Force on Artificial Intelligence in the Financial Services Sector. The task force brings together seven federal agencies (Treasury, the Federal Reserve, OCC, FDIC, CFPB, NCUA, and FinCEN) to study how AI benefits and endangers banks, credit unions, and their customers, with a one-year reporting deadline.
Financial regulation in the U.S. is fragmented across multiple agencies, and there is no unified framework for addressing AI-generated fraud. A coordinated task force can align regulatory perspectives, harmonize guidance, and create a shared fact base. However, the study timeline means actionable requirements are still years away from implementation.
Deepfake-enabled fraud losses exceeded $200 million in the first quarter of 2025, surpassing the combined total from 2019 through 2023. Deloitte projects that generative AI-driven fraud losses will reach $40 billion by 2027. Deepfake fraud attempts in North America surged 1,740% between 2022 and 2023, and voice cloning now requires as little as three seconds of audio.
In November 2024, FinCEN issued a formal alert (FIN-2024-Alert004) identifying deepfake fraud schemes targeting financial institutions. The alert described specific typologies, provided red flag indicators, and created a dedicated SAR filing keyword. It was based on an observed increase in suspicious activity reports describing the use of deepfake media to circumvent identity verification.
Financial institutions should treat deepfake fraud as an acceleration of existing impersonation threats rather than a novel category. Effective defense requires monitoring external channels where impersonation campaigns originate, detecting cloned brand assets and synthetic identities upstream of customer contact, and measuring success in exposure time rather than incident count.