Gen AI opens a new frontier in cybercrime. Learn how cybercriminals leverage generative AI, machine learning, and large language models (LLMs) to maximize the impact of their fraud activities and how your organization can combat this threat.
What is Generative AI Fraud?
Generative AI fraud is the use of large language models (LLMs), such as ChatGPT, Claude, Microsoft Copilot, and Google Gemini, to create the materials for a fraud scheme. From deepfakes to phishing messages, generative AI supercharges the capabilities of cybercriminals. In an instant, they can generate sophisticated, realistic fraud content and automate fraud campaigns at scale. Old tactics become easier to execute, and the technology offers threat actors new ways to reach their targets.
“Cybercrime is on the rise, and cybercriminals are increasingly turning to Gen AI to facilitate their crimes.”
Michael S. Barr
Vice Chair for Supervision of the Federal Reserve
Fraudsters can now easily:
- Impersonate executives
- Forge synthetic identities
- Write phishing content
- Fabricate ID documents
Types of Generative AI Fraud
Fraud automation at scale
Generative AI enables fraudsters to automate steps in the creation and execution of fraud schemes. For example, AI can generate scripts and code to breach accounts and steal personally identifiable information. Credential stuffing and brute force attacks are simpler to deploy and more successful.
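The automation described above cuts both ways: defenders can script detection too. Credential-stuffing attempts often stand out because a single source IP cycles through many distinct usernames in failed logins, unlike a legitimate user mistyping their own password. A minimal detection sketch, where the threshold and the `(ip, username)` log format are illustrative assumptions:

```python
from collections import defaultdict

def credential_stuffing_suspects(failed_logins, threshold=20):
    """Flag source IPs that attempted many *distinct* usernames.

    failed_logins: iterable of (source_ip, username) pairs from
    failed-authentication logs (format is an assumption for this sketch).
    Returns the set of IPs whose distinct-username count meets the threshold.
    """
    users_per_ip = defaultdict(set)
    for ip, username in failed_logins:
        users_per_ip[ip].add(username)
    return {ip for ip, users in users_per_ip.items() if len(users) >= threshold}
```

In production this logic would typically live in a SIEM rule with sliding time windows, but the core signature (one IP, many usernames) is the same.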
Text content generation
In seconds, generative AI can create typo-free text content. Cybercriminals use it to write phishing emails, spoof websites, and create other fraudulent materials. AI can even mimic the creative style of an organization, enabling online brand impersonation attacks.
Image and video manipulation
With access to vast image databases and deep learning models, fraudsters can fabricate video scenes with near-perfect realism. Criminals use this capability to manipulate existing images, replacing original content, or to create entirely new video and image content for use in social engineering schemes.
Human voice generation
Voice phishing (vishing) becomes far more effective with generative AI. Cybercriminals can impersonate legitimate users to gain unauthorized access to their accounts, or create highly realistic voice messages that push targets toward unsafe actions, such as downloading malware. The manipulated audio can sound nearly perfect, even replicating the voices of people the target knows personally.
Fake ID documents
The image generation abilities of generative AI enable fraudsters to create convincing fake identity documents. These fake IDs are used to open new accounts, apply for loans, and breach existing accounts. A sophisticated ID verification tool is important for identifying and thwarting these attacks.
Deepfake selfies
ID verification tools and facial recognition systems can be deceived by generative AI. Cybercriminals are able to manipulate an existing image of a face to access an account or generate a synthetic identity for new account fraud.
Impact of Gen AI Fraud
Generative AI compounds the consequences of fraud by increasing both the volume and the deceptiveness of attacks. Deloitte’s Center for Financial Services predicts that gen AI could drive fraud losses in the United States from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%.
Fraud already imposes steep costs on financial institutions and other organizations. Banks can lose up to 7.5% of their annual revenue to direct and indirect fraud costs. For credit unions, that number rises as high as 11%.
Don’t underestimate the financial burden of online fraud schemes. Their costs will only rise as AI continues to develop and proliferate.
Fraud losses, actual and expected, 2017 to 2027 ($US billion)
How to Respond: Stop Phishing Before It Leads to ATO
Preventing account takeover starts with disrupting the phishing attacks that set it in motion. While tools like multi-factor authentication help secure access, stopping impersonation campaigns before they reach customers is the most effective first line of defense.
Extend Your Team with an Online Brand Protection Partner
Allure Security helps you stay ahead of impersonation threats by detecting and removing malicious content before it reaches your customers or members. Our AI-powered platform uses computer vision to scan the open web and the dark web for malicious websites, mobile apps, and social media accounts that other solutions often miss. Once threats are identified, our expert takedown team acts quickly to remove them.
Online brand protection services are a vital resource in the fight against phishing and account takeover. Many teams, whether focused on security, fraud prevention, or brand integrity, face more threats than they have time or resources to manage. By partnering with a specialized service provider, you can extend your capabilities and respond faster without the overhead of building and managing a specialized internal team.
Adapt Cybersecurity Training Programs
Cybersecurity and fraud teams must adapt education programs to teach customers, members, and internal staff to exercise greater scrutiny. Since AI-written phishing messages no longer contain the glaring typos that once raised readers’ suspicions, training should emphasize other strategies for identifying a phishing attack, including:
- Carefully review the sender’s email address to confirm it matches the purported sender’s domain.
- Treat any message demanding an immediate reaction to a crisis with heightened suspicion.
- On desktop, hover the cursor over links before clicking to preview the true destination (this is not possible on most mobile devices).
- Confirm the request’s legitimacy through a different channel (e.g., a phone call to a known number).
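One of these checks, comparing where a link claims to go against where it actually goes, can also be automated in mail-filtering tooling. A minimal sketch, where the trusted-domain allowlist is a hypothetical example:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains your organization legitimately uses.
TRUSTED_DOMAINS = {"example-bank.com"}

def link_looks_suspicious(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a trusted domain
    but whose actual destination points somewhere else."""
    dest = (urlparse(href).hostname or "").lower()
    if dest.startswith("www."):
        dest = dest[4:]
    for trusted in TRUSTED_DOMAINS:
        if trusted in display_text.lower():
            # The destination should be the trusted domain or a subdomain of it.
            if dest != trusted and not dest.endswith("." + trusted):
                return True
    return False
```

This is the programmatic equivalent of hovering over a link: the visible text says one thing, the `href` says another.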
How to Spot AI Fraud
AI Generated Text
- Unusual email requests: Though AI-generated phishing emails will be better written than in years past, other context clues can tip off the skeptical reader. Look for a sense of urgency, unknown sending domain, or suspicious link destination to identify phishing emails.
- Overly polished or formal language: AI writing often contains detectable traits. The prose may be formal and clear even where that tone is inappropriate for the context. AI also tends to overuse em dashes (—) and verbose transitional phrases such as “in conclusion” or “it is important to note that”.
- Paragraph balance: Humans organize paragraphs based on the needs of the message, but AI tends to use a uniform paragraph length. Look at the structure of each paragraph. If it is too closely balanced, this could be an indicator of AI-generated text.
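The paragraph-balance heuristic above can be sketched as a quick script. This is an illustrative heuristic only, not a reliable AI detector; the interpretation of “too uniform” would need tuning against real messages:

```python
import statistics

def paragraph_uniformity(text: str) -> float:
    """Coefficient of variation of paragraph word counts.

    Values near 0 mean the paragraphs are suspiciously uniform in
    length, one possible indicator of AI-generated text.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    if len(lengths) < 2:
        return float("inf")  # not enough paragraphs to judge
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else float("inf")
```

Human writing usually scores well above zero because paragraph length follows the needs of the message rather than a fixed rhythm.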
Deep Fake Video and Audio
- Eye movement and blinking: Close inspection of video content will reveal oddities in the eyes of the speaker. They may blink too frequently or infrequently. One eye may blink out of sync or fail to close fully. The speaker’s gaze may seem unfocused or hollow, producing a vacant expression.
- Skin and face: You may notice blurring around the jawline, hairline, or cheeks, especially during head movements. There could also be a shimmering or warping of the face. Another sign of a deepfake is if the speaker’s neck is a different color than the face.
- Mouth and lip syncing: There may be a mismatch between the lip movements of the speaker and the words spoken. This is especially common with longer, more complex words or during rapid speech.
- Filler words and cadence: AI-generated video may overuse “um’s” and “ah’s” in an awkward way. Close your eyes and listen for an inorganic rhythm to the speech.
- Robotic tone: The musicality of authentic speech is often diminished in AI-generated content. The voice may not sound totally robotic, but there is a noticeable distinction from natural human speech.
- Motion inconsistencies: AI-generated videos often struggle with natural depth of field. When the subject moves slightly, the background will remain static. Alternatively, objects may morph, blur, or disappear momentarily during the video.
- Lighting and reflection errors: Look for a mismatch in lighting on the speaker and the background. The lighting may have multiple sources, lay incorrectly, or shift throughout the video.
How to Combat Gen AI Fraud
Employee awareness training
AI fraud can target anyone in your organization, from the CEO to the interns. The first step in combating generative AI fraud is to raise awareness among the internal team. Educate employees on the techniques fraudsters use, and run periodic tests to ensure they remain vigilant. IT leaders should be encouraged to spearhead these initiatives.
Network segmentation / Zero Trust
As with traditional fraud, mitigation should be a priority once a breach occurs. Implementing network microsegmentation or adopting Zero Trust principles can stop the lateral movement of attackers after they gain entry to the network.
Multifactor, biometric, and behavioral authentication
Enhance your authentication challenges with the latest technologies. Multifactor authentication with biometric or behavioral components is among the strongest protections against phishing and account takeover available today. Even if phishing attacks collect login information, biometric and behavioral requirements still pose a significant (but not insurmountable) challenge for AI fraud.
Strong identity verification tools
When opening new bank accounts or enrolling in online services, it is a good practice, where appropriate, to require users to present a government-issued ID. Strong identity verification tools can analyze ID images, identifying synthetic IDs and stopping the user from opening the account.
Utilize AI for online brand protection
Fight fire with fire. Online brand protection solutions from Allure Security are powered by AI and computer vision, enabling us to scan the internet in the same way a skilled analyst would. This helps identify the malicious websites, spoofed social media accounts, and rogue apps that other solutions miss. Find and remove the impersonating content before attacks can target your customers.
Related Articles
- Diamond Bank Addresses Spoof Websites
- Credit Union Supercharges Takedown Campaigns
- Fraudsters Steer Clear of ORNL Federal Credit Union
- SharkBot Trojan Embedded in Mobile Banking Application
- How to Remove Spoof Mobile Applications
- Zelle Fraud: How to Protect Your Customers and Brands from Scams