Scammers are exploiting LinkedIn, Indeed, and other trusted platforms to impersonate employers, harvest credentials, and steal identities. Job seekers lost more than $501 million in 2024 alone.
For millions of job seekers, LinkedIn and Indeed have become the default starting point for career transitions. The platforms have earned that trust through years of connecting candidates with legitimate opportunities, building an infrastructure where sharing a resume, a phone number, or even banking details for direct deposit feels like a routine part of the hiring process. Scammers have noticed.
What began as isolated scams has grown into one of the fastest-growing fraud categories in the United States. The Federal Trade Commission tracked job scam losses climbing from $90 million in 2020 to more than $501 million in 2024, more than a fivefold increase in four years. Researchers at Gartner now predict that by 2028, one in four job candidates will be fake: not merely embellishing credentials, but presenting entirely fabricated identities designed to extract money, personal data, or both.
These scams exploit vulnerability at scale. Job seekers are already anxious, already conditioned to share sensitive information, and already primed to trust platforms they’ve used for legitimate purposes. That combination of emotional pressure and familiar trust is precisely what makes social engineering so effective in this context.
How recruitment fraud works
The damage from recruitment fraud flows in two directions, creating victims on both sides of the impersonation. Job seekers lose money (typically around $2,000, though individual cases can reach six figures) and surrender data that enables downstream identity theft and financial fraud. But the companies whose brands are borrowed without permission face their own consequences.
When scammers adopt a company’s logo, job descriptions, and visual identity, they inherit the trust that organization has built over years. Victims who lose money to fake recruiters don’t always distinguish between the scammer and the brand being impersonated. The FBI has documented cases where job seekers wrote negative reviews of companies they believed had defrauded them, not realizing those companies were impersonation victims themselves.
The reputational effects extend directly to talent acquisition. When news spreads that a brand has been exploited in recruitment scams, legitimate postings become suspect and qualified candidates hesitate to apply. Staffing firm Murray Resources received more than 100 calls and a negative online review in a single month after scammers impersonated its recruiters via text message. When a brand becomes entangled with scam associations, hiring pipelines suffer in ways that are hard to measure but add up over time.
AI has changed the economics
Generative AI didn’t invent recruitment fraud, but it has changed the math in ways that favor attackers. Creating a convincing fake posting, a cloned career portal, or a professional-sounding interview script once required meaningful time and skill. Now it requires a handful of prompts and a few hours of setup.
The same tools that help legitimate recruiters write compelling job descriptions help scammers produce grammatically perfect postings at scale. AI-generated headshots populate fake LinkedIn profiles that can pass casual inspection. Large language models power conversational responses during text-based “interviews” that feel authentic until the victim looks closer. Researchers estimate that creating a complete fake job candidate persona now takes less than a day for someone with no prior experience in image manipulation or social engineering.
The sophistication now extends to real-time video attacks. Cybersecurity firm Pindrop Security documented a case it dubbed “Ivan X,” a candidate who used deepfake software during his video interview to mask his true identity. The recruiter noticed that facial expressions were slightly out of sync with the words being spoken, and when the candidate was asked to wave a hand in front of his face, the software couldn’t maintain the illusion. CBS News reported on a separate company that encountered two suspected deepfake interviews in quick succession, forcing it to fundamentally restructure its hiring process: it now flies all candidates in for in-person meetings at company expense. The extra cost, the company determined, was cheaper than the alternative.
The platform problem
LinkedIn and Indeed aren’t indifferent to the fraud occurring on their services. The sheer scale of their enforcement efforts (80 million accounts removed, 117 million scam instances blocked) demonstrates genuine investment in the problem. But those numbers also reveal the magnitude of what they’re fighting. If platforms are catching and removing threats at that volume and attackers are still extracting half a billion dollars annually from job seekers, the economics continue to favor the scammers.
Verification features help at the margins. LinkedIn introduced badges indicating verified recruiters and verified job postings, and the company reports that more than half of its listings now carry these indicators. But verification addresses only the postings that originate on the platform itself. Scammers who contact victims directly via email, text message, or encrypted messaging apps bypass platform controls entirely, exploiting the trust that platforms have built without operating within their enforcement reach.
The Wall Street Journal reported that ghost jobs, listings for positions that don’t actually exist or that companies have no intention of filling, now account for 18 to 22 percent of postings on major platforms. While ghost jobs represent a different problem than outright fraud, they erode the trust that makes job boards function and create cover for malicious actors operating alongside legitimate but stale listings.
Stopping recruitment fraud
The challenge for organizations whose brands are being weaponized is that traditional security programs weren’t designed to address threats that operate entirely outside the corporate perimeter. Firewall rules and endpoint protection don’t help when attackers are borrowing your identity rather than breaching your systems. The attack surface is your reputation, and defending it requires capabilities that extend far beyond the infrastructure you control.
Effective protection starts with continuous monitoring across the surfaces where impersonation actually occurs: job boards, professional networks, social media platforms, and the newly registered domains where fake career portals appear. Detection must happen faster than victims click. Our research has shown that half of all victims engage with impersonation content within the first nine hours of an attack going live, which means the window for intervention is measured in hours rather than days.
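To make that concrete, here is a minimal sketch of one such monitoring surface: screening newly observed domains for brand lookalikes. Everything in it (the brand name, the keyword list, the sample feed) is an illustrative assumption; a production system would consume certificate transparency logs or registrar feeds and tune its thresholds against real fraud data.

```python
# Minimal sketch: screen newly observed domains for lookalikes of a protected
# brand. BRAND, the keyword list, and the sample feed are illustrative
# assumptions; a real deployment would pull from certificate transparency
# logs or new-registration feeds and calibrate the thresholds.
from difflib import SequenceMatcher

BRAND = "acmecorp"  # hypothetical protected brand
HIRING_KEYWORDS = ("career", "jobs", "hiring", "recruit", "apply")

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; catches typosquats like 'acrnecorp'."""
    return SequenceMatcher(None, a, b).ratio()

def is_suspect(domain: str, threshold: float = 0.75) -> bool:
    """Flag a domain whose leftmost label looks like the brand, or that pairs
    a brand-like string with hiring-themed keywords."""
    label = domain.lower().split(".")[0]  # e.g. 'acmecorp-careers'
    if BRAND in label and label != BRAND:
        return True  # brand name plus extra tokens ('acmecorp-careers')
    if any(kw in label for kw in HIRING_KEYWORDS) and similarity(label, BRAND) > 0.5:
        return True  # hiring keyword attached to a brand-like string
    return similarity(label, BRAND) >= threshold  # close typosquat

# Hypothetical feed of newly observed domains
for domain in ["acmecorp-careers.com", "acrnecorp.com", "weather-report.net"]:
    if is_suspect(domain):
        print("review:", domain)
```

In practice a fuzzy-matching pass like this is only a triage filter: flagged domains still need human or automated review before any takedown request is filed.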
Speed of response matters as much as speed of detection. A fake posting that remains active for a week will harvest far more personal data than one removed within hours. Managed takedown services with established relationships across platforms can coordinate removals at timescales that manual reporting cannot match. The most sophisticated defensive approaches go further still, using deception technology to poison the data that scammers collect, rendering stolen credentials worthless at resale and degrading the attacker’s return on investment.
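For a sense of how that poisoning works in principle, consider this minimal sketch. The record shape, domain, and tagging scheme are assumptions for illustration rather than a description of any specific deception product: the core idea is that each decoy submission carries a unique token, so any later sighting of that token is attributable to a specific scam campaign.

```python
# Minimal sketch of the decoy-data idea: seed a fake applicant record with a
# traceable honeytoken email. If that address later surfaces in spam runs,
# breach dumps, or resale listings, the embedded token attributes the leak to
# the scam campaign that harvested it. The domain, names, and record shape
# are illustrative assumptions, not any vendor's actual product.
import json
import secrets

HONEYTOKEN_DOMAIN = "mail.example.com"  # a domain the defender controls

def make_decoy_applicant(campaign_id: str) -> dict:
    """Build one plausible-looking but fictitious applicant whose email
    encodes a unique token, making any later use of it attributable."""
    token = secrets.token_hex(4)  # unique 8-character tag per decoy
    return {
        "name": "Jordan Reyes",  # fictitious
        "email": f"jordan.reyes+{token}@{HONEYTOKEN_DOMAIN}",
        "phone": "555-0137",  # reserved fictional number range
        "campaign_id": campaign_id,  # which fake posting received this decoy
        "token": token,  # stored server-side for later matching
    }

# Submit a few decoys to a suspected fake careers portal, then monitor
# external feeds for the tokens.
decoys = [make_decoy_applicant("fake-posting-0042") for _ in range(3)]
print(json.dumps(decoys, indent=2))
```

Plus-addressing is just one simple tagging scheme; a real deployment would likely rotate names, phone numbers, and document metadata so the decoys aren’t trivially filterable by the scammer.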
Job seekers bear individual responsibility for verifying opportunities before surrendering personal information. But they’re operating against adversaries with professional-grade operations, AI-powered tools, and scalable infrastructure. Until detection and takedown capabilities catch up to the speed of these attacks, the $500 million figure will continue climbing.
Key Takeaways
FTC data shows job scam losses grew more than fivefold from 2020 to 2024, and Gartner predicts one in four job candidates will be fake by 2028.
LinkedIn removed over 80 million fake accounts in late 2024 and 117 million scam instances in early 2025, yet fraud continues at scale.
Deepfake interviews, AI-generated postings, and cloned career portals now require minimal technical skill to create.
Victims surrender Social Security numbers, banking details, and personal data that fuels downstream financial fraud.
Half of victims engage within nine hours. Detection and takedown velocity directly limits harm.