The Training Myth: Why Employees Can’t Outthink AI Phishing



    New research reveals that employees trained to spot phishing are increasingly helpless against AI-generated attacks—forcing security leaders to rethink their defense strategies.

    There’s a comforting assumption embedded in enterprise security strategy: train your employees well enough, and they’ll become a human firewall against social engineering. It’s the premise behind a $7 billion security awareness training industry, and it’s increasingly disconnected from reality.

    New research paints a troubling picture. AI-generated spear phishing emails now achieve a 54% click-through rate in controlled studies, compared to just 12% for generic phishing attempts. The same AI systems that were 31% less effective than human attackers in 2023 are now 24% more effective, according to ongoing experiments by security firm Hoxhunt. The learning curve hasn’t just flattened—it’s inverted.

    For security leaders, the implications are uncomfortable. The cornerstone of your human risk management program may be addressing yesterday’s threat while today’s attackers operate at machine speed and precision.

    The evidence gap in security training

    The security industry has long promoted a reassuring narrative: phishing simulations reduce click rates, therefore training works. KnowBe4’s 2025 benchmarking report shows click rates dropping from 33% to 4.1% after 12 months of training. At first glance, that’s a relative improvement of nearly 88%.

    But there’s a problem with this logic. Simulation performance doesn’t reliably predict real-world outcomes.

    A rigorous study from UC San Diego Health, presented at the IEEE Symposium on Security and Privacy in May 2025, followed over 19,500 employees through eight months of phishing campaigns. The researchers’ conclusion was stark: cybersecurity training programs “do little to reduce the risk” that employees will fall for phishing scams. More troublingly, they found no significant relationship between how recently users completed training and their failure rate with simulated phishing.

    The findings echo earlier research from ETH Zurich, which found that embedded training (the immediate feedback employees receive after clicking a simulated phish) “can make employees overconfident,” both in their ability to spot phishing and in the assumption that mistakes in simulations carry no repercussions. In some cases, training actually made employees more susceptible to future attacks.

    Why AI changes the training equation

    Generative AI has eliminated the traditional red flags that security awareness programs teach employees to recognize: misspellings, awkward grammar, suspicious sender addresses. What remains are attacks that are grammatically flawless, contextually appropriate, and devastatingly personalized.

    Consider the economics. IBM security researchers found that AI can generate an effective phishing campaign with just five prompts in five minutes, work that would take human experts 16 hours. The cost to launch an AI-assisted spear phishing attack has dropped to roughly $50 per week. At that price point, attackers can afford to research targets thoroughly, reference real projects and colleagues, and craft messages indistinguishable from legitimate business correspondence.

    The Scattered Spider hacking group demonstrated this capability at industrial scale. Using information scraped from LinkedIn, they identified employees at MGM Resorts, assumed their identities, and called the IT help desk. The entire social engineering attack took ten minutes. The resulting ransomware incident cost MGM $100 million and disrupted operations for over a week. Caesars Entertainment, hit by the same group using similar tactics, paid $15 million in ransom.

    These weren’t unsophisticated employees. MGM’s staff had presumably completed security awareness training. They were simply outmatched by attackers who knew exactly which psychological buttons to push.

    The healthcare sector's painful lesson

    No industry illustrates the limits of human-centric defense more clearly than healthcare. The sector faces the highest average breach costs of any industry at $10.93 million per incident. It’s also the most frequently targeted by phishing, and has the highest employee susceptibility rates in simulation testing.

    The February 2024 attack on Change Healthcare became the largest healthcare data breach in history, affecting 192.7 million Americans. Change Healthcare, a subsidiary of UnitedHealth Group, processes approximately 15 billion healthcare transactions annually and serves as critical infrastructure for claims processing, eligibility verification, and pharmacy benefits across the U.S. healthcare system. The entry point? Compromised credentials on a remote access portal that lacked multi-factor authentication. A ransomware group spent nine days moving through the network before deploying their payload. The total cost has exceeded $2.4 billion.

    Three months later, Ascension Health suffered a Black Basta ransomware attack that disabled electronic health records across 142 hospitals for nearly four weeks. The initial access vector: an employee who inadvertently downloaded a malicious file. That single click cascaded into a crisis affecting 5.6 million patients.

    The pattern is consistent. Even in organizations with mandatory security training, where employees understand the risks intellectually, attackers find ways through. The problem isn’t awareness; it’s that human judgment can’t reliably detect threats designed specifically to evade it.

    What the data actually tells us

    The most honest assessment of security training comes from researchers who’ve examined it without industry funding. A 2023 review by University of Adelaide scholars analyzed dozens of phishing awareness studies and concluded that “evidence on the success of programs in driving sustained behavioral change is limited.”

    The disconnect between simulation results and real-world outcomes has a simple explanation: attackers don’t use the same templates as training vendors. When phishing simulations test employees on obvious red flags, click rates decline. When real attackers craft personalized messages referencing specific business context, the same employees remain vulnerable.

    Unit 42’s 2025 incident response report found that 36% of all security incidents began with social engineering, and more than one-third of those involved non-phishing techniques like voice calls, fake system prompts, and help desk manipulation. In one case, attackers moved from initial access to domain administrator rights in under 40 minutes using only social pretexts and built-in system tools. No malware was deployed. No technical vulnerability was exploited. Just human persuasion, executed with precision.

    The case for automated defense

    None of this suggests that security training is worthless. It remains a necessary component of organizational security culture. But the evidence increasingly supports what the UC San Diego researchers recommended: “refocus efforts to combat phishing on technical countermeasures” rather than relying on human detection.

    This means investing in systems that don’t depend on employees recognizing threats. Multi-factor authentication (the control that was missing at Change Healthcare) prevents credential theft from becoming system compromise. AI-powered email security can detect behavioral anomalies and contextual manipulation that humans miss. And proactive disinformation security platforms can identify and neutralize attack infrastructure before phishing campaigns launch.
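    To make the "behavioral anomalies" idea concrete, here is a deliberately simplified sketch of the kind of signal-scoring such systems build on. This is a toy heuristic, not any vendor's actual detection logic; the field names and thresholds are illustrative assumptions, and production systems combine hundreds of signals with learned models rather than a handful of hand-set weights.

```python
# Toy illustration of behavioral email scoring -- NOT a production
# detector. Field names, weights, and keyword list are assumptions.

URGENCY_TERMS = {"urgent", "immediately", "wire", "gift card", "password"}

def anomaly_score(email: dict, known_senders: set) -> float:
    """Return a 0.0-1.0 risk score from a few cheap behavioral signals."""
    score = 0.0
    sender = email.get("from", "").lower()
    reply_to = email.get("reply_to", sender).lower()

    # Signal 1: Reply-To routes responses to a different domain
    # than the visible sender (common in payment-fraud phishing).
    if reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 0.4

    # Signal 2: first contact from a sender the organization
    # has never corresponded with before.
    if sender not in known_senders:
        score += 0.3

    # Signal 3: urgency or payment language in the message body.
    body = email.get("body", "").lower()
    if any(term in body for term in URGENCY_TERMS):
        score += 0.3

    return min(score, 1.0)
```

The value of this approach is that none of the signals depend on an employee noticing anything: the mismatch between sender and Reply-To, or the absence of any prior correspondence, is invisible in most mail clients but trivial for software to check on every message.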

    The Gartner prediction that 50% of enterprises will invest in disinformation security by 2028 reflects this shift. Organizations are recognizing that the attack surface has expanded beyond what training can protect. When AI-generated phishing achieves success rates exceeding 50%, the only sustainable defense is AI-native detection that matches attackers’ speed and sophistication.

The bottom line

    The goal isn’t to abandon training but to right-size expectations about what it can accomplish. Security awareness programs excel at building organizational culture and establishing baseline hygiene. They’re less effective as a primary control against targeted, AI-enhanced attacks.

    A more realistic approach allocates resources proportionally. Continue training for compliance and culture. But invest decisively in automated detection, identity verification, and rapid response capabilities that don’t require employees to outthink attackers in real time.

    The uncomfortable truth is that “trained users” were never a reliable security control—they were a convenient assumption that let organizations defer more expensive investments. As AI-powered social engineering erases the gap between novice attackers and expert manipulators, that assumption has become untenable.

    The only question is whether organizations will adjust their strategies proactively, or wait for the incident that proves the point.

    Disinformation security represents a fundamental shift in how organizations think about external threats. The question isn’t whether attackers will impersonate your brand, executives, or customer service—it’s whether you’ll detect it before significant damage occurs.

    The organizations investing early in AI-native disinformation defense are building capabilities that match the sophistication and speed of automated threats. Those waiting for legacy vendors to catch up may find themselves explaining to boards and customers why their defenses couldn’t keep pace with entirely predictable attacks.


Key takeaways

    Does security awareness training actually work against modern phishing attacks?

    Research shows mixed results. While training reduces click rates on simulated phishing, a UC San Diego study of 19,500 employees found it “did little to reduce the risk” of falling for real attacks. AI-generated phishing now achieves 54% click rates, more than four times higher than generic attempts, making traditional red flags obsolete.

    Why is AI-generated phishing more dangerous than traditional attacks?

    AI eliminates the telltale signs employees are trained to spot: spelling errors, awkward grammar, and generic messaging. AI can generate effective spear phishing campaigns in five minutes for as little as $50, with personalization that references real projects, colleagues, and business context. Hoxhunt research shows AI phishing improved from 31% less effective than humans in 2023 to 24% more effective by March 2025.

    Which industries are most vulnerable to social engineering attacks?

    Healthcare faces the highest breach costs at $10.93 million per incident and the highest employee susceptibility rates. Financial services follows closely, with 64% of institutions reporting business email compromise attacks in 2024. Manufacturing and telecommunications also show elevated risk due to extensive executive data exposure.

    What happened in the MGM Resorts social engineering attack?

    In September 2023, the Scattered Spider hacking group used LinkedIn to identify an MGM employee, then called the IT help desk and impersonated them. The ten-minute phone call gave attackers access to MGM’s systems, ultimately leading to a ransomware attack that cost $100 million and disrupted operations for over a week.


    What should organizations invest in instead of relying solely on training?

    Security leaders should prioritize technical countermeasures: multi-factor authentication, AI-powered email security that detects behavioral anomalies, identity verification protocols for sensitive requests, and disinformation security platforms that identify attack infrastructure proactively. Training remains valuable for culture-building but shouldn’t be the primary defense against sophisticated social engineering.
