
Phishing Emails Used to Be Easy to Spot. AI Changed That.

82% of phishing emails now use AI. They have perfect grammar, know your name, and reference real details about your life. The old advice — look for typos — no longer works. Here's what does.

There used to be a reliable way to spot a phishing email. Bad grammar. Misspelled words. A Nigerian prince. A logo that looked slightly off. Security trainers built entire curricula around these tells, and for years, they worked.

That era is over.

82.6% of phishing emails detected between September 2024 and February 2025 used AI — a 53.5% year-on-year increase. The emails no longer contain spelling mistakes. They reference real details. They match the writing style of the person or company they're impersonating. And AI-generated phishing emails achieve click-through rates more than four times higher than their human-crafted counterparts.

The problem is not that people are careless. The problem is that the tools for detecting fake emails were built for a threat that no longer exists.


What AI Actually Changed

Traditional phishing worked on volume. Send ten million poorly written emails, hope that a fraction of a percent of recipients click. The emails were cheap to produce, obviously fake to anyone paying attention, and caught by most spam filters.

AI flipped this model.

While a human attacker might spend 30 minutes crafting a single spear-phishing email, AI tools generate hundreds of contextually unique variations in the same timeframe. Each email can be personalized to the recipient — referencing their name, their employer, their recent activity, or their role — without any additional effort from the attacker.

The results are measurable. In a 2024 benchmark study by Brightside AI, AI-crafted phishing emails achieved 54% click rates, compared to just 12% for human-written ones. That is not a marginal improvement. It is a fundamental shift in how effective these attacks are.

The grammar and spelling tells are gone. Modern language models can replicate a company's style of communication, making impersonation attacks significantly harder to detect by appearance alone.


The New Attacks You Haven't Heard Of

Beyond better email writing, AI has enabled attack types that didn't meaningfully exist a few years ago.

Voice cloning — Attackers use AI to clone the voice of someone the target knows — a manager, a colleague, a family member — and call them with instructions. Voice cloning has crossed the "indistinguishable threshold," meaning human listeners can no longer reliably distinguish cloned voices from authentic ones. An employee receiving a call that sounds exactly like their manager asking them to urgently reset a password or approve a transfer has no way to tell, from the voice alone, that it's fake.

Deepfake video calls — The same principle applied to video. A single deepfake video call cost engineering firm Arup $25.6 million. Employees on a video call with what appeared to be real colleagues approved a fraudulent transaction. The colleagues were AI-generated in real time.

Hyper-personalized spear phishing — AI enables targeted attacks that reference specific organizational details. One documented campaign targeted 800 accounting firms with AI-generated emails referencing specific state registration details, achieving a 27% click rate — far above the industry average.

QR code phishing — Nearly one in four phishing campaigns used QR codes or malicious links disguised as MFA prompts. A QR code in an email bypasses most link-scanning tools because the malicious URL is embedded in an image, not text.

Workflow impersonation — Researchers identified 29,183 unique phishing domains using e-signature and document approval-themed lures. The attack looks like a routine document requiring a signature — the kind of email that arrives dozens of times a day in most workplaces.


Why the Old Advice Doesn't Work Anymore

"Look for typos" — obsolete. AI writes better than most humans.

"Check if the sender looks suspicious" — insufficient. Display names are trivially spoofed and look identical to legitimate senders in most email clients.

"Hover over the link to check the URL" — increasingly unreliable. URL redirection was used in 48% of phishing links, up from 39% a year earlier. The URL you see when hovering may be a legitimate redirect service masking the final malicious destination.

"If it has the company logo, it's probably real" — wrong. AI tools create hundreds of fraudulent websites using the logo, presentation style, and colors of real brands, with user experiences often indistinguishable from legitimate ones.

The old mental checklist was calibrated for a specific type of attack. That attack has been replaced by something that looks nothing like it.


What Actually Works Now

The shift required is from asking "does this look fake?" to asking "should I be doing this at all?"

Verify requests through a separate channel. If you receive an email asking you to do something — approve a payment, reset a password, share credentials, click a link — verify the request through a different communication channel before acting. Call the person on a known number. Send a separate message. Don't reply to the email itself or use contact information provided in it.

This single habit defeats the majority of AI phishing attacks because they rely on you acting within the communication channel they control. A phone call to a known number breaks that chain entirely.

Be skeptical of urgency. Attackers study timing and behavioral patterns to craft messages that provoke an immediate, unthinking response. Legitimate requests — from your bank, your employer, your colleagues — can almost always wait for verification. If a message demands immediate action and creates a sense of panic, that is a signal to slow down, not speed up.

Use phishing-resistant authentication. Standard two-factor authentication using SMS codes or authenticator apps protects against password theft but not against real-time phishing attacks that capture your code as you enter it. FIDO2 credentials, whether hardware security keys or platform passkeys, are the strongest defense against these relay attacks. Unlike SMS or TOTP codes, FIDO2 credentials are domain-bound — they refuse to authenticate on a proxy site that spoofs the legitimate domain.
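To make the domain binding concrete, here is a simplified sketch of one check a WebAuthn server performs: the first 32 bytes of the authenticator data are the SHA-256 hash of the RP ID, the domain the browser was actually on. The real protocol also verifies a signature over this data with the registered public key, which this sketch omits; the function name is illustrative.

```python
import hashlib

def rp_id_hash_matches(authenticator_data: bytes, expected_rp_id: str) -> bool:
    """Simplified WebAuthn check: the first 32 bytes of the
    authenticator data are SHA-256 of the domain (RP ID) the
    browser actually used during authentication."""
    presented = authenticator_data[:32]
    expected = hashlib.sha256(expected_rp_id.encode("utf-8")).digest()
    return presented == expected

# A proxy on a lookalike domain produces a different hash, so the
# login fails even if the victim does everything the attacker asks:
real = hashlib.sha256(b"example.com").digest()
fake = hashlib.sha256(b"examp1e.com").digest()  # digit 1, not letter l
print(real == fake)  # False: the credential simply will not work there
```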

Check the actual domain, not the display name. In your email client, click on the sender's name to expand and see the actual email address. Look specifically at the domain — the part after the @ symbol. A display name can say anything. The domain is harder to fake convincingly, though lookalike characters make even this imperfect.
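In code terms, the split between display name and domain looks like this. A minimal sketch using only Python's standard library; the addresses are hypothetical.

```python
from email.utils import parseaddr

def sender_domain(from_header: str) -> tuple[str, str]:
    """Split a From: header into the display name (attacker-chosen,
    proves nothing) and the domain (the part worth checking)."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name, domain

# Hypothetical spoof: the display name and the domain disagree.
name, domain = sender_domain('"Acme Payroll" <notify@acme-payrolls.xyz>')
print(name)    # Acme Payroll       <- anyone can write anything here
print(domain)  # acme-payrolls.xyz  <- not acme.com
```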

Be especially careful with QR codes in emails. A QR code in an email from an unknown sender or unexpected source should be treated with the same skepticism as a suspicious link. QR codes are harder to preview than URLs and increasingly used precisely because most people don't think to question them.
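One practical workaround is to decode the QR code on a computer first, so the embedded URL can be read and checked like any other link before a phone camera ever touches it. A sketch, assuming the third-party pillow and pyzbar packages are installed (pyzbar also needs the system zbar library):

```python
from PIL import Image
from pyzbar.pyzbar import decode

def qr_payloads(image_path: str) -> list[str]:
    """Decode any QR codes in a saved image and return their
    payloads as text, so the URL can be inspected before scanning."""
    return [result.data.decode("utf-8", errors="replace")
            for result in decode(Image.open(image_path))]

# Hypothetical usage: save the image from the email, then inspect it.
for payload in qr_payloads("suspicious_email_qr.png"):
    print(payload)  # treat this URL like any other suspicious link
```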


The Scale of the Problem

3.4 billion phishing emails are sent every day. 91% of cyberattacks start with an email.

Phishing remains one of the most devastatingly expensive breach vectors. According to IBM's Cost of a Data Breach Report 2025, the global average cost of a data breach reached $4.44 million. Verizon's 2025 Data Breach Investigations Report found that approximately 60% of breaches involved a human element — heavily driven by social engineering and phishing.

These numbers describe a threat that is getting worse, not better, as AI tools become cheaper and more accessible. AI-enabled fraud surged 1,210% in 2025, with projected losses reaching $40 billion by 2027.


The Bottom Line

AI did not invent phishing. It industrialized it. The volume is higher, the targeting is more precise, the emails are more convincing, and the delivery channels have expanded beyond email to voice calls, video calls, and messaging apps.

The old defense — learn to spot the tells — was always a patch on a systemic problem. It worked when the tells were obvious. They no longer are.

What works now is behavioral: verify unexpected requests through a separate channel, be skeptical of urgency, use strong authentication on accounts that matter, and treat QR codes in emails with the same caution as suspicious links.

The emails look real. The voices sound real. The faces on video calls may not be real. The question to ask is not whether a communication looks legitimate — it's whether you should act on it at all before independently verifying who sent it.