AI is making phishing scams more dangerous



AI chatbots have taken the world by storm in recent months. We’ve been having fun asking ChatGPT questions, trying to find out how much of our jobs it can do, and even getting it to tell us jokes.

But while lots of people have been having fun, cybercriminals have been powering ahead and finding ways to use AI for more sinister purposes.

They’ve found that AI can make their phishing scams harder to detect, and therefore more successful.

Our advice has always been to be cautious with emails. Please read them carefully. Look out for spelling mistakes and grammatical errors. Make sure it’s the real deal before clicking any links.

And that’s still excellent advice.

But ironically, the phishing emails generated by a chatbot feel more human than ever, which puts you and your people at greater risk of falling for a scam. So we all need to be even more careful.

Crooks are using AI to generate unique variations of the same phishing lure. In addition, they’re using it to eradicate spelling and grammar mistakes and even to create email threads to make the scam more plausible.

Security tools to detect messages written by AI are in development, but they’re still a way off.

That means you need to be extra cautious when opening emails – especially ones you’re not expecting. First, always check the sender’s email address on the message. Then, if you have even the slightest doubt, double-check with the sender through another channel (not by replying to the email!).
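As a rough illustration of that first check, here’s a minimal Python sketch that compares the domain in a message’s From header against the domain you’d expect from that sender. The domain and headers below are made up for the example – real phishing filters do far more than this.

```python
import email
from email.utils import parseaddr

# Assumption for this example: we know the sender's legitimate domain.
EXPECTED_DOMAIN = "example.com"

def from_domain(raw_headers: str) -> str:
    """Extract the domain of the From: address from raw message headers."""
    msg = email.message_from_string(raw_headers)
    _, addr = parseaddr(msg.get("From", ""))
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def looks_suspicious(raw_headers: str) -> bool:
    """Flag a message whose From domain doesn't match the expected one."""
    return from_domain(raw_headers) != EXPECTED_DOMAIN

# A look-alike domain ("examp1e.com" with a digit 1) fails the check.
headers = "From: Accounts Team <billing@examp1e.com>\nSubject: Invoice overdue\n"
print(looks_suspicious(headers))  # True
```

Even a simple comparison like this catches the classic look-alike-domain trick, but it’s no substitute for verifying with the sender directly when something feels off.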

Get in touch if you need further advice or team training about phishing scams.

Published with permission from Your Tech Updates.
