Don’t Let Deepfakes Dupe You: The 7 Sneakiest Online Scams Exposed

Imagine getting a call from your "daughter" sobbing that she needs money to get out of danger, only to find out later that her voice was faked. Or seeing a viral video of a politician saying something so outrageous it sparks protests—but the footage is completely fabricated.

These nightmarish tricks used to seem far-fetched. Yet now, thanks to AI-powered deepfake technology, fakes are getting incredibly realistic, making all of us vulnerable to fraud in ways we never expected.

As an online privacy expert with over 10 years of experience securing systems against cyber attacks, I want to lift the curtain on the 7 most dangerous types of deepfake scams spreading today so you can protect yourself.

What are Deepfakes and How Did They Get So Deceptive?

The term “deepfake” combines the words “deep learning” and “fake.” Deep learning is a type of artificial intelligence that uses neural networks modeled after the human brain to analyze data.

Programmers train these AI systems by feeding them loads of images, videos, voice recordings and text samples related to a person. The algorithms study all those inputs and create an extremely detailed replica that can realistically mimic that person’s face, voice and expressions.

Then the programmers can make the simulated person appear to do or say things the real person never actually did. The results feel creepily authentic, even fooling people who know the impersonated person well.

“With enough data, machine learning algorithms can now synthesize images and audio nearly perfectly,” explains AI expert Alex Champandard.

Deepfake technology keeps advancing thanks to two key drivers:

  1. Availability of Data: Billions of our photos, videos and voice clips are available publicly online or have been leaked in data breaches. This gives fraudsters ample raw material to steal our identities.

  2. AI Compute Power: Deep learning models demand intense graphics processing performance from chips and cloud servers that once cost thousands of dollars. Now that hardware is affordable enough for average users.

Together these trends make deepfakes an ideal weapon for scammers. Let’s examine the 7 most dangerous varieties you need to watch for.

1. Fake Videos Impersonating People

Realistic-looking deepfake videos impersonating celebrities and politicians are already stirring chaos. But even everyday people can now be digitally puppeteered.

For example, what if a fraudster created a video appearing to show you cheating, breaking laws or saying something offensive? They could threaten to send the footage to your employer or loved ones unless you pay a ransom.

"Individuals are being digitally puppeteered without consent to create fake video content for purposes ranging from humiliation to fraud," warns Henry Ajder, expert in synthetic media threats.

As the examples pile up, experts warn it’s only a matter of time before forged videos destabilize financial markets, national security or the legal system.

  • Up to 96% of people can’t distinguish synthetic video from real content according to research by Facebook AI. This will only get worse as quality improves.

  • 65% of business leaders fear deepfakes could be used to compromise their operations according to Ponemon Research.

“Seeing is no longer believing when it comes to online video. We have entered a period of uncertainty regarding digital content,” warns Dr. Hany Farid, digital forensics expert at UC Berkeley.

2. Fake News and Disinformation

The 2016 U.S. Presidential election shone a spotlight on how influential fake news can be. Now deepfakes are poised to supercharge disinformation campaigns.

Forged videos and images spreading propaganda, hate and conspiracy theories could bring catastrophic outcomes before fact checkers can disprove them. And evidence shows botnets and troll farms are gearing up to blitz social platforms with deepfakes that drive division.

“The most realistic deepfakes are using more data, deeper neural networks and new algorithms such as GANs to nearly perfectly simulate human imaging and vocal systems,” says data scientist Sam Diaz.
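The GANs (generative adversarial networks) Diaz mentions pit two models against each other: a generator that fabricates data and a discriminator that tries to flag fakes, each improving against the other. As a purely illustrative sketch (nothing like a production deepfake system, which uses deep networks over images or audio), here is a one-dimensional toy GAN in plain NumPy, where a one-parameter "generator" learns to imitate numbers drawn from a bell curve:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: draws from N(4, 1).
def real_samples(n):
    return rng.normal(4.0, 1.0, size=n)

theta = 0.0       # generator: g(z) = theta + z, with noise z ~ N(0, 1)
a, b = 0.1, 0.0   # discriminator: d(x) = sigmoid(a * x + b)
lr = 0.03

for step in range(3000):
    fake = theta + rng.normal(size=64)
    real = real_samples(64)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    # For binary cross-entropy, the gradient w.r.t. the logit is p - label.
    for batch, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(a * batch + b)
        grad_logit = p - label
        a -= lr * np.mean(grad_logit * batch)
        b -= lr * np.mean(grad_logit)

    # Light weight decay on the discriminator damps the well-known
    # oscillation of plain simultaneous GAN updates, so this toy settles.
    a -= lr * 0.5 * a
    b -= lr * 0.5 * b

    # Generator step: nudge theta so d(fake) moves toward 1, i.e. so the
    # fakes fool the discriminator (chain rule: d(fake)/d(theta) = 1).
    p = sigmoid(a * fake + b)
    theta -= lr * np.mean((p - 1.0) * a)

print(f"generator mean: {theta:.2f}")  # settles near the real data mean of 4.0
```

Real deepfake generators apply this same adversarial loop with millions of parameters over pixels or audio samples, which is why even this tiny version needs a stabilizer: unregularized GAN training is famous for oscillating instead of converging.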

Some key risks analysts foresee:

  • Fake videos sparking violence at political rallies or protests before authorities can intervene.
  • Forged footage manipulating stock prices if released through mainstream media.
  • Deepfaked ransom letters or recordings from hacked accounts demanding cryptocurrency.

In one alarming example, a deepfaked video of Gabon’s president seeming to admit election defeat nearly sparked a constitutional crisis in the African nation in 2019.

“The most realistic deepfakes integrate not only speech mimicry but also gestures and expressions, making them very difficult to detect before damage is done,” explains disinformation researcher Aviv Ovadya.

| Statistic | Value |
| --- | --- |
| % of digital info estimated fake by 2023 | 14% |
| % of people who believe deepfakes | 83% |
| Projected global deepfake market size by 2027 | $13+ billion |

Statistics from Social Catfish and Reports and Data

Clearly this is just the beginning of an avalanche of deception…

3. Fake Job Interviews

Imagine getting a call offering your dream job at an amazing salary. To start right away, they just need you to pay $500 for equipment and training. Or to wire $1000 to run a quick background check.

Too good to be true? Definitely. But now these employment scams use AI to perfectly impersonate hiring managers from Fortune 500 companies and elite firms. The believable fake interviews dupe applicants into paying upfront “fees” they never recover.

“Deep voice cloning allows overseas fraudsters to deceive job seekers with great authenticity at scale,” explains cybersecurity reporter Lindsey O’Donnell.

Losses to these remote work cons exceeded $605 million in 2021 according to the Federal Trade Commission. And the CEO of Social Catfish warns they are fielding 20x more complaints about convincing fake interviews, thanks to synthetic voice AI.

4. Fake Calls Demanding Ransoms

“Mom! Help me, please! I’m trapped…” Imagine getting a desperate call like that from your daughter or husband, then violent threats in the background demanding money. Synthetic voice fraud can now clone family members pleading for help so realistically that even blood relatives get fooled.

Criminals only need short voice samples to fabricate custom vishing attacks aimed at tricking victims under distress into immediate money transfers. Because the AI replications utilize unique vocal tics and emotional cues from relatives, victims rarely pause to verify the caller’s identity.

Losses to these ransom extortions exceeded $350 million globally as of 2021, a figure expected to double annually.

“Today’s voice cloning tools allow teens to create compelling social engineering attacks from home that most people cannot distinguish from real victims,” explains cybersecurity reporter Lindsey O’Donnell.

Fraudsters often demand untraceable payments like gift cards or cryptocurrency from panicking victims too overwhelmed to ask many questions.

Top executives have also become targets of deepfake kidnapping and extortion attempts. Criminals fake the voice of the CEO or CFO demanding sensitive documents or bank transfers from subordinates. This corporate voice fraud siphoned over $2 billion globally from businesses in 2021 according to FBI reports.

5. Fake Honey Traps

Lonely people searching for love often let their guard down online. This makes them prime targets for deepfake honey traps—faked romances crafted to manipulate victims.

AI tools now allow scammers to generate endless attractive fake personas along with automated chat conversations. After building emotional intimacy, they convince victims to transfer money abroad or share compromising photos/videos then blackmail them by threatening exposure.

These custom-tailored psychological schemes integrate identity details harvested from dating profiles and social media to make the illusions more believable. 85% of dating app users say they’ve been solicited into fake relationships according to 2020 FBI statistics. One app maker even discovered 96% of messages sent to members appeared generated by bots or scammers.

“With synthetic media, the scales of deception online tip drastically in favor of the scammers,” explains Henry Ajder, expert in synthetic media threats.

In addition to extorting money, fake honey traps also aim to steal corporate information from executives or government officials by using flirtation as a Trojan horse.

6. Fake Customer Support

That “helpful” service rep on the phone might sound legit, but how do you know for sure it’s your credit card firm or software vendor and not a scammer? AI tools can now clone brands’ IVR systems and any employee’s voice convincingly.

These support impersonators trick users into installing malware or sharing passwords and financial details. Losses to these business email and phone compromises exceeded $2 billion globally in 2021 alone according to the FBI.

28% of firms surveyed had already experienced synthetic voice social engineering attacks costing over $50,000 each. The quality of AI voice clones even has experts unsettled.

“We need to brace for entirely synthetic audio attacks compromising enterprise security just as effectively as their human-driven counterparts,” warns Dr. Houman Homayoun, senior scientist at cybersecurity firm SentiLink.

7. Fake Reviews and Testimonials

Research shows over 80% of consumers now read online reviews before buying products. And 92% say positive reviews make them more likely to purchase.

This motivates fraudulent reviews…lots of them. Up to 30% of online evaluations appear suspicious or totally fabricated according to the Better Business Bureau.

Deepfakes take fake testimonials to the next level—video and images of “satisfied customers” cheerfully endorsing products they never actually bought or used. The fakes seem authentic thanks to accurately lip-synced speech and natural facial expressions.

DIY deepfake apps even let amateur scammers without much technical expertise generate bogus celebrity endorsements. They profit from affiliate links every time their fake praise dupes someone into buying.

For example, fraudsters created a deepfaked video of Tom Hanks lauding a cryptocurrency scam that swindled investors out of $100K. With enough processing power, they could have mimicked almost any famous person.

“Near-perfect emulation of facial movements and spoken intonations place deepfakes beyond any technology we’ve seen for falsifying video,” warns Dr. Hao Li, expert in computer vision and graphics at USC.

| Fake Review Statistic | Details |
| --- | --- |
| % of reviews faked | Up to 30% |
| Annual losses to online review fraud | $152+ billion |
| % who won’t buy after seeing negative reviews | 92% |

Statistics from Finances Online and other analyses

As you see, deepfakes introduce unbelievable new risks to ecommerce.

Which Deepfake Scams Pose the Biggest Threats?

I evaluated the 7 categories of deepfake fraud based on two factors—estimated financial losses and probability of tricking people. Here’s how I rank which seem most dangerous currently:

  1. Fake Customer Support
  2. Voice Fraud
  3. Fake Job Interviews
  4. Video Impersonations
  5. Fake Relationships
  6. Fake Reviews
  7. Fake News

However, over the next 5 years, I foresee exponential growth in volumes and sophistication around synthetically faked videos, news, and social media engagement. So while financial costs of those scams seem lower now, their potential to cause catastrophic outcomes stands horrifyingly high…

How Can You Protect Yourself from Deepfake Scams?

I wish I could tell you scientists have a definitive solution to stop deepfakes. But the reality is this technology has matured faster than defenses against it. Nevertheless, here are a few precautions I advise everyone to take:

🔴 Thoroughly verify identities and situations in voice/video calls before sending any money or sensitive data to strangers who contact you unexpectedly.

🔴 Analyze online profiles carefully before connecting—fake personas often use vague details and limited photos.

🔴 Compare information, writing style and recommendations across multiple sites before trusting reviews.

🔴 Keep device security protections like antivirus software and firewalls updated.

🔴 Only enter confidential login credentials on secured websites using HTTPS addresses.

Essentially, we all must stay vigilant for anything suspicious and stop assuming that the videos, audio and images we see online represent reality.

Because one thing about deepfakes remains undisputed among experts—this crisis will get worse before it gets better…

Could You Be at Risk of Deepfake Scams?

By now you’re probably wondering if you could fall victim to these insidious frauds. Rest assured you’re not alone. Anyone could be duped depending on the circumstances.

Suppose your boss sent an email that appeared to come from the right address asking you to buy $500 in gift cards for a client…would you comply or get suspicious?

What if you got a call with your daughter’s voice urgently begging for help—would you wire money no questions asked?

When faced with such tense situations requiring split-second decisions, even smart people can get manipulated without realizing it.

So don’t feel bad or embarrassed if you ever do get tricked. These criminals exploit our bonds to family and loyalty to employers specifically because they know how hard it is for good people to say no.

The best we can do is inform each other about the latest schemes so we can be prepared. Knowledge truly gives us power against these predators.

Please share this article to help friends and colleagues protect themselves as deepfakes proliferate!
