Featured in Hilal English (The Armed Forces Magazine of Pakistan, by ISPR), November 2025

Human faces, voices and lives are now easy to fake. That’s a power that can be used for art and for harm. Here’s a clear, practical guide to the threat, with case studies, expert analysis and steps governments, platforms and organisations should take now.

Today’s AI can create convincing fake photos, voices and videos in minutes, often with free or low-cost tools. On mainstream services, the providers try to enforce rules; on the dark web, the same technology is sold without ethics, enabling scams, revenge porn and political manipulation.

How deepfakes are made

Two families of AI do the heavy lifting. Older systems used generative adversarial networks (GANs) to swap faces and map expressions; newer diffusion and large generative models can create or edit images and audio from simple text prompts. That means you no longer need a studio or many training photos: a short clip, a few voice samples or a good prompt can produce believable output. The technology’s quality and ease keep improving and the cost is dropping fast.
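To give a sense of how low the barrier now is, here is a minimal sketch using the open-source Hugging Face diffusers library to generate a photorealistic image from a single text prompt. The model checkpoint and prompt are illustrative placeholders; mainstream hosted services wrap calls like this in usage policies and content filters.

```python
# Illustrative only: a few lines of open-source tooling are enough to
# synthesise a photorealistic image from a plain-language prompt.
# Assumes the `diffusers` and `torch` packages are installed; the model id
# below is one publicly available example checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough (use "cpu" if none)

# The prompt is all the "training data" the user supplies.
image = pipe("press photo of a politician at a podium, flash photography").images[0]
image.save("synthetic_photo.png")
```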

Why this matters

  1. Financial fraud and corporate loss

Voice and video deepfakes have been used in high-value scams. One widely reported case involved a UK company tricked into a large transfer after a caller impersonated a parent-company executive; similar schemes later targeted global firms and produced multimillion-dollar losses. Criminals combine voice cloning, fake video calls and social engineering to push victims into urgent decisions.

  2. Political disinformation and social unrest

Deepfakes can make leaders appear to say things they never said, and in moments of crisis those fakes can spread panic or doubt. In 2022, an AI-manipulated clip purported to show Ukraine’s president ordering a surrender. It was debunked, but the danger is clear: in wartime or elections, fakes can amplify confusion and erode trust.

  3. Sexual abuse, revenge porn and child sexual imagery

Non-consensual sexual deepfakes are now an industry. Hundreds of thousands of deepfake porn clips have been found on dedicated sites, and law-enforcement groups and child-protection NGOs report a rising tide of AI-generated CSAM on hidden forums. The psychological and reputational damage to victims is profound and long-lasting.

  4. Erosion of trust and journalism’s crisis

When any video can be questioned, ordinary eyewitness evidence loses strength. Journalists, courts and citizens all face a harder task verifying what’s real and bad actors exploit that doubt. Platforms that host content struggle to detect and remove abuse at scale.

Case studies

CEO voice scam (2019 and repeats since): A fraudster used a cloned voice of a parent-company executive to pressure a UK firm into wiring funds. The method combined voice cloning, urgent tone and spoofed caller IDs. Experts warn variants now use video as well.

Arup engineering fraud (2024): A video-call impersonation (using AI-generated likeness and voice) tricked an employee into transferring HK$200m (≈£20m) to fraudsters’ accounts in Hong Kong. The incident shows how visual and audio fakes can be fused for high-stakes attacks.

Political deepfake attempts (2022): Media watchdogs debunked videos claiming Ukrainian leadership capitulation; the clips were shared by state and proxy networks to sow doubt. Debunking worked, but only after the clips had spread.

Deepfake pornography: Studies and investigations recorded hundreds of thousands of non-consensual deepfake porn videos across dozens of sites; platforms and law enforcement are struggling to keep up.

Public tools vs the dark web

Mainstream AI services (and app stores) generally impose use policies and moderation. They may ban creating porn of real people, political impersonation or voice cloning without consent. But the same or similar models, altered or repackaged, are distributed via open-source repos, mirror sites and dark-web marketplaces, often with safeguards removed. Cybercriminal forums also offer “deepfake as a service” and bespoke creation for hire. That split between polished, policy-constrained tools in public and unbridled tools on the dark web is what magnifies the risk.

Why detection and removal are hard

  • Models improve fast. Fakes look more realistic each year.
  • Scale is huge. Millions of clips and images are created daily.
  • Metadata is fragile. Uploads strip EXIF or provenance tags, and copies can evade blocklists (see the short check after this list).
  • Open tools and mirrors. Even if a platform bans a model, copies and forks reappear elsewhere.
  • Adversarial cat-and-mouse. Deepfakes adapt to evade detectors; detectors adapt in turn.
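The metadata point above is easy to demonstrate: most platforms re-encode uploads, so EXIF and provenance tags rarely survive. A minimal check, assuming Python with the Pillow imaging library and two local files standing in for an original photo and its re-uploaded copy:

```python
# Minimal illustration of how little metadata survives a typical upload.
# Assumes Pillow is installed; the file names are placeholders.
from PIL import Image

def summarise_metadata(path: str) -> None:
    """Print how many EXIF tags an image carries (often zero after re-upload)."""
    img = Image.open(path)
    exif = img.getexif()  # returns an empty mapping if tags were stripped
    print(f"{path}: {len(exif)} EXIF tags")

summarise_metadata("original_camera_photo.jpg")       # typically dozens of tags
summarise_metadata("same_photo_after_reupload.jpg")   # often zero
```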

Role of governments, platforms and policymakers

No single actor can fix this alone. Below are practical roles and policy levers.

Governments & lawmakers

  • Update criminal law to cover non-consensual deepfake sexual imagery, impersonation for fraud and identity theft, with penalties calibrated to harm. Several jurisdictions and subnational laws already target deepfake porn and election-related manipulation.
  • Mandate transparency and labelling. Laws can require clear, machine-readable labels for AI-generated media used in political advertising and news-adjacent contexts. Spain and the EU’s broader AI rules are moving in this direction.
  • Fund verification and victim support. Public money should help fact-checking bodies, digital forensics teams and counselling/legal aid for victims of deepfake abuse.


Platforms and tech industry

  • Adopt provenance standards. Support and implement open provenance tools (Content Credentials / C2PA) so creators can sign and trace origin. That makes tampering and unauthorized edits easier to spot at scale.
  • Embed detection in pipelines. Platforms should screen uploads automatically, prioritise takedown of non-consensual content and share indicators with industry peers (a simplified triage sketch follows this list). Investigations of tools hosted on GitHub, such as Wired’s reporting, show enforcement is hard but necessary.
  • Rate-limit or restrict high-risk APIs. Provide stricter access controls for functionality that facilitates impersonation.
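As a rough illustration of the triage idea in the second bullet, the sketch below ranks flagged uploads so that likely non-consensual or impersonation content reaches human reviewers first. The categories, weights and detector scores are hypothetical placeholders, not any platform’s actual system.

```python
# Hypothetical triage sketch: rank flagged uploads so that probable
# non-consensual ("ncii") or impersonation content is reviewed first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Report:
    priority: float                                # lower value = reviewed sooner
    upload_id: str = field(compare=False)
    category: str = field(compare=False)           # e.g. "ncii", "impersonation", "other"
    detector_score: float = field(compare=False)   # 0..1 confidence from an automated detector

def priority_for(category: str, detector_score: float, spread_rate: float) -> float:
    """Combine harm category, detector confidence and spread; weights are illustrative."""
    category_weight = {"ncii": 0.0, "impersonation": 0.2}.get(category, 0.5)
    return category_weight - 0.3 * detector_score - 0.2 * spread_rate

queue: list[Report] = []
heapq.heappush(queue, Report(priority_for("ncii", 0.92, 0.8), "upload-123", "ncii", 0.92))
heapq.heappush(queue, Report(priority_for("other", 0.40, 0.1), "upload-456", "other", 0.40))

next_case = heapq.heappop(queue)   # the non-consensual report comes out first
print(next_case.upload_id, next_case.category)
```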

Law enforcement & international co-operation

  • Treat deepfake fraud as serious economic crime. Encourage cross-border investigations because services and servers often span countries. The dark-web origin of many tools means international coordination is essential.

Civil society, media and the public

  • Media must label and verify. Newsrooms should use cryptographic provenance and clear disclosure when using synthetic content.
  • Education & digital hygiene. Teach organisations and citizens the red flags of fake calls and videos (urgent unverified transfer requests, unexpected video calls, inconsistencies in lighting or voice timbre) and the habit of verifying through a separate channel.
  • Support victims. Fast processes for takedown, legal recourse and mental-health support are crucial — especially for non-consensual sexual deepfakes and child-targeted material.

Technology options that help

Provenance & watermarking (C2PA / Content Credentials): If creators sign media at source, platforms and viewers can check authenticity later even if metadata is stripped from copies in transit. It’s not a silver bullet but it raises the cost for attackers and helps victims prove fakes were made.
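The underlying idea can be shown with ordinary public-key signatures: the capture device or publisher signs the media, and anyone can verify that signature later. The sketch below is a simplified stand-in for Content Credentials, using the Python cryptography package rather than the real C2PA manifest format:

```python
# Simplified sketch of "sign at source, verify later".
# This is NOT the actual C2PA / Content Credentials format, only the underlying
# public-key idea. Assumes the `cryptography` package is installed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The camera app or newsroom tool holds a signing key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# 2. At capture or publish time, sign the media bytes
#    (in practice this would be the raw file contents).
media_bytes = b"<raw video or image bytes go here>"
signature = signing_key.sign(media_bytes)

# 3. Later, anyone holding the public key can check the file is unmodified.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))                 # True
print(is_authentic(media_bytes + b" tampered", signature))  # False
```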

  • Runtime detection & triage: Combine automated detectors with human review; prioritise likely harmful content and use runtime signals (where it’s being posted, spread patterns) to escalate.
  • Identity verification for high-risk uses: For certain sectors (banking approvals, executive-level transfers), require multi-factor, out-of-band confirmations rather than trusting a single call or video (see the sketch after this list). This simple control defeats many social-engineering attacks.
  • Watermarks & cryptographic binding: New “soft binding” watermarks survive many edits and help recover provenance when metadata is lost — useful for platforms and archives.
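For the identity-verification point above, the pattern is simple: never approve a transfer over the channel the request arrived on. A minimal sketch of out-of-band confirmation, in which the helper callbacks for the second channel are hypothetical:

```python
# Minimal sketch of out-of-band ("step-up") confirmation for high-risk transfers.
# send_code_via_registered_channel() and prompt_for_code() are hypothetical
# callbacks: the point is that confirmation travels over a channel the caller
# on the suspicious video or voice call does not control.
import hmac
import secrets

def approve_transfer(amount: float, send_code_via_registered_channel, prompt_for_code) -> bool:
    # 1. Generate a one-time code and push it over a pre-registered, separate channel.
    expected = f"{secrets.randbelow(10**6):06d}"
    send_code_via_registered_channel(f"Confirm transfer of {amount:,.0f}: code {expected}")

    # 2. The requester must read the code back; a convincing call alone is never enough.
    supplied = prompt_for_code()

    # 3. Constant-time comparison; any mismatch blocks the transfer.
    return hmac.compare_digest(expected, supplied)

# Usage sketch with stand-in callbacks for the second channel.
ok = approve_transfer(
    250_000.0,
    send_code_via_registered_channel=lambda msg: print("SMS to registered number:", msg),
    prompt_for_code=lambda: input("Code read back by requester: "),
)
print("Transfer approved" if ok else "Transfer blocked")
```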

Recommendations for organizations

  1. Train staff to be sceptical of urgent video/voice requests; validate via separate channels.
  2. Add provenance checks to your newsroom or legal workflows; prefer signed assets when possible.
  3. Harden financial processes: add step-up auth for transfers, require written confirmations from known secure addresses.
  4. Prepare an incident playbook that includes legal takedown paths, PR responses and victim support.
  5. Monitor the dark web for mentions of your executives and brand to detect targeted deepfake campaigns early.

Freedom of expression vs harm

Some deepfakes are art or satire; others are criminal. Policy must thread that needle. Transparency requirements (clear labels, provenance) protect both expression and safety: creators can still make parodies if they disclose them, while abusive content is easier to police and penalise. The debate is active, and lawmakers across Europe, and in some U.S. states and cities, are already moving toward labelling and liability rules.


Final word

  • Are our banks and boards safe from single-vector voice or video scams?
  • Do our laws give victims speedy takedown and remedies?
  • Are platforms required to sign and check the provenance of political and newsworthy media?

If the answer to any of these is “not yet”, then we have work to do right now. The tech that can create a fake human face also gives us a chance to build stronger verification systems. We must act before trust becomes the most expensive commodity of all.