Deepfake Threats in India: How to Protect Yourself

India isn't ready for deepfakes. Not the government, not the platforms, and definitely not the average WhatsApp user. Here's how bad it's gotten and what you can realistically do about a problem nobody's solving fast enough.

Vikram Singh · 14 min read

India is woefully, embarrassingly unprepared for the deepfake crisis that's already here. Not coming — already here. And the response from the government, from tech platforms, from law enforcement has been so inadequate it would be funny if real people weren't getting destroyed by this technology every single week. The CyberPeace Foundation ranked India among the top five countries for deepfake incidents in 2025, and that's only counting what got reported. The actual numbers are probably several times worse because most victims either don't know they've been targeted or are too ashamed to come forward.

I'm genuinely frustrated writing this because I've been covering digital threats in India for years and the pattern is always the same: a technology emerges, criminals adopt it immediately, and the regulatory response arrives three to five years later when the damage is already done. That's exactly what's happening with deepfakes, except the damage potential here is orders of magnitude worse than phishing or SIM-swap fraud because deepfakes attack something more fundamental — your ability to trust what you see and hear.

The Money Problem

Let's start with financial fraud because that's where the rupees are. Voice-cloning deepfakes have become the weapon of choice for high-value corporate fraud in India. The setup is disturbingly simple. An attacker scrapes a few minutes of audio from a CEO's earnings call, a public speech, or even a LinkedIn video. AI models — many of which are freely available online — clone the voice with enough accuracy to fool someone who speaks to that person regularly. Then comes a phone call, or increasingly a video call using real-time face-swapping, in which the "CEO" urgently instructs someone in finance to process a wire transfer. In a case that made headlines in late 2025, a Bangalore-based SaaS company lost Rs 1.3 crore when their CFO received a video call from what appeared to be the founder, demanding an emergency vendor payment. The "founder" was entirely synthetic. The CFO had spoken to the real founder hundreds of times and noticed nothing wrong during the four-minute call.

Smaller-scale deepfake fraud is hitting regular people too. There's a scam pattern that emerged on WhatsApp around mid-2025 where victims receive video calls from contacts whose faces and voices have been cloned. The fake contact asks for an urgent UPI transfer — a medical emergency, a stranded-while-traveling scenario, that kind of thing. The quality isn't always perfect, but a pixelated WhatsApp video call provides cover for minor visual artifacts. People trust video calls more than voice calls, which is exactly the psychological lever the attackers are pulling. Reports from cyber crime cells in Mumbai, Delhi, and Hyderabad suggest these cases are increasing month over month, but there aren't official statistics because there's no separate reporting category for deepfake-enabled fraud.

Elections, WhatsApp, and the Misinformation Firehose

India has 500+ million WhatsApp users. That number alone should terrify anyone thinking about deepfake-powered misinformation. During the 2024 state elections, deepfake videos of politicians making inflammatory statements circulated widely. One particularly convincing clip showed a sitting chief minister apparently admitting to corruption — it was entirely fabricated, generated using publicly available footage and an open-source face-swapping tool. The video racked up millions of views across WhatsApp, Twitter, and YouTube before fact-checkers at AltNews and BoomLive identified it as synthetic. By then, the damage was done. The correction never travels as far as the lie, and that asymmetry is exactly what makes deepfakes so potent for political manipulation.

What makes India uniquely vulnerable is the combination of massive WhatsApp adoption, relatively low digital literacy in rural areas, and the emotional intensity of political discourse. A deepfake doesn't need to be perfect to be believed — it just needs to confirm what someone already wants to think about a political opponent. The threshold for "good enough" is much lower than you'd expect.

The Gender Dimension Nobody Talks About Enough

This is the part that makes me angriest. Women in India are being targeted with deepfake intimate imagery at a scale that's staggering and almost entirely unpunished. The technology to superimpose someone's face onto explicit content has become so accessible that it doesn't require any technical skill — apps and Telegram bots do it with a single photograph as input. A selfie from Instagram, a profile picture from LinkedIn, a photo from a college WhatsApp group — any of these is sufficient raw material.

The victims are college students, professionals, journalists, activists, ex-girlfriends, teachers — basically any woman with a photograph available anywhere online. The deepfake images or videos are then used for blackmail, harassment, revenge, or simply distributed for the creator's amusement. Filing a complaint means going to a police station, explaining deepfake technology to officers who may have never heard of it, providing copies of the explicit material as evidence (which is itself a traumatic and humiliating process), and then waiting months or years for an investigation that may go nowhere. Most victims stay silent. The ones who speak up face an additional burden of "why did you have photos online?" as if the mere existence of a photograph constitutes consent to its weaponization.

The Internet Freedom Foundation and IT for Change have both documented cases and pushed for legislative action. Some progress has been made — MeitY's advisories in 2024 and 2025 technically require platforms to remove reported deepfakes within 36 hours and label AI-generated content. But "technically require" and "actually enforce" are very different things, and the platforms know it.

Spotting Fakes (For Now)

Current deepfakes still have tells, though the window for human detection is closing fast. Unnatural blinking — either too frequent, too slow, or strangely timed — remains a common artifact. Lip-sync mismatches, especially on consonant sounds like "b," "m," and "p" that require specific lip shapes, can reveal synthetic audio-visual pairing. Look at the edges of the face, particularly around the jawline, hairline, and ears — blending artifacts often appear as slight blurring or color mismatches in those areas. Teeth are surprisingly hard for AI to render consistently, so watch for rows of teeth that look too uniform or that flicker between frames. Earrings, glasses frames, and hair strands crossing the face often cause glitches because the model struggles with fine details at the boundary between the swapped face and the original image.
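If you want to poke at the blink heuristic yourself, the sketch below shows the standard eye-aspect-ratio (EAR) approach in Python. It assumes the opencv-python and mediapipe packages; the eye landmark indices follow MediaPipe's FaceMesh numbering, and the 0.21 threshold is a common rule of thumb rather than a calibrated value. A talking head who blinks far less often than the typical 15 to 20 times per minute deserves a closer look, though as noted, newer fakes increasingly pass this test.

```python
# Minimal blink-rate sketch using the eye aspect ratio (EAR) heuristic.
# Assumes opencv-python and mediapipe; landmark indices follow MediaPipe
# FaceMesh numbering. One weak artifact signal, not a deepfake detector.
import cv2
import mediapipe as mp
import numpy as np

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around the left eye

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it dips toward zero mid-blink
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    horiz = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * horiz)

def count_blinks(video_path, ear_threshold=0.21):
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = np.array([[lm[i].x * w, lm[i].y * h] for i in LEFT_EYE])
        if eye_aspect_ratio(pts) < ear_threshold:
            if not eye_closed:  # count the transition, not every closed frame
                blinks += 1
            eye_closed = True
        else:
            eye_closed = False
    cap.release()
    return blinks
```

Compare the count against the clip's length: a two-minute interview with two blinks, or with forty, is exactly the kind of anomaly this flags.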

Lighting is another giveaway. If the shadows on a person's face don't match the lighting in the rest of the frame — light coming from the left on the face but from the right on the background — that's a strong indicator of manipulation. The background itself sometimes shows subtle warping or distortion around the subject's head as the face-swap algorithm slightly deforms surrounding pixels.

But I need to be honest: these detection methods have a shelf life. Each new generation of deepfake models fixes the artifacts of the previous one. The blinking problem was a reliable indicator in 2023; by late 2025, most sophisticated deepfakes handle it correctly. We're in an arms race where human visual detection is losing. Tools are going to matter more than eyes going forward.

Detection Tools That Actually Work (Mostly)

Microsoft's Video Authenticator analyzes frames for digital fingerprints of manipulation and outputs a confidence score. It isn't publicly available as a consumer tool, but some news organizations and platforms have access. Deepware Scanner is a free mobile app that can analyze short video clips for deepfake indicators — it's not perfect, but it catches a good percentage of lower-quality fakes. The InVID/WeVerify browser extension, used by journalists and fact-checkers, provides reverse image search, metadata analysis, and video fragment checking that can help trace the origin of a suspicious clip. Sensity AI operates a commercial deepfake detection platform that some Indian enterprises have started using for executive communication verification.
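The reverse-search step in the InVID workflow is easy to approximate on your own. Below is a minimal sketch, assuming only the opencv-python package, that pulls evenly spaced frames out of a suspicious clip; you can then drop those stills into Google Lens or any reverse image search to check whether the "new" footage actually predates the event it claims to show, which is how many recycled and manipulated clips get caught.

```python
# Pull evenly spaced still frames from a video so they can be fed to a
# reverse image search by hand. Assumes opencv-python; paths illustrative.
import os
import cv2

def extract_keyframes(video_path, out_dir="frames", count=8):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(count):
        # Seek to the i-th of `count` evenly spaced positions in the clip
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / count))
        ok, frame = cap.read()
        if not ok:
            continue
        path = os.path.join(out_dir, f"frame_{i:02d}.jpg")
        cv2.imwrite(path, frame)
        saved.append(path)
    cap.release()
    return saved
```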

None of these tools are foolproof. The detection accuracy degrades against state-of-the-art deepfakes, and the lag between new generation techniques and updated detection models is measured in months. But using them is better than relying on your eyes alone, especially for content that could influence financial decisions or political opinions.

What Indian Law Sort of Covers

There is no deepfake-specific law in India. That sentence should be unacceptable in 2026, but here we are. What exists is a patchwork of provisions that were written for a pre-AI world and are being stretched to cover deepfake harms. Section 66E of the IT Act punishes capturing and publishing private images without consent — originally meant for voyeurism, it's been invoked in some deepfake intimate imagery cases. Sections 67 and 67A cover publication of obscene and sexually explicit material electronically. Section 500 of the Indian Penal Code (now Bharatiya Nyaya Sanhita Section 356) addresses defamation, which can theoretically apply to reputation-damaging deepfakes. MeitY's advisories from 2024-2025 require intermediaries to take down AI-generated content when reported, but advisories aren't legislation — they lack the enforcement teeth of a statute.

The Parliamentary Standing Committee on IT discussed deepfakes in a session in late 2025, and several members expressed concern, but no bill has been introduced. India needs a dedicated statute that criminalizes the creation and distribution of malicious deepfakes, establishes expedited takedown procedures, provides clear remedies for victims, and imposes obligations on platforms to proactively detect and label synthetic media. We're nowhere close to having that.

Protecting Yourself (The Limits of Individual Action)

The standard advice is to reduce your public digital footprint — make your Instagram private, remove photos from public profiles, be careful about video content you share. That advice is correct and also insufficient. You can't control photos that others post of you, images captured by CCTV or event photographers, or old content that's already been cached and scraped. But reducing the volume of high-quality facial imagery available to an attacker does raise the difficulty bar. A deepfake made from a single blurry photo is much less convincing than one trained on fifty high-resolution images from different angles.

For organizations, the practical defense against voice-cloning fraud is implementing verification protocols that don't rely on recognizing someone's voice or face. Code words, callback verification through a separately stored phone number, dual authorization for financial transactions — these processes are annoying and slow and they work. Any organization that allows a single video call to authorize a large financial transfer is asking to be deepfaked.
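To make that concrete, here is the rule as a short Python sketch. Every name, number, and threshold in it is made up for illustration, not any real payments API; the point is the shape of the policy: above a limit, nothing moves without an out-of-band callback to a pre-registered number plus an independent second approver.

```python
# Illustrative sketch of a "no single channel authorizes money" policy.
# All names, numbers, and thresholds are invented for the example.
from dataclasses import dataclass, field

# Callback numbers stored separately from chat apps and email signatures
CALLBACK_REGISTRY = {"founder": "+91-XXXXXXXXXX"}

HIGH_VALUE_LIMIT_INR = 100_000  # policy threshold, illustrative

@dataclass
class TransferRequest:
    requester: str                   # who asked for the money
    amount_inr: int
    callback_verified: bool = False  # confirmed on the registered number
    approvers: set = field(default_factory=set)

def may_execute(req: TransferRequest) -> bool:
    if req.amount_inr < HIGH_VALUE_LIMIT_INR:
        return True
    if not req.callback_verified:
        return False  # call back on the registered number, never the caller's
    # Require at least one approver who is not the requester
    return len(req.approvers - {req.requester}) >= 1
```

The code is trivial on purpose. The defense isn't clever software; it's a process that refuses to let any single call, however convincing the face or voice, move money on its own.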

The cost of creating a deepfake has collapsed. In 2022, producing a convincing face-swap video required significant computing resources and technical knowledge. By early 2026, Telegram bots do it for a few hundred rupees. Apps like FaceSwap, Reface, and dozens of Chinese-developed tools that circulate on APK download sites require nothing more than a smartphone and a single photograph. Voice cloning tools like ElevenLabs can produce a convincing voice replica from as little as 30 seconds of audio — the kind of sample easily obtained from a YouTube video, a WhatsApp voice note, or a podcast appearance. The barrier to entry has essentially been removed, which means the threat isn't coming only from sophisticated criminal operations but from literally anyone with a grudge, a phone, and fifteen minutes of free time. That democratization of harmful capability is something existing legal frameworks were not designed to address.

For individuals, the most useful habit is simple skepticism. If someone you know sends you a video asking for money or making an unusual request, call them back on a number you already have saved — not the number from the message. If a political video seems designed to provoke outrage, check whether it's been verified by AltNews, BoomLive, or the platform's fact-checking partners before sharing it. If it feels engineered to make you furious, that's probably because it was.

The regional language dimension makes India's deepfake problem worse than the statistics suggest. Most detection tools and media literacy resources are available only in English. Deepfake content circulating in Hindi, Tamil, Telugu, Kannada, and Bengali faces much less scrutiny because fact-checking organizations have limited regional language capacity. A deepfake video in Tamil might circulate for days on ShareChat or WhatsApp groups in Tamil Nadu without anyone flagging it, simply because the people who might detect it aren't monitoring content in that language.

Education is a piece of this that's barely getting attention. Indian schools and universities don't teach media literacy in any systematic way. The ability to evaluate whether a video or image might be manipulated isn't a skill most people have, because nobody taught them. When deepfake content circulates on WhatsApp family groups — and it does, constantly — the older members of those groups are the least equipped to evaluate it and the most likely to share it further. A 2025 survey by DataLEADS found that adults over 50 in India were three times more likely to share unverified video content on WhatsApp than adults under 30. That's not because older Indians are less intelligent — it's because they grew up in a media environment where video was naturally trustworthy. If you saw it on camera, it happened. That assumption, which served people well for decades, is now a liability. Digital literacy programs in India, to the extent they exist at all, focus on basic smartphone usage and internet safety. They don't cover synthetic media, AI generation, or the cognitive habits needed to evaluate manipulated content.

The corporate response to deepfakes has been inadequate at the platform level too. WhatsApp's end-to-end encryption means the company can't scan messages for deepfake content even if it wanted to. Instagram and YouTube have content labeling requirements for AI-generated material, but enforcement depends on creators self-labeling, which malicious actors obviously won't do. Twitter/X under its current management has gutted trust and safety teams and is probably the worst major platform for deepfake moderation. Facebook has its own detection systems but they're tuned for English-language content and perform significantly worse on content in Indian languages, which is where a lot of the harmful deepfakes circulate. The platforms have financial incentives to keep engagement high and moderation costs low, and deepfake content — because it's provocative and emotionally charged — drives engagement. There's a misalignment between what's good for platform metrics and what's good for society, and so far the platforms are optimizing for metrics.

What bothers me most about this whole topic, honestly, is how it's going to interact with the erosion of trust more broadly. We're heading toward a world where any video or audio can be plausibly dismissed as a deepfake, and that cuts both ways. Real evidence of real wrongdoing can be waved away with "that's obviously AI-generated." Politicians caught on camera can claim fabrication. Evidence in court becomes contestable in new ways. The deepfake problem isn't just about the fakes that get created — it's about the doubt that creeps into everything that's genuine. And I don't think India, or anywhere else for that matter, has figured out how to deal with that. The technology is sprinting ahead, the policy response is walking, and the gap between them is where people get hurt. Probably the most worrying part is that I'm not sure the gap is even closeable at this point, because the tools for creating deepfakes will always be more accessible and cheaper than the tools for detecting them. That asymmetry might just be the new normal we all learn to live with, and I find that thought genuinely depressing.

Written by Vikram Singh, Cybersecurity Consultant

Vikram Singh is a certified ethical hacker and cybersecurity consultant who has helped secure systems for major Indian banks and government agencies. He writes about practical security measures for everyday Indian internet users.
