The Lucknow Case and a Pattern Across India
On the morning of October 14, 2025, Ramesh Chandra Tripathi, 67, received a video call from what looked exactly like his son.
The caller appeared to be Ankit Tripathi, 34, who works as a software engineer in Bangalore. The face on the screen was his. The voice was his. The caller said he had been involved in a car accident on the Outer Ring Road and that the other driver's family was threatening to file a police case unless a settlement was paid immediately. The voice carried the same slight stammer that Ankit has had since childhood. It said “Papa” with the stress on the second syllable, the way Ankit always does. Ramesh Chandra Tripathi transferred 3.2 lakh rupees within eight minutes. When a second caller, claiming to be the other driver's brother, demanded more, he transferred another 3.6 lakh.
Forty minutes later, Tripathi called his son directly. Ankit answered from his office desk. There had been no accident. He had been in a meeting for the entire duration of the call his father received.
The Lucknow police cyber cell traced the origin of the call to a VoIP number routed through a server in Southeast Asia. The voice used on the call had been generated by an AI model trained on clips pulled from Ankit Tripathi's Instagram Reels, where he posted short cooking videos two or three times a week. In those Reels, he typically spoke for 10 to 15 seconds per clip. According to investigators, the scammers needed approximately 30 seconds of usable audio to produce a voice clone convincing enough to fool a family member over a phone connection.
In January 2025, the UP cyber cell had logged three AI voice-cloning scams for the entire month. By October, the monthly count had reached 14 confirmed cases in Uttar Pradesh alone, with an unknown number unreported because victims did not realise AI was involved or were too embarrassed to file a complaint. Nationally, the Indian Cyber Crime Coordination Centre (I4C) reported a 280 percent increase in AI-assisted fraud complaints between 2023 and 2025, with the sharpest rise occurring between March 2025 and September 2025.
The average financial loss per deepfake scam is higher than in traditional phone fraud. A conventional scam call from someone claiming to be a bank official or a police officer trips a set of suspicion instincts that many people, especially older adults, have developed through years of warnings. A call from your child's voice, with your child's face, in apparent distress, bypasses those instincts entirely. The emotional response comes before the analytical one.
The pattern across reported cases is consistent. Scammers identify a target, usually an older parent living in a different city from their adult children. They locate voice or video samples on social media. They generate a synthetic version of the child's voice using commercially available AI tools. They call with a fabricated emergency. The pressure of hearing a family member in crisis overrides the impulse to hang up and verify. By the time the victim checks, the money is gone.
How Cheap the Technology Has Become
In February 2024, a security researcher at IIT Delhi demonstrated live voice cloning using a free, browser-based tool that required no software installation. He spoke into his laptop microphone for twelve seconds, fed the audio into the tool, and within ninety seconds had a synthetic version of his voice reading a paragraph of Hindi text he had typed. The output was not perfect. There was a faint metallic quality on certain vowels, and the breathing rhythm was slightly too even. But played over a phone speaker, on a typical Indian mobile connection with its compressed audio quality, the imperfections were barely noticeable.
That demonstration used a tool built on an open-source voice synthesis model. The model's code is freely available on GitHub. Anyone with basic technical knowledge can download it, run it, and produce a voice clone in under two minutes. More polished commercial tools with better output quality charge between five and twenty US dollars per month. Some offer a free tier.
The core technique behind voice cloning is called speaker embedding. An AI model analyses a short audio sample and extracts the unique characteristics of a person's speech: pitch range, cadence, rhythm, how they pronounce specific vowels, the gaps between their words, the way they breathe between sentences. The model then maps these characteristics onto any text input. Type a sentence, and the system speaks it in the target's voice. Research published by the speech processing group at IIIT Hyderabad in 2024 found that models trained on as little as five seconds of clear audio fooled human listeners 68 percent of the time. With 30 seconds of sample audio, the deception rate climbed to 85 percent. A follow-up study published in March 2025 showed newer models achieving 78 percent accuracy with just five seconds, an improvement of 10 percentage points in a single year.
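To make the mechanism concrete, here is a minimal Python sketch of the comparison step, assuming the open-source librosa audio library. It is not a cloning system: real tools use trained neural speaker encoders, while this stand-in summarises a clip with simple spectral statistics and compares two clips by cosine similarity. The file names are hypothetical.

```python
# A crude stand-in for a speaker embedding. Real cloning systems use
# neural encoders (d-vectors / x-vectors), but the idea is the same:
# reduce a voice sample to a fixed-length vector, then compare vectors.
import numpy as np
import librosa  # pip install librosa

def crude_voice_embedding(path: str) -> np.ndarray:
    """Summarise a clip as a fixed-length vector of spectral statistics."""
    y, sr = librosa.load(path, sr=16000)                 # resample to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape: (20, frames)
    # Mean and spread of each coefficient over time: a rough fingerprint
    # of timbre and cadence, analogous to what a trained encoder learns.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names, for illustration only.
ref = crude_voice_embedding("instagram_clip.wav")   # public sample of the target
test = crude_voice_embedding("suspect_call.wav")    # recording under question
print(f"similarity: {cosine_similarity(ref, test):.3f}")
```

A cloning pipeline uses a vector like this as a conditioning input to a text-to-speech model, which is what lets one system speak any typed sentence in a specific person's voice.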
Deepfake video has followed the same cost trajectory. In 2019, producing a convincing face-swap video required expensive GPU hardware, specialised software, and hours of processing time. By mid-2025, real-time face-swapping applications could run on consumer laptops with mid-range graphics cards. Tools like FaceSwap, DeepFaceLive, and several web-based alternatives can replace a person's face during a live video call. The software takes a set of reference photos of the target face and maps it onto the user's face in real time, adjusting for head movement, lighting, and expression. On a video call with compressed quality, the kind typical on Jio or Airtel mobile data, the result is difficult to distinguish from a genuine call.
Lensa AI and similar consumer apps introduced millions of people to face manipulation in 2022 and 2023 through “magic avatar” features. Those apps were designed for entertainment, but the underlying technology is identical to what is used in scams. The consumer-friendly packaging normalised the concept of AI face generation and made the tools accessible to people with no technical background at all.
In August 2025, a cybersecurity firm in Pune tested six freely available deepfake tools against a panel of 200 human evaluators. The evaluators were shown pairs of videos, one real and one deepfake, and asked to identify which was synthetic. For the three best-performing tools, evaluators correctly identified the deepfake only 54 percent of the time, barely better than random chance. The test used 720p video played on a smartphone screen, simulating the conditions of a typical WhatsApp video call.
“We are seeing a shift from mass-spray phishing to individually tailored social engineering. AI has made it economically viable to craft unique scam messages for individual targets, which was previously only feasible for state-level espionage operations.” — Superintendent Triveni Singh, UP Cyber Cell, speaking at CyberSafe India 2025
AI-generated text has removed the last remaining barrier that once protected people from sophisticated scams: bad grammar. Phishing messages that used to be riddled with spelling mistakes and awkward phrasing can now be produced in fluent Hindi, Tamil, Bengali, Marathi, Telugu, Kannada, or any other Indian language. The AI adjusts tone, formality, and regional dialect. A scam email pretending to be from the State Bank of India can read exactly like genuine SBI communications. A WhatsApp message claiming to be from a colleague can match that person's writing style if the scammer has samples from a public group chat or a breached message database. In July 2025, Mumbai police reported a case where a scam WhatsApp message was so convincingly written in the target's colleague's style that even after being told it was fraudulent, the victim initially did not believe it.
Documented Cases Across States
The Lucknow incident is not an isolated event. A survey of public police reports, FIRs, and court filings from 2024 and 2025 shows deepfake and AI-assisted scams surfacing across Indian states with varying methods but a consistent outcome: victims who were convinced they were interacting with someone they knew, and who discovered the deception only after money had been transferred.
Kozhikode, Kerala, March 2024. Suresh Menon, 52, received a video call from what appeared to be a former colleague named Javed, who had moved to Dubai three years earlier. The face on the screen matched. The voice matched. Javed said he needed 40,000 rupees urgently to cover a visa renewal fee and that his bank accounts were temporarily frozen due to a compliance check. Menon transferred the amount through Google Pay. When he later messaged the real Javed on WhatsApp to ask if the visa issue was sorted, Javed said he had never called. Kozhikode police traced the video to a face-swapping application that had used photos from Javed's Facebook profile as source material. The photos were public.
Hyderabad, Telangana, June 2024. Cyberabad police reported seven cases in a single month involving parents who received AI-cloned voice calls that appeared to come from their adult children. In each case, the fabricated story involved either a road accident or a police detention. The amounts demanded ranged from 50,000 rupees to 4 lakh rupees. Investigators found that voice samples had been scraped from YouTube videos, Instagram Reels, and in two instances from WhatsApp voice notes that had been forwarded in family group chats. The voice notes were as short as four seconds.
Chandigarh, December 2024. A retired army colonel received a call from someone who sounded exactly like his nephew stationed at a military base in Rajasthan. The caller said there had been an accident during a training exercise and that 2.5 lakh rupees was needed immediately for medical treatment at a private hospital because the military hospital was too far. The colonel transferred the money through NEFT. The nephew, when contacted, was uninjured and unaware of any call. Police traced the voice sample to a YouTube video from a regimental reunion where the nephew had given a short speech.
Mumbai, Maharashtra, September 2025. Prashant Deshpande, a mid-level finance executive at a logistics company, transferred 1.8 crore rupees to a fraudulent bank account after attending what he believed was a legitimate video call with his company's Singapore-based managing director. The call came from a spoofed number displaying the director's actual caller ID. The deepfake showed the director's face and replicated his voice. Two other company executives appeared to be on the call as well, all AI-generated. Deshpande followed what he understood as a direct instruction to process an urgent vendor payment. The fraud was discovered four days later when the real managing director questioned a discrepancy in the monthly accounts. Police froze the receiving account within 48 hours of the complaint, recovering about 40 percent of the amount.
Delhi NCR, October 2025. A network of investment scams used deepfake videos of recognisable public figures to promote a cryptocurrency trading platform. The videos showed a well-known industrialist and a former cricket captain appearing to endorse the platform and describe their own profits. The videos were distributed through WhatsApp forwards and ran as paid YouTube advertisements for approximately ten days before being flagged and removed. During that window, the platform collected deposits from over 2,000 investors. Delhi police estimated total losses at roughly 12 crore rupees. Several victims told investigators they had trusted the investment because they had seen the endorsement “with their own eyes.”
Bengaluru, Karnataka, November 2025. A 28-year-old software engineer filed an FIR after receiving deepfake images of herself, generated from publicly available photos on her Instagram profile, attached to an extortion email demanding 5 lakh rupees. The images had been created using an open-source AI tool that required only five to eight face photos as input. The tool is freely downloadable and has been available since 2023. Bengaluru cyber crime officials stated that complaints of similar deepfake-based extortion had tripled over the preceding six months, with women between the ages of 18 and 35 and college students making up the majority of victims.
Jaipur, Rajasthan, November 2025. A 45-year-old businessman transferred 7 lakh rupees after receiving a video call that appeared to be from his business partner in Ahmedabad. The partner's face was visible on screen, and the voice asked for an emergency fund transfer to cover a shipment delay penalty. The real business partner was at home watching television when the call was made. Police found that the deepfake had been generated using photos from the partner's LinkedIn profile and a voice sample from a publicly accessible YouTube interview he had given to a trade publication in 2024.
Detection, Prevention, and Why the Law Is Behind
India does not have legislation that specifically addresses deepfake fraud. Existing criminal statutes are being stretched to cover cases they were never written for. Voice cloning scams are typically prosecuted under Section 66D of the Information Technology Act, 2000 (cheating by personation using a computer resource) and Sections 318 and 319 of the Bharatiya Nyaya Sanhita, 2023 (cheating and fraud). Deepfake pornography and image-based extortion cases invoke Section 67 of the IT Act (publishing obscene material electronically) alongside provisions for criminal intimidation. None of these statutes were drafted with AI-generated synthetic media in mind, and courts have had to interpret them broadly to fit the facts of each case.
The Digital Personal Data Protection Act, 2023 governs consent-based data collection but does not address the creation or distribution of synthetic media made from someone's likeness or voice. A person's face and voice are not treated as “personal data” in the way that a phone number or Aadhaar number is, which creates a gap. In November 2023, the Ministry of Electronics and Information Technology issued an advisory to social media platforms directing them to remove deepfake content and label AI-generated material, but the advisory is non-binding. Compliance has been inconsistent. Platforms remove flagged content reactively, but there is no requirement for them to detect deepfakes before they spread.
Detection technology does exist. Tools like Microsoft Video Authenticator and Intel FakeCatcher can analyse video for signs of AI manipulation: inconsistencies in blinking patterns, skin texture artifacts at the boundary between face and background, misaligned lighting on different parts of the face, and unnatural smoothness around the jaw and ears. Open-source projects like FaceForensics++ offer similar analysis capabilities. For audio, spectral analysis can sometimes identify a cloned voice by examining frequency patterns and micro-variations in pitch that current AI models do not replicate with full accuracy.
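As a rough illustration of what the audio side of such analysis looks for, the sketch below estimates frame-to-frame pitch variation, one of the micro-variations mentioned above, using librosa's pYIN pitch tracker. The threshold is invented for illustration; real detectors are trained on large corpora and combine many such cues.

```python
# Illustrative, not forensic-grade: natural speech carries small,
# irregular pitch fluctuations ("jitter") that some synthesis models
# smooth out. Unusually low jitter is a weak hint, never proof.
import numpy as np
import librosa  # pip install librosa

def pitch_jitter(path: str) -> float:
    y, sr = librosa.load(path, sr=16000)
    # pYIN fundamental-frequency tracker; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only
    if f0.size < 3:
        raise ValueError("not enough voiced speech to measure")
    # Mean absolute relative change between consecutive pitch estimates.
    return float(np.mean(np.abs(np.diff(f0)) / f0[:-1]))

JITTER_FLOOR = 0.005  # hypothetical cutoff, for illustration only
jitter = pitch_jitter("suspect_call.wav")  # hypothetical file name
flag = "  (suspiciously smooth)" if jitter < JITTER_FLOOR else ""
print(f"jitter = {jitter:.4f}{flag}")
```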
But these detection tools share a common limitation: they require the user to already suspect that something is fake and to submit the media for analysis after the fact. They do not work in real time. During a live phone call where a father is hearing his son's voice asking for help, there is no tool that can intercept the audio and flag it as synthetic before the damage is done. Detection is retrospective. Deception is instantaneous.
Some platforms have started building defences. WhatsApp now labels forwarded messages more prominently and limits bulk forwarding to reduce the spread of viral deepfake videos. Last September, YouTube began requiring creators to disclose when uploaded content is AI-generated, though compliance depends on self-reporting and there is no automated check. Google and Meta have announced plans to embed watermarks in AI-generated content produced by their own tools (Google's Imagen and Meta's Make-A-Video), but these watermarks apply only to content created through their platforms. Content generated by third-party or open-source tools, which are the ones most commonly used in scams, carries no watermark at all.
Fact-checking organisations in India have added deepfake verification to their workflows. Alt News, BOOM, and Vishvas News regularly analyse and debunk viral AI-generated videos of politicians, business figures, and celebrities. Their work is valuable but structurally limited. They reach users who already follow them or who actively seek verification. The person who receives a deepfake video through a WhatsApp forward from a trusted uncle and watches it without questioning it is not going to pause and check a fact-checking website.
The I4C operates a national helpline at 1930 and an online portal at cybercrime.gov.in for reporting cyber fraud, including deepfake scams. Response times vary by jurisdiction and by the complexity of the case. In the Mumbai corporate fraud case, the complaint and account freeze happened within 48 hours, recovering a portion of the stolen money. In several of the Hyderabad parent-targeting cases, the funds had already been laundered through multiple bank accounts and converted to cryptocurrency before the FIR was filed. Speed of reporting matters, and most victims do not report quickly enough because they spend hours or days processing what happened before approaching the police.
Prevention, for now, falls on individuals. The single most effective personal defence is a pre-agreed family code word or question. If a family member calls with an emergency, ask for the code word before transferring any money. If the caller cannot provide it, hang up and call the family member directly on their known number. This is a low-technology solution to a high-technology problem, but it works because it exploits the one thing the AI cannot know: a piece of private information shared only within the family.
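For readers who think in code, the code word is ordinary shared-secret authentication. A toy sketch of the same idea follows, with an invented code word and no connection to any real system.

```python
# Toy model of a family code word: verification succeeds only if the
# caller knows a secret that was agreed in person and never posted online.
import hashlib
import hmac

def normalise(answer: str) -> bytes:
    return answer.strip().lower().encode("utf-8")

# Invented example secret; keep only the hash, and never say it first.
STORED_HASH = hashlib.sha256(normalise("nani's mango pickle")).hexdigest()

def caller_knows_secret(spoken_answer: str) -> bool:
    given = hashlib.sha256(normalise(spoken_answer)).hexdigest()
    # Constant-time comparison, the standard way to check shared secrets.
    return hmac.compare_digest(given, STORED_HASH)

print(caller_knows_secret("Nani's Mango Pickle"))        # True
print(caller_knows_secret("just send the money, Papa"))  # False
```

The asymmetry is the whole point: the AI can reproduce anything that was ever public, and nothing that never was.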
Other personal precautions include reducing the amount of voice and video content posted publicly on social media. Instagram Reels, YouTube Shorts, and public TikTok videos are the primary sources of voice samples used in cloning scams. Making social media profiles private limits the pool of available source material. Parents and older relatives, who are the most frequent targets of these scams, should be told directly and specifically that AI can now clone voices convincingly enough to sound like their children. Many older adults in India are not aware that this technology exists, let alone that it is free and widely accessible.
As of late 2025, the situation is this: the tools to create convincing deepfakes are free, fast, and available to anyone with a laptop. The tools to detect them are expensive, slow, and unreliable. Indian law has no specific provision for deepfake crimes. The gap between offence and defence is widening, and there is no sign of it closing.