Monthly Privacy Roundup: Key Updates from February 2026
February 2026 was a busy month for privacy in India — a fintech breach exposed 2.3 million records, the Data Protection Board got its full bench, and UPI fraud numbers got worse. Here's what happened.

Another month, another batch of privacy developments that most people won't hear about until something directly affects them. February 2026 had a lot going on — some of it genuinely significant, some of it the kind of bureaucratic progress that only matters if you're paying close attention. I'll try to separate the two.
Major Regulatory Developments
The Data Protection Board of India finally has a full bench. This happened in the first week of February, and it's the kind of news that sounds dry until you realize what it means. As the body that's supposed to enforce the Digital Personal Data Protection Act — hear complaints, investigate violations, impose penalties — the DPBI has been essentially non-functional since the DPDPA was enacted in 2023 because the government took its time appointing members. Now the bench is in place, and from what I can tell, organizations handling Indian user data have a six-month compliance window from the date of operationalization. That puts the effective deadline somewhere around August 2026. Companies that haven't started DPDPA compliance — and based on industry surveys, a startling number of mid-sized Indian businesses haven't — now have a concrete timeline. Whether the Board will actually impose meaningful penalties or operate more as a symbolic body is the question everyone in privacy circles is asking. Nobody has the answer yet.
MeitY finalized the Consent Manager framework. This one flew under the radar for most people, which is unfortunate because it might end up being one of the more practically impactful parts of the DPDPA infrastructure. Consent Managers are entities that will act as intermediaries between you (the data principal) and the companies collecting your data (data fiduciaries). Instead of managing your data consent preferences separately on every platform — which nobody actually does — you'd use a Consent Manager to set your preferences once and have them applied across multiple services. Think of it as a privacy dashboard that sits between you and the companies that want your data. MeitY's final rules establish registration requirements, technical standards, and operational obligations for these entities. Several companies have already applied for registration, including OneTrust's India operation and a new venture with backing from Infosys. Whether Consent Managers will actually gain adoption or become another layer of bureaucracy depends on how user-friendly the implementations are. I think the concept is sound, but execution, as with many Indian tech governance initiatives, remains to be seen.
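To make the Consent Manager idea concrete, here is a minimal sketch of the pattern: one preference record per data principal, queried by every data fiduciary instead of each platform keeping its own consent state. All class names, purpose fields, and lookup logic below are hypothetical illustrations; MeitY's rules define the actual technical standards.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Consent Manager pattern described above.
# Class names, purpose fields, and lookup logic are illustrative only,
# not MeitY's actual schema or registration standards.

@dataclass
class ConsentPreferences:
    allow_marketing: bool = False
    allow_analytics: bool = False
    allow_third_party_sharing: bool = False

@dataclass
class ConsentManager:
    # One record per data principal, set once and reused everywhere.
    records: dict = field(default_factory=dict)

    def set_preferences(self, principal_id: str, prefs: ConsentPreferences) -> None:
        self.records[principal_id] = prefs

    def is_permitted(self, principal_id: str, purpose: str) -> bool:
        # A data fiduciary queries the manager instead of keeping
        # its own per-platform consent state.
        prefs = self.records.get(principal_id)
        if prefs is None:
            return False  # no record means no consent
        return getattr(prefs, f"allow_{purpose}", False)

cm = ConsentManager()
cm.set_preferences("user-42", ConsentPreferences(allow_analytics=True))
print(cm.is_permitted("user-42", "analytics"))   # True: consented once, applies everywhere
print(cm.is_permitted("user-42", "marketing"))   # False: never consented
print(cm.is_permitted("user-99", "analytics"))   # False: no record at all
```

The appeal of the architecture is the single source of truth: revoking consent at the manager revokes it for every fiduciary at once, which no collection of per-platform settings pages can offer.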
TRAI published a consultation paper on telecom data retention, and the findings were not great. The Telecom Regulatory Authority of India decided to look into how long telecom operators — Jio, Airtel, Vi, BSNL — hold onto your data. Call detail records, location data, browsing metadata from mobile internet usage, SMS logs. What the consultation paper revealed, though it probably shouldn't have surprised anyone, is that major operators retain this data for periods far exceeding any operational necessity. Some categories of data were being held for years. Stated reasons ranged from "legal compliance" to "network optimization" to vague references to "business requirements." Privacy advocates, including the Internet Freedom Foundation and the Software Freedom Law Centre, submitted comments arguing that retention periods should be aligned with DPDPA's purpose limitation principle — data should be deleted once the specific purpose for which it was collected has been fulfilled. TRAI hasn't issued its recommendations yet. The telecom industry will probably push back hard against shorter retention limits because the data has commercial value for analytics and ad targeting. How TRAI balances industry interests against privacy principles will say a lot about the regulatory direction India's heading in.
Security Breaches and Fraud Trends
A major fintech breach exposed data on roughly 2.3 million users. The company — a digital lending platform whose name I'll note was widely reported in Indian tech media — suffered a breach that exposed personal and financial details including Aadhaar numbers, PAN details, loan amounts, repayment histories, and contact information. That's enough data to enable identity theft, targeted phishing, and financial fraud at scale. CERT-In issued an advisory mandating the company notify all affected users within 72 hours, which is the standard timeline. In their initial response, the company followed the usual corporate breach playbook: we take security seriously, we're investigating, we've engaged external experts. What they didn't immediately disclose was how long the data had been exposed before discovery — a detail that often turns out to be the most damning part of these incidents. This breach has reignited a familiar debate about security standards in India's fintech sector, which grew explosively over the past five years without a corresponding increase in security maturity. Digital lending platforms, in particular, collect extraordinarily sensitive financial data and many of them are startups operating with lean engineering teams and limited security budgets. From what I can tell, the combination of high-value data and underinvestment in security is a breach waiting to happen, and it's been happening with increasing frequency.
UPI fraud numbers for Q4 2025 were released, and they're going in the wrong direction. According to the Reserve Bank of India's quarterly fraud report, UPI-related fraud cases crossed 85,000 in the October-December 2025 quarter, with total losses exceeding Rs 420 crore. Both numbers are up from the previous quarter. Social engineering remains the dominant fraud vector — not technical exploitation of UPI's security architecture, which is actually reasonably well-designed, but manipulation of the people using it. Scam patterns are familiar: fake customer care numbers that appear at the top of Google search results, screen-sharing apps that let the attacker see your OTPs in real time, "collect request" fraud where you're tricked into approving a payment request instead of receiving money, and impersonation calls from "bank representatives" asking you to install remote access apps. The RBI's report noted that awareness campaigns have had limited impact, particularly among older users and first-time UPI adopters. Social engineering techniques are getting more refined faster than the public awareness is growing. That gap is where the fraud lives.
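For readers unfamiliar with the mechanics, a tiny sketch of why "collect request" fraud works: approving a collect request authorizes a debit from your account, even when the scammer frames it as money coming to you. This is an illustrative model of the two payment directions, not NPCI's actual API.

```python
# Illustrative model of UPI's two payment directions, not NPCI's actual API.
# The point: approving a "collect request" authorizes a debit FROM you,
# even if a scammer framed it as money coming TO you.

def apply_upi_action(balance: int, action: str, amount: int) -> int:
    if action == "receive_payment":   # someone pushes money to you: credit
        return balance + amount
    if action == "approve_collect":   # you approve a pull request: debit
        return balance - amount
    raise ValueError(f"unknown action: {action}")

balance = 10_000
# Scammer's script: "approve this request to receive your Rs 500 refund"
balance = apply_upi_action(balance, "approve_collect", 500)
print(balance)  # 9500: the "refund" was a debit
```

The UPI protocol makes the direction explicit in the approval screen, but the fraud succeeds because victims trust the scammer's framing over the screen in front of them.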
International Privacy Developments
EU-India data adequacy talks resumed. This matters enormously for India's IT services industry, even if it sounds like trade bureaucracy. A data adequacy agreement with the EU would mean that the European Commission recognizes India's data protection framework as providing an "adequate" level of protection, allowing personal data to flow from the EU to India without requiring additional safeguards like Standard Contractual Clauses. Currently, Indian IT companies handling EU data must comply with SCCs and other transfer mechanisms, which adds legal complexity and cost. TCS, Infosys, Wipro, HCL — all of them would benefit from an adequacy determination. Talks had stalled for a few years because India didn't have a data protection law at all, but the DPDPA's enactment restarted the conversation. Sticking points are predictable: the EU is concerned about the DPDPA's broad government exemptions, the lack of an independent data protection authority (the DPBI's members are appointed by the central government, raising questions about independence), and the absence of provisions equivalent to GDPR's Article 9 for special categories of data. India wants the economic benefits of adequacy without amending the DPDPA to match GDPR's strictness. Where the negotiations land will depend on how much economic bargaining power each side perceives. An adequacy decision seems unlikely before late 2026 at the earliest.
The US issued a new executive order on AI and privacy. The order requires AI systems deployed by federal agencies to undergo enhanced privacy impact assessments, establishes new transparency requirements for AI training data sourcing, and creates an interagency task force on AI privacy risks. This doesn't directly affect India, but it sets norms. India's own AI regulatory framework is still in early stages — NITI Aayog and MeitY have published discussion papers but no draft legislation. Both the US order and the EU AI Act that went into partial effect in 2025 create international reference points that India will likely draw from. Indian AI companies building products for global markets will need to comply with whatever standards emerge from these frameworks regardless of Indian domestic law. Practical impact on Indian AI development might come through market access requirements rather than domestic regulation.
A few smaller items worth noting. The Supreme Court listed for hearing in March a petition challenging the expansion of Aadhaar-linked facial recognition under the DigiYatra program. According to the petitioner, facial recognition for airport boarding exceeds the original scope of Aadhaar data collection and violates the proportionality requirements laid out in the Puttaswamy judgment. MeitY is expected to release draft rules on children's data protection under the DPDPA sometime in March — the DPDPA has specific provisions requiring "verifiable parental consent" for processing children's data, but the operational rules defining what "verifiable" means haven't been published yet. EdTech companies like BYJU'S and Unacademy, which process data of millions of minors, are watching this closely. Industry compliance reports from auditing firms suggest that as of February, fewer than 30% of mid-to-large Indian companies have completed DPDPA readiness assessments — with the August deadline approaching, the next few months are going to involve a lot of scrambling.
Platform Privacy Changes
WhatsApp introduced a new privacy feature in India — IP address protection for calls. Rolled out in mid-February, the feature routes voice and video calls through WhatsApp's servers so the other party can't see your IP address. Previously, WhatsApp calls were peer-to-peer, which meant anyone you called (or who called you) could potentially identify your IP address and, by extension, your approximate location. Changes like this are a genuine improvement for privacy, though it means Meta's servers now process more call data than before. Whether Meta uses call metadata for any purpose beyond routing is governed by their privacy policy, which is characteristically vague on the specifics. Still, for Indian users who've faced stalking or harassment through phone calls, the IP protection is a welcome change. It's enabled by default for all users.
Notable Updates and Enforcement
India's startup ecosystem showed growing interest in privacy-tech as a business category. At least three Indian startups raised seed rounds in February for privacy-related products: a consent management platform aimed at mid-market Indian businesses trying to comply with the DPDPA, a B2B tool for automated data mapping and classification, and a consumer-facing app that aggregates and manages data deletion requests across multiple platforms. The total funding across these rounds was modest — somewhere around Rs 15-20 crore combined — but the signal matters. Indian VCs are starting to see DPDPA compliance as a market opportunity, which means the infrastructure for data protection is being built by the private sector alongside (and probably faster than) the government's enforcement machinery. Whether these startups survive long enough to matter depends on how heavily the DPBI enforces compliance. If enforcement is soft, the demand for compliance tools evaporates. If it's strict, these companies could become very valuable very quickly.
The National Crime Records Bureau released its annual crime statistics for 2025, and the cybercrime numbers deserve attention. Reported cybercrime cases in India increased 28% year-over-year, with financial fraud accounting for over 60% of all cases. Identity theft cases — where someone uses stolen personal data (often from breaches like the fintech incident mentioned above) to open bank accounts, take loans, or make purchases in another person's name — grew by 45%. The most common vector for identity theft was compromised Aadhaar and PAN data obtained from data breaches and sold on dark web marketplaces. The NCRB report also noted that conviction rates for cybercrime remain abysmally low — under 4% of reported cases result in conviction, compared to around 50% for violent crimes. The gap reflects both the technical challenges of investigating cybercrime and the chronic under-resourcing of cyber crime cells across India. Most state-level cyber crime units are staffed by officers with minimal technical training and outdated forensic tools. The gap between the sophistication of attacks and the capacity to investigate them is wide and growing.
Google announced changes to its Inactive Account Policy affecting Indian users. Starting April 2026, Google will delete accounts that have been inactive for two years, including all data in Gmail, Drive, Photos, and other services. The policy was announced globally in 2023 but the enforcement timeline was extended multiple times. For Indian users, this means that old Gmail accounts you might have used to sign up for services — and which might contain personal documents, photos, or communication history — could be permanently deleted if you haven't logged in recently. The privacy angle cuts both ways: automatic deletion reduces the amount of stale personal data sitting on Google's servers, which is arguably good for privacy. But it also means data loss for people who don't check these accounts regularly. If you have old Google accounts, log into each one before the April deadline to reset the inactivity clock. The more interesting privacy implication is that this might be the first time a major tech company implements something resembling automatic data erasure at scale — a kind of corporate right-to-be-forgotten imposed unilaterally by the data fiduciary rather than requested by the data principal.
Emerging Privacy Concerns
A quick note on something happening in Indian schools that connects to privacy. Several state governments — Karnataka, Maharashtra, and Madhya Pradesh — are piloting AI-powered classroom monitoring systems that use cameras and software to track student attention, behavior, and engagement. The stated goal is improving educational outcomes. The privacy implications for minors being continuously surveilled during school hours are significant and largely undiscussed. These systems collect biometric data (facial analysis), behavioral data (attention patterns, movement), and emotional data (sentiment analysis based on facial expressions). The DPDPA's provisions on children's data haven't been operationalized through specific rules yet, so these pilot programs exist in a regulatory grey zone. Parents are typically informed through consent forms buried in school admission paperwork. It's hard to overstate how much sensitive data about minors these systems generate, or how poorly understood the long-term implications are.
One last thing: the Indian Computer Emergency Response Team (CERT-In) published its annual statistics for 2025 in the final week of February, showing that it handled over 1.4 million cybersecurity incidents during the year — a 33% increase from 2024. The largest categories were phishing, malware propagation, and unauthorized access to IT systems. CERT-In also noted a significant increase in attacks targeting healthcare data systems and educational institutions, both of which hold large volumes of personal data with often inadequate security infrastructure.
The pattern across all of February's developments is the same one we've been seeing for the past two years: India's data protection framework is moving from paper to practice, slowly and unevenly. The Board has people. The Consent Manager framework has rules. The compliance deadline has a date. But enforcement remains theoretical, breaches continue to expose millions, fraud numbers are climbing, and the gap between what the law promises and what citizens experience remains wide. The machinery is being assembled. Whether it'll actually run — and whether it'll run in the public interest rather than being captured by the entities it's supposed to regulate — is a different question.
Your one concrete action after reading this: check whether any of your data was affected by the fintech breach. If you've used any digital lending platform in the past two years, log into the platform's app, look for any breach notification, and change your password regardless. If you can't find clear communication from them about it, assume the worst and monitor your credit report through CIBIL for the next few months.
Written by
Rajesh Kumar, Founder & Chief Editor
Rajesh Kumar is a cybersecurity expert with over 12 years of experience in digital privacy and data protection. He has worked with CERT-In and various Indian enterprises to strengthen their data security practices. He founded PrivacyTechIndia to make privacy awareness accessible to every Indian.


