Children's Online Privacy: What DPDPA Says About Minors' Data
A ten-year-old in Pune opens a gaming app and taps 'I agree' without reading a word. India's DPDPA 2023 says that shouldn't count as consent. But does the law actually protect kids, or does it just look good on paper?

A ten-year-old in Pune picks up his mother's phone after school. He opens a free gaming app — one of those battle royale clones that's wildly popular among kids in his class. A popup appears: a wall of text asking him to accept terms and conditions. He taps "I agree" without reading a single word. In doing so, he's just consented to the collection of his device ID, his location, his playtime patterns, his in-app behaviour, and the sharing of all of this with a network of advertising partners he'll never know about. He's ten. He wanted to play a game.
This scene plays out millions of times a day across India, and it's the reason the Digital Personal Data Protection Act (DPDPA) 2023 includes specific provisions about children's data. On paper, these provisions are some of the strongest in the world. The question I keep coming back to — and I don't think it has a comfortable answer — is whether any of it actually works in practice.
What the Law Says
Section 9 of the DPDPA sets a clear rule: before any company processes the personal data of a child (defined as anyone under 18), it must obtain verifiable consent from the child's parent or lawful guardian. Not the child's consent — the parent's. And not just a checkbox — it needs to be verifiable, meaning the platform must have some way of confirming that the person giving consent is actually a parent and not a twelve-year-old pretending to be one.
That's a high bar, and I'm skeptical that most platforms are clearing it. How exactly does a free gaming app verify that the person tapping "I'm a parent and I consent" is who they claim to be? Credit card verification? That excludes the large chunk of Indian parents who don't have credit cards. Aadhaar-based verification? That introduces a whole new set of privacy concerns — you'd be collecting a parent's biometric identity data just to let a kid play a game. The DPDPA leaves the specifics of verification to the rules, which are still being finalised in early 2026. So we've got a strong legal requirement with no clear mechanism for meeting it. That's a gap you could drive a truck through.
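To make the gap concrete, here is a minimal sketch of the kind of audit record a platform might keep to demonstrate "verifiable" consent. Everything here is an assumption — the field names, the verification methods, and the hashing approach are my illustration, not anything the DPDPA or its draft rules actually prescribe.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import secrets

@dataclass
class ParentalConsentRecord:
    """Hypothetical audit record for Section 9 parental consent.

    Nothing here is mandated by the DPDPA; it only illustrates what
    "verifiable" might require a platform to retain.
    """
    child_account_id: str
    verification_method: str   # e.g. "payment_card", "digilocker", "otp" (assumed options)
    parent_reference_hash: str # salted hash, so the raw parent identifier isn't stored
    consent_token: str
    granted_at: str

def record_parental_consent(child_account_id: str,
                            verification_method: str,
                            parent_identifier: str) -> ParentalConsentRecord:
    # Store only a salted hash of the parent's identifier: the point of the
    # record is proving consent happened, not warehousing a parent's identity —
    # which matters given the Aadhaar concern above.
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + parent_identifier).encode()).hexdigest()
    return ParentalConsentRecord(
        child_account_id=child_account_id,
        verification_method=verification_method,
        parent_reference_hash=f"{salt}${digest}",
        consent_token=secrets.token_urlsafe(16),
        granted_at=datetime.now(timezone.utc).isoformat(),
    )
```

Even this toy version shows the tension: the record is only as "verifiable" as the verification_method behind it, and that is exactly the part the rules have not yet defined.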
The Ban on Tracking Kids
Here's the provision that, on paper, should have sent shockwaves through India's app industry: the DPDPA specifically prohibits behavioural monitoring of children and targeted advertising directed at children. Read that again. No tracking. No behavioural profiling. No targeted ads. For anyone under 18.
Think about what that means for apps like BGMI (Battlegrounds Mobile India), Free Fire, Roblox, and the countless casual games that Indian kids play for hours daily. These apps are free because they're funded by advertising. The advertising is targeted because user data is collected and profiled. If you remove the ability to track and target children, the revenue model for these apps collapses — or at least completely changes. The same goes for social media platforms that Indian teenagers use: Instagram, Snapchat, YouTube, ShareChat. Their business depends on knowing what keeps users engaged and serving ads accordingly.
Now, has any platform actually stopped tracking Indian children since the DPDPA passed? I haven't seen evidence of it. Instagram still serves personalised ads to teenage accounts in India. Gaming apps still collect device IDs and gameplay telemetry from users who are obviously minors (based on their behaviour patterns, if nothing else). The law exists. Compliance is... theoretical.
The "Detrimental Effect" Clause and Why It's Interesting
There's a broader provision in the DPDPA that prohibits processing children's data in any way that's likely to have a "detrimental effect on the well-being of the child." This is deliberately vague, and I think that's by design. It's a catch-all meant to cover things like addictive design patterns (infinite scroll, loot boxes, streak mechanics), manipulative dark patterns that trick kids into sharing data, and any data practice that could harm a child's mental health or development.
The vagueness is both the strength and the weakness. It gives regulators broad authority to act against practices that are clearly harmful. But it also means nobody knows exactly where the line is until the Data Protection Board starts issuing decisions, and the Board is still being operationalised. We're in a weird liminal space where the law says "don't harm kids with data" but there's no enforcer yet to define what "harm" means or to punish companies that cross the line.
Age Verification: The Unsolved Problem
Everything in the DPDPA's children's provisions hinges on knowing whether a user is under 18. And this is where the whole framework gets wobbly, because reliable age verification is a problem that nobody in the world has solved well.
The simplest approach — asking users to enter their date of birth — is laughable. Every kid who's ever been told they need to be 13 to use Instagram has just picked a birth year that makes them old enough. Self-declaration isn't verification. It's theatre.
ID-based verification (linking to Aadhaar, for instance) would be more reliable but creates new privacy risks. You'd be requiring children — or their parents — to hand over government identity documents to private companies just to sign up for apps. That's a massive collection of sensitive data that becomes a honeypot for attackers. A breach of an age-verification database would be catastrophic.
AI-based age estimation (analysing a selfie to guess age) is being tried in some markets, but the accuracy is inconsistent, it raises biometric data concerns, and it feels invasive. You want to use a notes app, so first, let the app scan your face to confirm you're old enough.
The UK's approach under the Age Appropriate Design Code has been to push age verification responsibility onto platforms and let them figure out the mechanism. India seems to be heading in a similar direction, but without the enforcement history that the UK's ICO brings to the table. The result, for now, is that most platforms operating in India do minimal or no age verification and hope nobody calls them on it.
EdTech: Where the Problem Is Most Acute
If there's one sector where children's data protection should be a screaming priority in India, it's edtech. Apps like BYJU'S, Unacademy, Vedantu, Toppr, and dozens of smaller players have collected extraordinarily detailed data on millions of Indian children: what subjects they struggle with, how long they study, when they get distracted, what their test scores are, and how their parents respond to marketing calls. Some of this data is arguably more intimate than anything a social media platform collects.
The edtech sector's data practices have been questionable for years. Aggressive telemarketing, where salespeople call parents who've shown even mild interest, is powered by data collected from children's interactions with free trial content. A child watches a demo video, enters their name and class, and the next day their parent gets a call from a sales representative who knows the child's grade level, subjects of interest, and school name.
Under the DPDPA, this should require parental consent before any data is collected, and that data should not be used for marketing without separate, specific consent. Whether edtech companies will actually comply — especially the smaller ones operating with thin margins — remains to be seen. The incentive structure pushes hard in the other direction.
The Exemption Loophole
The DPDPA includes a provision that allows the government to exempt certain platforms from the parental consent and age verification requirements if they can show their data processing is "verifiably safe" for children. The criteria for what counts as "verifiably safe" haven't been defined.
This is the part that makes me genuinely most uneasy about the whole framework. In theory, it's sensible — a small, well-intentioned children's educational app that collects minimal data and has no ads shouldn't face the same requirements as a social media platform. But "verifiably safe" is subjective, and the exemption process is controlled by the government, which means it's subject to lobbying and political pressure. It's not hard to imagine a large platform with good government relations getting an exemption that a smaller, possibly better-behaved competitor doesn't.
What Parents Can Actually Do Right Now
The law will catch up eventually. Maybe. But your kid is using their phone right now, not in two years when enforcement might exist. So here's what's actually within your control.
On Android, set up Google Family Link. It lets you approve or block app installations, set screen time limits, see what apps are being used and for how long, and restrict content. It's free and works on any Android phone running 7.0 or later. The controls aren't perfect — a determined teenager will find workarounds — but they add meaningful friction. On iOS, Screen Time serves a similar function, with the added benefit of Apple's stricter app review process.
Go through your child's phone and check app permissions. Does that game need access to the microphone? Does the drawing app need location data? Does the photo editor need access to contacts? Revoke anything that doesn't make sense. On Android 12 and above, the Privacy Dashboard shows you which apps accessed sensitive permissions in the last 24 hours — it's revealing.
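For technically inclined parents, the same audit can be scripted. The sketch below flags sensitive permission grants from a mapping of apps to their permissions — which you could build by hand or, on a connected device, from the output of `adb shell dumpsys package <pkg>`. The app names are hypothetical and the "sensitive" list is my own judgment call, not an official Android taxonomy.

```python
# Android permission strings that rarely make sense for games or simple
# utilities. This selection is illustrative, not an official classification.
SENSITIVE = {
    "android.permission.RECORD_AUDIO": "microphone",
    "android.permission.ACCESS_FINE_LOCATION": "precise location",
    "android.permission.READ_CONTACTS": "contacts",
    "android.permission.CAMERA": "camera",
    "android.permission.READ_SMS": "SMS",
}

def flag_permissions(app_grants: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each app to the human-readable sensitive permissions it holds.

    Apps with no sensitive grants are omitted, so the output is a
    short list of things worth questioning rather than a full dump.
    """
    flagged = {}
    for app, perms in app_grants.items():
        hits = [SENSITIVE[p] for p in perms if p in SENSITIVE]
        if hits:
            flagged[app] = hits
    return flagged
```

Run against a real device's grants, anything that comes back — a drawing app with the microphone, a game with precise location — is a candidate for revoking in Settings.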
Set up CleanBrowsing or NextDNS on your home Wi-Fi. These DNS services can filter out adult content, gambling sites, and known malicious domains at the network level, meaning every device connected to your home network gets the filter automatically. CleanBrowsing's family filter is free. NextDNS gives you granular control over blocking categories.
Talk to your kids about data. I know "talk to your kids" sounds like the kind of advice that gets put on a poster and ignored, but it actually matters. Children who understand that apps collect information about them make different decisions. You don't need to terrify them — just explain that the "free" game isn't really free, that it watches what they do and tells other companies about it, and that some information (their real name, school name, address, photos) should never go into an app without talking to you first.
One specific thing parents can do right now: check whether your child's school is using any edtech platforms and what data those platforms collect. Many Indian schools adopted platforms like Google Classroom, Microsoft Teams, or various edtech apps during the COVID-era shift to online learning and never went back. These platforms collect data on your child — attendance, assignment submissions, browsing within the platform, sometimes even keystroke patterns for proctored exams. Ask the school what data is collected, who has access, and what happens to it when your child moves to the next class or leaves the school. Schools are often surprised by these questions, which tells you how little thought has gone into the answers.
Another step: review the privacy settings on YouTube specifically. If your child is under 13, they should be using YouTube Kids, which has stronger data protections and content filtering. For teenagers, you can set YouTube to "supervised experience" mode through Family Link, which limits content categories and disables some data collection features. It's not airtight, but it's significantly better than an unsupervised YouTube account with full tracking enabled.
Social Media and Teenagers: The Other Battlefield
While much of the DPDPA's children's provisions focus on consent and data collection, there's a broader issue that intersects with privacy but extends beyond it: how social media platforms use data to shape what teenagers see and feel. Instagram's own internal research, leaked in 2021 through the Facebook Papers, showed that the platform was aware that its algorithmic recommendations were harmful to teenage girls' body image and mental health. That research was about American and European teens, but the algorithm doesn't change when it crosses borders. Indian teenagers on Instagram get the same recommendation engine, the same engagement-driven content sorting, the same system that promotes content that generates emotional reactions regardless of whether those reactions are healthy.
The DPDPA's prohibition on "detrimental" data processing could, in theory, apply to algorithmic content recommendation that uses children's data to serve engagement-optimised content. An algorithm that tracks a 14-year-old's interactions to figure out that extreme fitness content keeps them scrolling longer, and then serves more of it, is processing that child's data in a way that arguably causes harm. But nobody has tested this interpretation in front of the Data Protection Board, because the Board isn't operational yet, and even when it is, proving that algorithmic harm constitutes "detrimental" data processing under the DPDPA will be a novel legal argument that could go either way.
YouTube's situation in India is worth noting separately. YouTube is the most-used app among Indian teenagers — more than Instagram, more than WhatsApp in some age groups. YouTube Kids exists as a separate product with stronger data protections, but it's designed for younger children. Teenagers use the main YouTube app, which tracks watch history, search history, and engagement patterns to build a personalised recommendation profile. That profile, under the DPDPA, arguably shouldn't exist without parental consent for users under 18. Whether YouTube has obtained or even attempted to obtain such consent for its Indian teenage user base is something I haven't seen any evidence of.
Gaming platforms occupy a similar grey zone. Roblox, BGMI, Free Fire, and Minecraft all have massive Indian underage user bases. They all collect behavioural data. They all serve ads or sell microtransactions based on that data. The DPDPA says this shouldn't happen without parental consent. The platforms continue as if the law doesn't exist, because, in practical terms, it hasn't been enforced yet.
The Gap Between Law and Reality
India's DPDPA provisions for children are genuinely ambitious. Verifiable parental consent, a ban on behavioural monitoring of minors, a prohibition on harmful data processing — these are stronger protections than what most countries have. If enforced, they'd reshape how every app serving Indian children operates. The emphasis there is on "if enforced."
The Data Protection Board isn't fully operational. The rules defining verification mechanisms haven't been published. No company has been penalised for violating children's data provisions. The platforms know this, and they're behaving accordingly — which is to say, they're not changing much.
I keep thinking about a conversation I had with a mother in Bengaluru who works in tech and knows more about data privacy than most. She'd set up Family Link, configured DNS filtering, reviewed every app on her twelve-year-old son's phone. She'd done everything right. And then her son came home from school one day and told her he'd been using his friend's phone to watch YouTube during lunch breaks — an account with no parental controls, no content filters, nothing. She laughed when she told me, but it was the kind of laugh that sits closer to exhaustion than amusement. "You can control the device," she said. "You can't control every device they'll ever touch." That, in a sentence, is the children's online privacy problem. And no law, however well-written, can fully solve it.
Written by
Priya Sharma, Senior Privacy Analyst
Priya Sharma specializes in India's Digital Personal Data Protection Act (DPDPA) and helps organizations comply with data protection regulations. She holds a law degree from NLU Delhi and has published extensively on digital rights in India.