Indian Government's New Cybersecurity Guidelines for 2026
India slashed its breach reporting window from 24 hours to 6. That's just one piece of the 2026 cybersecurity guidelines -- quarterly CII audits, AI governance rules, and a tiered compliance system round out the rest.

Seventy-eight percent of Indian organisations experienced at least one cyber incident in 2025, according to CERT-In's own annual report. And somehow, the average time between breach and public disclosure sat at forty-seven days. Not hours. Days.
...which is probably why the new cybersecurity guidelines landed in early January with a noticeably sharper tone than their predecessors. CERT-In's 2022 directives already had teeth -- the 24-hour breach reporting mandate caught plenty of companies off-guard back then. But the 2026 update goes further. Quite a bit further, actually. Six-hour reporting windows. Mandatory zero-trust for banks and hospitals. AI governance requirements that didn't exist twelve months ago. We're looking at a different regulatory posture altogether, and it's worth walking through what changed, who's affected, and what ordinary people should probably pay attention to.
Breach Reporting Just Got a Lot Tighter
Let's start with the headline change. Organisations now have six hours to report a cybersecurity incident to CERT-In after detecting it. Six. Down from twenty-four.
I'll be honest -- most security teams I've spoken with think this is aggressive. Maybe too aggressive. One CISO at a mid-size fintech told me, off the record, that his team sometimes needs six hours just to figure out whether an alert is a genuine breach or a false positive. And now that entire window is supposed to include filing a formal report?
But here's the thing. India isn't inventing this timeline out of thin air. Singapore runs a 72-hour window under PDPA, sure, but South Korea's been pushing for same-day notification, and the EU's NIS2 directive expects "without undue delay" -- which regulators have interpreted as roughly 24 hours for the initial heads-up. India's six-hour demand puts it among the most aggressive reporting regimes on the planet.
What counts as a reportable incident hasn't changed much from 2022. Data breaches, ransomware, unauthorised access to systems processing personal data, compromise of any infrastructure that CERT-In classifies as significant -- it's a broad net. The real shift is speed. You detect something at 2 AM on a Saturday, and your compliance team better have a mechanism for getting that report filed by breakfast.
Penalties got stiffer too. Failure to report within the window can now attract fines up to ₹5 crore for first offences, with repeat violations potentially leading to operational restrictions. That last bit -- operational restrictions -- is new and, frankly, a little alarming for companies that haven't built out their incident response workflows yet.
Critical Information Infrastructure: A Different Standard Entirely
If you work in banking, telecom, power generation, healthcare, or transportation, the 2026 guidelines basically rewrote your compliance playbook. Entities tagged as Critical Information Infrastructure (CII) face requirements that would've seemed excessive even two years ago.
Four big mandates stand out.
Quarterly security audits. Not annual. Quarterly. And they've got to be conducted by auditors on CERT-In's empanelled list -- you can't just hire your cousin's IT consultancy and call it a day. Each audit needs to cover vulnerability assessments, penetration testing, configuration reviews, and compliance gap analysis. That's a significant operational load, especially for hospitals and power utilities that historically haven't invested heavily in cybersecurity staffing.
Zero-trust architecture. CERT-In isn't just name-dropping the concept anymore. CII operators must demonstrate they've implemented zero-trust principles across internal networks. Every access request verified. No implicit trust for any device or user, regardless of whether they're sitting inside the corporate firewall. I've seen some organisations treat "zero-trust" as a marketing slide rather than an actual security model; that approach won't fly under the new framework. Auditors will be checking for micro-segmentation, continuous authentication, and least-privilege access controls.
Offline backups, updated weekly. Ransomware is why. After the AIIMS attack in late 2022 crippled one of India's premier medical institutions for nearly two weeks, the writing was on the wall. CII operators must maintain air-gapped backup systems that get refreshed at minimum once a week. Not cloud snapshots. Not replicated storage. Physically disconnected copies of data that a ransomware worm can't reach through network propagation.
Dedicated Security Operations Centres with round-the-clock staffing. A 24/7 SOC isn't cheap to run. You need analysts on three shifts, SIEM tools, threat intelligence feeds, playbooks, escalation procedures -- the whole apparatus. For large banks and telecom giants, this is probably business as usual. For a mid-tier hospital network or a state-level power distribution company? It's a massive step up. The guidelines do allow for outsourced SOC arrangements, which softens the blow somewhat, but even managed SOC contracts run into the crores annually.
AI Governance: The Part Nobody Saw Coming
Alright, maybe some people saw it coming. India's been making noises about AI regulation since at least 2023. But the fact that AI governance landed inside cybersecurity guidelines -- rather than as a standalone policy -- caught a few industry watchers by surprise.
Here's what the 2026 framework requires for any organisation deploying AI or machine learning systems that process personal data:
Mandatory risk assessments before deployment. You can't just spin up a model and push it to production. Each AI system needs a documented risk assessment covering data handling practices, potential for discriminatory outcomes, failure modes, and exposure to adversarial manipulation. The assessment has to be reviewed and updated whenever the model gets retrained or significantly modified.
Adversarial testing. This one's interesting. Organisations must conduct adversarial testing -- basically, trying to break their own AI systems -- before deployment and at regular intervals afterward. We're talking about prompt injection testing for large language models, data poisoning checks for recommendation engines, evasion attack simulations for computer vision systems. It's a surprisingly specific requirement and suggests that whoever drafted this section actually understood the threat model rather than just dropping buzzwords.
Audit trails for consequential decisions. If an AI system makes or materially influences a decision that affects an individual -- think loan approvals, insurance underwriting, hiring screening, content moderation -- the organisation must maintain a retrievable audit trail. Not just "we logged it somewhere." A structured, queryable record that a regulator or affected individual could request and actually make sense of.
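To make the audit-trail requirement concrete, here is a minimal sketch of what "structured and queryable" could mean in practice. This is an illustration only: the table layout, field names, and the choice of SQLite as storage are my assumptions, not a format the guidelines prescribe.

```python
import json
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of a queryable decision audit trail.
# Schema and field names are illustrative assumptions, not a
# format prescribed by the 2026 guidelines.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE decision_audit (
        decision_id   TEXT PRIMARY KEY,
        occurred_at   TEXT NOT NULL,
        model_version TEXT NOT NULL,
        subject_id    TEXT NOT NULL,
        decision      TEXT NOT NULL,
        inputs_json   TEXT NOT NULL   -- serialised features the model saw
    )
""")

def record_decision(decision_id, model_version, subject_id, decision, inputs):
    conn.execute(
        "INSERT INTO decision_audit VALUES (?, ?, ?, ?, ?, ?)",
        (decision_id,
         datetime.now(timezone.utc).isoformat(),
         model_version, subject_id, decision,
         json.dumps(inputs, sort_keys=True)),
    )
    conn.commit()

record_decision("d-001", "loan-scorer-v3", "applicant-42",
                "rejected", {"income": 54000, "score": 612})

# A regulator-style query: every decision affecting one individual.
rows = conn.execute(
    "SELECT decision, model_version, inputs_json FROM decision_audit "
    "WHERE subject_id = ?", ("applicant-42",)
).fetchall()
print(rows[0][0])  # rejected
```

The point of the structure is the last query: an affected individual's entire decision history is retrievable with one lookup, which a pile of unstructured log files can't offer.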
I should note there's some ambiguity in the guidelines about what counts as "processing personal data" in the AI context. Does a model trained on anonymised datasets but deployed in a way that generates personalised outputs qualify? The text seems to suggest yes, but I'd expect litigation and clarification requests to pile up over the next year or so. Probably inevitable with any regulation this new.
Supply Chain Security: Your Vendors Are Your Problem Now
SolarWinds changed everything. Not just in the US -- globally. The 2026 Indian guidelines make it explicit: your cybersecurity posture includes the security of every third-party vendor, software provider, and supply chain partner you work with.
Specifically, organisations must:
- Require vendors to demonstrate adherence to baseline security standards before entering into contracts
- Maintain an updated inventory of all third-party software dependencies, including open-source components
- Conduct periodic security assessments of vendors handling sensitive data or with access to internal systems
- Include cybersecurity compliance clauses in all new vendor agreements
That software inventory requirement is going to hurt. Most companies I've worked with have, at best, a vague awareness of what open-source libraries sit in their production codebase. Software Bill of Materials (SBOM) generation is still relatively uncommon outside of regulated industries in India. And now everyone's supposed to keep one updated? Good luck to the procurement teams figuring out how to audit a vendor's dependency tree when half of those dependencies are maintained by anonymous contributors on GitHub.
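As a starting point for that inventory, a Python shop can at least enumerate its installed distributions with the standard library. This is a bare-bones sketch, not real SBOM tooling: proper SBOM formats like CycloneDX and SPDX also capture licences, hashes, and transitive relationships.

```python
import json
from importlib import metadata

# Minimal sketch: enumerate installed Python distributions into a flat
# name/version inventory. A real SBOM (CycloneDX, SPDX) carries much
# more -- licences, component hashes, dependency relationships.
def build_inventory():
    inventory = []
    for dist in metadata.distributions():
        inventory.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
        })
    return sorted(inventory, key=lambda d: (d["name"] or "").lower())

inv = build_inventory()
print(json.dumps(inv[:3], indent=2))  # first few entries of the inventory
```

Even this crude list is better than nothing when an auditor asks what's running in production; graduating to a proper SBOM generator is the obvious next step.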
Still, the intent is sound. Supply chain attacks accounted for roughly 15% of major cyber incidents in India during 2025, up from around 8% in 2023. Ignoring the problem wasn't working.
Tiered Compliance: One Size Doesn't Fit All (Finally)
Maybe the most practical decision in the entire document is the tiered compliance framework. Previous iterations of CERT-In's guidelines applied more or less uniformly, which meant a five-person startup theoretically had the same obligations as TCS. That was never realistic, and everyone knew it.
The 2026 guidelines split organisations into three tiers:
Tier 1 -- Large enterprises. Companies with annual revenue above ₹500 crore or processing data of more than 10 lakh individuals. Full compliance. Every requirement applies. SOCs, zero-trust, quarterly audits, AI governance, supply chain vetting -- the works. No phase-in period; these organisations were expected to begin compliance immediately upon publication.
Tier 2 -- Mid-size businesses. Revenue between ₹50 crore and ₹500 crore, or processing data of 1 lakh to 10 lakh individuals. Core security controls apply -- encryption standards, access management, incident reporting. Annual audits instead of quarterly. They get a 12-month transition period for the more demanding requirements. AI governance rules apply only if they're actively deploying AI systems, which is a reasonable carve-out.
Tier 3 -- Small businesses. Revenue below ₹50 crore, processing data of fewer than 1 lakh individuals. Baseline security hygiene -- strong passwords, updated software, basic access controls, simplified incident reporting. Annual self-assessments rather than formal audits. The guidelines specifically note that Tier 3 organisations can use "lightweight, cost-effective security solutions" and aren't expected to build dedicated security teams.
Honestly? This is a decent structure. It won't satisfy everyone -- some mid-size companies will argue they should be Tier 3, and some large enterprises will grumble about the quarterly audit cadence -- but the general approach of scaling expectations to capacity makes sense. You can't regulate a chai shop's website the same way you regulate HDFC Bank's core banking platform.
Enforcement: How Serious Are They?
Regulations without teeth are just suggestions. So how does enforcement work?
CERT-In's authority to conduct audits and investigations was already established, but the 2026 guidelines expand it. They can now issue binding directions to any organisation -- not just CII operators -- requiring specific remediation actions within defined timescales. Failure to comply with a binding direction carries penalties that scale based on tier classification and the severity of the deficiency.
There's also a new mechanism for "cybersecurity compliance certificates." Organisations in Tier 1 and Tier 2 must obtain these certificates annually, demonstrating they've met the applicable requirements. Think of it like a PCI-DSS attestation, but broader in scope and issued under Indian regulatory authority. The certificate needs to be signed off by both the organisation's designated CISO (or equivalent) and an empanelled auditor.
Will enforcement be consistent? Hard to say. CERT-In doesn't have unlimited staff, and the sheer number of organisations that fall under these guidelines is enormous. Early enforcement will likely focus on CII operators and Tier 1 companies, with Tier 2 and 3 receiving more attention as CERT-In scales up its audit capacity. I wouldn't be surprised if the first couple of years involve a mix of genuine penalties and a lot of strongly-worded warning letters.
What Regular People Should Know
Most of this is written in the language of corporate compliance, and if you're not a CISO or a lawyer, your eyes probably glazed over around paragraph three. Fair enough. But these guidelines do matter if you're just a regular person using apps, banking online, or handing your Aadhaar number to various service providers.
Here's the practical version:
You'll hear about breaches faster. When a company loses your data, they now have six hours to tell CERT-In. CERT-In then has its own processes for public notification. Will you hear within the same day? Maybe not. But the cascade starts much sooner than before, which means you'll learn about compromised passwords and leaked personal data days earlier than you would've under the old rules. Earlier notification means earlier action -- changing passwords, freezing accounts, whatever's appropriate.
Your bank's and hospital's defences are getting an upgrade. Mandatory quarterly audits, zero-trust networks, offline backups -- these requirements directly reduce the chance that a ransomware gang holds your medical records hostage or that a banking breach exposes your account details. We're not talking about theoretical improvements; these are specific, measurable security controls with audit checkpoints.
AI systems can't operate in a black box anymore. If an algorithm denies your loan application or flags your insurance claim, the organisation behind it must maintain records of how that decision was made. You've got a better shot at contesting an unfair automated decision when there's an actual audit trail instead of just "the algorithm said so."
Companies must actually vet the tools they use. That food delivery app you love? If it's using a third-party payment processor or a cloud analytics service that has terrible security practices, the app company is now on the hook for that. Supply chain accountability means your data is only as safe as the weakest link -- and now there's regulatory pressure to strengthen every link.
Honest Gaps and Criticisms
No regulation is perfect, and pretending otherwise doesn't help anyone. A few concerns worth flagging:
Six hours might be unrealistic for complex incidents. A sophisticated nation-state attack can take weeks to fully scope. Requiring a report within six hours of "detection" creates pressure to file incomplete or speculative reports. Some organisations may end up over-reporting minor security events just to stay safe, which could flood CERT-In with noise and make it harder to spot genuine catastrophic breaches.
Small businesses face an awareness gap. Tier 3's requirements are lighter, sure, but many small Indian businesses don't have anyone on staff who understands what "baseline security hygiene" means in practice. Without significant investment in awareness programmes and accessible guidance materials, the Tier 3 framework risks becoming a box-ticking exercise that doesn't actually improve security on the ground.
AI governance definitions need work. As I mentioned, the boundary between "AI that processes personal data" and "AI that doesn't" is blurry. What about recommendation systems trained on aggregate behavioural data? What about models that don't store personal data but could theoretically be reverse-engineered to reveal it? These edge cases will need clarification, probably through addenda or judicial interpretation over time.
Enforcement capacity is the elephant in the room. India has hundreds of thousands of businesses that'll fall under some tier of this framework. CERT-In's auditing capacity, even with empanelled third-party auditors, has limits. Selective enforcement is almost guaranteed in the near term, which creates an uneven playing field between companies that voluntarily comply and those that gamble on not being checked.
How India Stacks Up Against Other Countries
India's approach is interesting because it tries to be both aggressive and pragmatic at the same time. The EU's NIS2 directive covers similar ground but applies primarily to "entities of high criticality" -- it doesn't really have a Tier 3 equivalent that reaches small businesses. Singapore's Cybersecurity Act focuses heavily on CII operators and doesn't wade into AI governance at all yet. Japan's approach through NISC is more guidance-oriented and less penalty-driven.
On breach reporting timelines, India's now among the strictest globally. The US doesn't even have a unified federal breach notification law -- it's a patchwork of state regulations, most allowing 30 to 60 days. Australia requires notification "as soon as practicable" after becoming aware of a breach, with no hard hourly deadline.
Where India arguably leads is in combining cybersecurity and AI governance within a single framework. Most countries are treating these as separate regulatory tracks. There's a logic to bundling them -- AI security is cybersecurity, after all -- but it also means India's guidelines are trying to cover an enormous amount of ground in one document. Whether that ambition translates into effective implementation is another question entirely.
Preparing for Compliance: Practical Steps
For businesses reading this and feeling slightly panicked, a few concrete things to prioritise:
Figure out your tier first. Everything flows from that classification. Revenue figures and data processing volumes determine which requirements apply to you. Get this wrong and you'll either over-invest in compliance you don't need or under-invest and face penalties down the road.
Build or fix your incident response plan. Six-hour reporting requires a rehearsed, documented process. Who gets called at 2 AM? What information needs to go in the initial report? Where does that report get filed? If you can't answer these questions right now, start there.
Audit your vendor relationships. Make a list of every third-party service that touches your data or connects to your systems. Check their security certifications, ask about their own compliance posture, and update your contracts to include cybersecurity clauses. Yes, this is tedious. Do it anyway.
If you're deploying AI, document everything. Risk assessments, testing results, decision logs -- start building that paper trail now. Retroactively documenting AI systems is exponentially harder than documenting them as you build and deploy.
Talk to your auditor early. Empanelled auditors are going to be in high demand over the coming months. If you're Tier 1 and need quarterly audits, secure your auditing relationship sooner rather than later. Waiting until Q3 to start looking for an available auditor is a recipe for non-compliance.
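On the incident-response point, one concrete preparation is a pre-agreed initial-report template, so the on-call engineer at 2 AM fills in blanks instead of drafting from scratch. The field names below are illustrative assumptions, not CERT-In's actual reporting form.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Sketch of a pre-agreed initial-report payload. Fields are
# illustrative assumptions, not CERT-In's actual form.
@dataclass
class InitialIncidentReport:
    detected_at_utc: str
    incident_type: str              # e.g. "ransomware", "unauthorised access"
    affected_systems: list
    personal_data_involved: bool
    containment_actions: list
    reporter_contact: str
    details_confirmed: bool = False  # flags an early, still-speculative report

report = InitialIncidentReport(
    detected_at_utc=datetime.now(timezone.utc).isoformat(),
    incident_type="suspected ransomware",
    affected_systems=["billing-db-01"],
    personal_data_involved=True,
    containment_actions=["isolated host from network"],
    reporter_contact="soc-oncall@example.com",
)
print(json.dumps(asdict(report), indent=2))
```

The `details_confirmed` flag matters under a six-hour clock: it lets you file fast while signalling that scoping is still under way, then follow up once the picture is clearer.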
Looking Ahead
India's cybersecurity regulatory posture has shifted noticeably over the past four years, from the 2022 CERT-In directives to DPDPA in 2023 to these 2026 guidelines. Each iteration gets more specific, more demanding, and harder to ignore. Whether the enforcement machinery can keep pace with the regulatory ambition -- well, we'll see. That's always been the question with Indian tech regulation, hasn't it? Great frameworks on paper, inconsistent follow-through on the ground.
But the direction is clear. Faster reporting. Harder audits. AI accountability. Vendor responsibility. Every cycle ratchets the baseline upward. Companies that treat compliance as a one-time project rather than an ongoing practice will find themselves perpetually scrambling to catch up. And ordinary citizens, whether they read regulatory documents or not, will gradually feel the effects -- in slightly faster breach notifications, in somewhat more accountable AI systems, in marginally better-protected personal data.
It's not a revolution. It's a slow, grinding improvement. And for Indian cybersecurity, that might be exactly what's needed.
Written by
Priya Sharma, Senior Privacy Analyst
Priya Sharma specializes in India's Digital Personal Data Protection Act (DPDPA) and helps organizations comply with data protection regulations. She holds a law degree from NLU Delhi and has published extensively on digital rights in India.