Hackathon on Cybersecurity & AI Safety 2025–26

Hackathon on Cybersecurity & AI Safety 2025–26

March 26 - 28, 2026 | ISB, Mohali Campus
Share:

Hackathon on Cybersecurity & AI Safety 2025–26

As India accelerates its digital transformation agenda through initiatives such as Digital India, IndiaAI Mission, the DPDP Act 2023, National Cyber Security Strategy (Draft), and the expanding use of AI and ML across BFSI, governance, healthcare, telecom, and public platforms, the risks associated with cyber threats and unsafe AI systems have intensified.

Cybercrime in India has grown at a compound rate of over 40% annually, with increasing sophistication in ransomware, phishing-as-a-service, synthetic identities, crypto fraud, and deepfake-enabled scams. Law enforcement agencies, industry leaders, and policymakers consistently highlight the need for specialised talent capable of responding to next-generation cyber threats.

Simultaneously, rapid advances in Generative AI—large language models (LLMs), diffusion models, agentic AI, and autonomous systems—have triggered new categories of risks:

1. Hallucinations and misinformation at scale

2. Prompt injection and jailbreaking

3. Model exfiltration

4. Data poisoning

5. Deepfakes and impersonation attacks

6. Misuse of autonomous agentic systems

7. Lack of model interpretability and accountability

These challenges demand a new generation of researchers, cybersecurity professionals, and developers who can build secure, transparent, reliable, and responsible AI systems.

To address this national priority, we propose organising a Hackathon on Cybersecurity & AI Safety, bringing together innovators from across India to design, build, and demonstrate solutions that address real-world cyber threats and emerging AI safety challenges.

Hackathon Finale Date:
Location:
  • Identify, train, and empower emerging cybersecurity and AI safety talent.
  • Build practical skills in network defence, digital forensics, LLM safety, and adversarial ML.
  • Promote safe-by-design principles in AI development.
  • Raise awareness about risks associated with powerful AI systems.
  • Incident response innovations for CERT-In / State CERTs.
  • AI for law enforcement and cybercrime investigation.
  • Detection of malicious content such as deepfakes, misinformation, and synthetic fraud.
  • Secure data pipelines for banks, fintech’s, and digital public infrastructure (UPI, Aadhaar).
  • Build bridges between academia, industry, startups, cyber units, CERTs, policymakers, and think tanks.
  • Enable field-driven innovation that can be escalated to deployments.
  • Solutions presented during the hackathon can inform national frameworks on AI safety and cybersecurity.
  • Generate open-source tools and datasets useful for the wider ecosystem.

Scope and Thematic Tracks

Cybersecurity Tracks

Track 1: Threat Intelligence and Malware Analysis

Participants build:

  • Automated malware analysis sandboxes
  • Behavioural malware classifiers
  • Threat intelligence dashboards

Problem examples:

  • Detecting polymorphic malware
  • Reverse engineering malicious scripts
  • Identifying C2 communications


Track 2: Digital Forensics

Participants work on:

  • Mobile forensics
  • Cryptocurrency tracing
  • Disk/memory analysis

Problem examples:

  • Recovering deleted artefacts
  • Investigating cross-border fraud

This track is especially relevant for law enforcement teams.


Track 3: Cyber Fraud Detection

 

Focus on BFSI, telecom, and fintech fraud.

Problems include:

  • UPI fraud detection
  • SIM-swap detection
  • Mule account identification
  • Social engineering pattern mining

 

Track 4: LLM Safety, Robustness and Red-Teaming

 

Participants test and secure LLMs against:

  • Prompt injection
  • Jailbreaking
  • Model exfiltration
  • Hallucinations

Solutions may include:

  • Safety guardrails
  • Policy-based moderation
  • Evaluators for toxicity, bias, misinformation 

 

Track 5: Deepfake and Synthetic Media Detection

Build tools to detect:

  • Audio deepfakes
  • Video manipulation
  • Face-swaps
  • Synthetic identity images

This is critical for prevention of misinformation and political manipulation.

 

Track 6: Autonomous Agent Safety

Focus on safety for agentic AI systems.

Challenges include:

  • Preventing goal drift
  • Ethical guardrails
  • Agent alignment testing
  • Safe task execution

Cybersecurity and AI safety are no longer separate disciplines; they are two sides of the same national challenge. As AI systems become more autonomous, complex, and integrated into governance, finance, and daily life, India must develop talent and technology that ensure these systems are secure, trustworthy, and aligned with public interest.

The Cybersecurity & AI Safety Hackathon aims to:

  • Discover exceptional talent
  • Build innovative tools
  • Strengthen India’s cyber defence
  • Advance safe and responsible AI
  • Foster meaningful collaboration
  • Contribute to national digital resilience

This hackathon will serve as a flagship platform to bring together the brightest minds from across the country and empower them to create solutions that shape India’s secure digital future.

Tentative Timelines

Registration Deadline March 15, 2026
Deadline to submit the solutions March 20, 2026
Finale March 26 - 28, 2026

Venue

Knowledge City, Sector 81, SAS Nagar, Mohali, Punjab 
- 140 306