
Budding Forensic Expert

Digital Forensics | March 5, 2026 | 12 min read

What Is GAN Signature Analysis? India's NCRB & IITs Are Using It to Catch Deepfake Criminals

  Forensic Alert

India has witnessed a 550% surge in deepfake-related cybercrime since 2019, with projected losses reaching ₹70,000 crore. In response, the National Crime Records Bureau (NCRB) has partnered with IIT Delhi and IIT Hyderabad to deploy GAN Signature Analysis — a cutting-edge forensic method that detects the invisible 'fingerprints' left by AI-generated deepfakes. This article breaks it all down for forensic science students and professionals.

1. Introduction: The Deepfake Epidemic in India

Imagine receiving a video call from your bank manager asking you to urgently transfer funds. His face, voice, mannerisms — everything looks real. But it is not him. It is a deepfake, a synthetic video created by Artificial Intelligence so convincing that even trained professionals can be deceived.

This is not a hypothetical scenario. In June 2025, a 79-year-old woman in Bengaluru lost ₹35 lakhs after watching deepfaked videos of N.R. Narayana Murthy — the Infosys co-founder — promoting a bogus trading platform. In 2023, actor Rashmika Mandanna's face was superimposed onto another woman's body in an explicit video that went viral across WhatsApp and Telegram before police intervened. During India's 2024 general elections, synthetic audio clips appeared showing Congress MP Manish Tewari making inflammatory speeches in Haryanvi — a language he does not speak.

India's deepfake problem has grown from a novelty to a national security threat. The country now ranks sixth globally in vulnerability to deepfake adult content, while the financial sector, political landscape, and judicial system face mounting pressure to respond. Traditional methods of digital verification — visual inspection, metadata analysis — are no longer sufficient.

Enter GAN Signature Analysis: a forensic breakthrough that identifies the invisible mathematical markers left by AI generation engines inside every deepfake image or video.



2. Understanding the Numbers: India's Deepfake Crisis by the Data

Before exploring the forensic solution, it is essential to understand the scale of the problem:

| Statistic | Figure | Source |
|---|---|---|
| Increase in deepfake cybercrime cases since 2019 | 550% | Pi-Labs Report, 2024 |
| Projected deepfake financial losses in India by 2025 | ₹70,000 crore | Pi-Labs Report, 2024 |
| Indians who have encountered deepfake content | 1 in 4 | IAMAI Survey, 2024 |
| Spoofing rate in India's daily video KYC calls | 86% | Industry estimate, 2024 |
| Cybercrime cases reported in India (2023) | 86,420+ | NCRB Annual Report, 2023 |
| Financial fraud incidents reported to I4C (2024) | 36 lakh+ | MHA / I4C, 2024 |

According to the National Crime Records Bureau (NCRB), cybercrime in India jumped from 52,974 cases in 2021 to over 86,420 in 2023 — a 63% rise — and preliminary data from I4C (Indian Cybercrime Coordination Centre) shows over 36 lakh financial fraud incidents in 2024 alone, up nearly 49% from the prior year.

The Reserve Bank of India's Financial Stability Report (December 2024) explicitly flagged deepfakes and AI-driven phishing as major systemic risks to India's financial ecosystem. CERT-In published a dedicated advisory on deepfake threats in November 2024. The crisis is real, rapidly evolving, and demands forensic-level countermeasures.

3. What Is a GAN? The Technology Behind Deepfakes

3.1 The Generator-Discriminator Architecture

To understand GAN Signature Analysis, one must first understand what a GAN is. A Generative Adversarial Network (GAN) is a type of artificial intelligence framework consisting of two competing neural networks:

  • The Generator: Creates synthetic images, videos, or audio that look as real as possible.
  • The Discriminator: Evaluates the output of the Generator and attempts to detect whether the content is real or fake.

These two networks are locked in a perpetual adversarial game. The Generator tries to fool the Discriminator; the Discriminator tries to catch the Generator. Over millions of training cycles, the Generator becomes extraordinarily good at producing content that appears authentic to human observers.

Modern GAN variants like StyleGAN, ProGAN, StarGAN, and CycleGAN can produce photorealistic human faces that do not belong to any real person, swap faces between videos with surgical precision, clone voices, and synthesize facial expressions frame-by-frame to create lip-synced speeches that never happened.
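For readers who want to see the adversarial game in code rather than prose, here is a deliberately tiny numerical sketch: toy 1-D data, a one-parameter logistic discriminator, and a scale-and-shift generator. All names, shapes, and starting values are illustrative choices, nothing like a production GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: "real" data ~ N(4, 1). The Generator is a learnable
# scale/shift of Gaussian noise; the Discriminator is one logistic unit.
real = rng.normal(4.0, 1.0, size=256)

def generator(z, theta):
    scale, shift = theta
    return scale * z + shift

def discriminator(x, w):
    weight, bias = w
    return 1.0 / (1.0 + np.exp(-(weight * x + bias)))  # P(x is real)

theta = np.array([1.0, 0.0])   # G initially outputs N(0, 1)
w = np.array([0.5, -1.0])

z = rng.normal(size=256)
fake = generator(z, theta)

# The adversarial objective: D maximizes log D(real) + log(1 - D(fake));
# G minimizes log(1 - D(fake)), or equivalently maximizes log D(fake),
# the "non-saturating" form used in practice.
d_loss = -(np.log(discriminator(real, w)).mean()
           + np.log(1.0 - discriminator(fake, w)).mean())
g_loss = -np.log(discriminator(fake, w)).mean()

print(f"D loss: {d_loss:.3f}  G loss: {g_loss:.3f}")
```

In a real GAN both players are deep convolutional networks and these two losses are minimized alternately over millions of batches, but the structure of the objective is exactly this.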

3.2 How Deepfakes Are Made

The deepfake creation pipeline typically involves four steps:

  • Data Collection: Hundreds or thousands of images or video frames of the target person are collected from public sources such as social media, news footage, or films.
  • Model Training: A GAN is trained on this data, learning the precise facial structure, skin texture, lighting responses, and expressions of the target.
  • Face Synthesis / Swap: The trained Generator produces a new face or overlays the target's face onto another body in a video.
  • Post-Processing: Blending artifacts are smoothed, lighting is adjusted, and lip synchronization is refined to make the content indistinguishable from authentic recordings.

The entire process, which once required a supercomputer and weeks of work, can now be completed in minutes on a consumer-grade laptop using any of the more than 50 commercially available deepfake apps.



4. GAN Signature Analysis: The Forensic Counter-Strike

4.1 The Core Principle: GANs Leave Fingerprints

Here is the fundamental forensic insight that makes GAN Signature Analysis possible: every AI generation engine, regardless of how sophisticated, leaves involuntary 'fingerprints' inside the content it creates. These signatures are not visible to the human eye, but they are consistently present and detectable through computational analysis.

  Forensic Concept

Just as every firearm leaves unique toolmarks on a bullet casing, every GAN model leaves unique mathematical artifacts inside the images or videos it generates. These are called GAN Fingerprints or GAN Signatures.

According to peer-reviewed research published in Applied Sciences (2025), the detection of fake images generated by GANs fundamentally relies on identifying these synthetic signatures. While some artifacts are visible to trained observers, the truly forensically valuable ones are invisible markers that require specialized digital analysis techniques to detect.

4.2 Where GAN Signatures Come From: The Upsampling Artifact

The primary source of GAN fingerprints is a mathematical process called upsampling — the mechanism by which GANs convert low-resolution noise into high-resolution realistic images. During upsampling, the generator applies successive transposed-convolution (deconvolution) and interpolation layers across multiple passes. These operations introduce characteristic periodic patterns into the frequency domain of the generated image.

Research from the University of Bonn and multiple institutions has demonstrated that GAN-generated images consistently fail to reproduce the spectral distributions of real photographs. Specifically:

  • Real photographs captured by cameras contain noise patterns unique to the camera sensor — similar to human fingerprints.
  • GAN-generated images contain quasi-periodic spectral artifacts in the high-frequency domain that arise directly from the upsampling architecture.
  • These artifacts vary between different GAN architectures (e.g., ProGAN produces different frequency artifacts than StyleGAN2 or StarGAN), allowing forensic analysis not only to detect a deepfake but also to identify which specific AI model generated it.

A 2025 study on Fourier-Based GAN Fingerprint Detection demonstrated that by applying a 2D Discrete Fourier Transform (DFT) to images and training a ResNet50 classifier on the transformed data, detection accuracy reached 92.8% with an AUC of 0.95 — far outperforming models that operated only on raw pixel data.
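The spectral replication that upsampling causes is easy to verify directly. The sketch below uses zero-insertion upsampling, the first stage of a transposed convolution, and shows that the Fourier spectrum of the upsampled map is an exact periodic tiling of the original spectrum. The array sizes and the 4x factor are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 16x16 feature map, upsampled 4x by zero-insertion: the first stage of
# the transposed-convolution ("deconvolution") upsampling used inside GAN
# generators.
small = rng.normal(size=(16, 16))
up = np.zeros((64, 64))
up[::4, ::4] = small

spec = np.abs(np.fft.fft2(up))

# Zero-insertion replicates the 16x16 spectrum as an exact 4x4 periodic
# tiling: the grid-like, quasi-periodic structure that frequency-domain
# detectors key on.
tile = spec[:16, :16]
print(np.allclose(spec[16:32, :16], tile))   # True
print(np.allclose(spec[:16, 16:32], tile))   # True
```

In a full generator, the interpolation filters that follow attenuate these replicas but rarely eliminate them; the residues are the quasi-periodic peaks that pipelines like the DFT + ResNet50 approach learn to find.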

4.3 The Taxonomy of GAN Artifacts

Forensic researchers have developed a comprehensive taxonomy of GAN residues, categorizing them into visible and invisible types:

| Artifact Type | Visibility | Examples | Forensic Use |
|---|---|---|---|
| Visual Artifacts | Human-detectable | Unnatural eye reflections, blurred hair, asymmetric face, lighting mismatch | First-pass visual screening |
| GAN Fingerprints (Spectral) | Invisible | Unique frequency-domain patterns from upsampling architecture | Model attribution, deepfake confirmation |
| Spatial Domain Features | Invisible | Pixel saturation anomalies, color histogram irregularities in YCbCr / HSV space | Statistical forgery detection |
| Frequency Space Features | Invisible | Checkerboard patterns in DCT spectrum, periodic grid artifacts, anomalous power spectrum | Architecture-level identification |
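As a concrete, heavily simplified illustration of the "pixel saturation anomalies" idea in the spatial-domain row: camera pipelines clip highlights and shadows to the 0/255 extremes, while generator outputs pass through a squashing nonlinearity such as tanh and essentially never reach them. The synthetic patches and the statistic below are illustrative stand-ins, not a production feature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: a camera-like patch with sensor clipping, vs. a GAN-like
# patch whose values come out of a tanh squashing nonlinearity, as in
# most generator architectures.
camera_like = np.clip(rng.normal(128, 90, size=(64, 64, 3)), 0, 255)
gan_like = 127.5 * (np.tanh(rng.normal(0.0, 0.8, size=(64, 64, 3))) + 1.0)

def saturation_fraction(img):
    """Fraction of values pinned at the 0/255 extremes. Camera pipelines
    routinely clip highlights and shadows; tanh-squashed GAN outputs
    approach but essentially never reach the extremes."""
    return float(np.mean((img <= 0) | (img >= 255)))

print(saturation_fraction(camera_like))  # noticeably above zero
print(saturation_fraction(gan_like))     # essentially zero
```

Production detectors compute richer statistics of this kind in YCbCr or HSV space, as the table notes.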

5. India's Response: NCRB + IIT Delhi + IIT Hyderabad

5.1 The Collaboration

According to a research paper published in the International Journal for Multidisciplinary Research (IJFMR, 2025), the National Crime Records Bureau (NCRB) — India's apex crime statistics and forensic coordination body — is actively collaborating with the Indian Institutes of Technology in Delhi and Hyderabad to develop AI detection systems that employ GAN Signature Analysis to locate the synthetic origin of deepfake content.

  India-Specific Development

NCRB, IIT Delhi, and IIT Hyderabad are jointly developing AI-powered detection systems using GAN signature analysis specifically calibrated for Indian linguistic and demographic contexts. These systems are currently being introduced in select cybercrime units across India for testing and validation.

This initiative is part of a broader ecosystem of institutional responses. The Ministry of Home Affairs established the Indian Cyber Crime Coordination Centre (I4C) and has provided forensic infrastructure support to 20 states and UTs under the Nirbhaya-funded scheme. The CyMAC (Cyber Multi Agency Centre) was formed on 22 January 2025 to address cybersecurity threats and misuse of emerging technologies at the national level.

The collaboration between NCRB and the IITs represents India's most technically advanced response to the deepfake challenge, bringing together law enforcement data, academic research capability, and forensic deployment expertise under a single framework.

5.2 What These Systems Are Designed to Do

The NCRB-IIT GAN Signature Analysis systems are designed to perform several functions critical to criminal investigation:

  • Deepfake vs. Authentic Classification: Binary determination of whether a piece of digital media is AI-generated or authentic. This is the foundational capability required before any criminal prosecution can proceed.
  • Model Attribution: Identification of which specific GAN model or deepfake application was used to create the content. This is analogous to identifying the specific firearm used in a crime — it narrows the investigative field and links suspects to specific tools.
  • Temporal Origin Analysis: Estimation of when the deepfake was created based on the generation artifacts, which can help establish timelines critical to alibis and criminal chronologies.
  • Cross-Platform Detection: Identification of GAN signatures that survive social media compression, video encoding, and screenshot processing — the real-world conditions under which digital evidence is collected.
  • Real-Time Monitoring Integration: Long-term deployment goal of integrating detection into cybercrime reporting portals and law enforcement digital evidence collection pipelines.

5.3 Technical Architecture of the Detection System

While full technical specifications of the NCRB-IIT systems remain confidential for security reasons, the detection architecture is based on established research principles:

  • Step 1 — Preprocessing: The suspicious image or video frame is extracted and pre-processed. Compression artifacts from WhatsApp, Telegram, or social media platforms are compensated for using noise-removal algorithms.
  • Step 2 — Frequency Domain Transformation: A 2D Discrete Fourier Transform (DFT) or Discrete Cosine Transform (DCT) is applied to convert the image from the spatial domain (pixels) into the frequency domain, where GAN artifacts become statistically detectable.
  • Step 3 — GAN Fingerprint Extraction: Specialized neural network architectures — trained on datasets of known GAN outputs and authentic Indian media — extract the unique spectral signatures from the frequency representation.
  • Step 4 — Multi-Modal Analysis: For video deepfakes, temporal inconsistency analysis is added: GAN-generated videos often fail to maintain consistent facial expressions and lighting across adjacent frames, creating detectable temporal artifacts.
  • Step 5 — Probabilistic Report Generation: The system outputs a forensic confidence score, the most likely source GAN architecture, and supporting visualization data that can be presented as expert evidence in court.
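To give a feel for how such a pipeline hangs together, the toy implementation below mirrors Steps 1, 2, 3, and 5 on a single frame (Step 4's temporal analysis needs multiple frames and is omitted). Every function name, the periodicity feature, and the decision threshold are illustrative assumptions; the actual NCRB-IIT models are confidential.

```python
import numpy as np

def preprocess(frame):
    """Step 1 (stand-in): normalize to zero mean, unit variance.
    A real system would also compensate for platform compression here."""
    f = frame.astype(float)
    return (f - f.mean()) / (f.std() + 1e-9)

def spectrum(frame):
    """Step 2: magnitude of the 2-D DFT."""
    return np.abs(np.fft.fft2(frame))

def periodicity_score(spec, period=16):
    """Step 3 (stand-in fingerprint): correlation of the spectrum with a
    copy shifted by one upsampling period; near 1.0 when the grid-like
    replication left by zero-insertion upsampling is present."""
    shifted = np.roll(spec, period, axis=0)
    return float(np.corrcoef(spec.ravel(), shifted.ravel())[0, 1])

def report(frame, threshold=0.5):
    """Step 5: a confidence-style verdict from the toy score."""
    score = periodicity_score(spectrum(preprocess(frame)))
    return {"score": round(score, 3), "flagged": bool(score > threshold)}

rng = np.random.default_rng(3)
small = rng.normal(size=(16, 16))
suspect = np.zeros((64, 64))
suspect[::4, ::4] = small            # GAN-style upsampled frame
authentic = rng.normal(size=(64, 64))  # camera-noise stand-in

print(report(suspect))    # flagged
print(report(authentic))  # not flagged
```

A deployed system replaces the hand-crafted score with a trained neural classifier and adds calibrated error rates, but the preprocess-transform-extract-report flow is the same.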


6. Other Indian Institutions in the Fight Against Deepfakes

6.1 IIT Bombay

Professor Anurag Mehra and researchers at IIT Bombay have been publicly vocal about India's deepfake vulnerability and the need for accessible detection tools. Their research focuses on making detection algorithms lightweight enough for deployment on mobile devices — critical for a country where 886 million people access the internet primarily through smartphones.

6.2 CERT-In (Indian Computer Emergency Response Team)

CERT-In published a formal advisory in November 2024 detailing deepfake threats and countermeasures. The advisory represents the first formal government guidance document specifically addressing AI-generated synthetic media detection protocols for Indian organizations.

6.3 I4C — India AI Cyber Guard Hackathon

In partnership with India AI, I4C launched the India AI Cyber Guard AI Hackathon, inviting developers and researchers to build AI-powered systems for automatic classification of cybercrime incidents including deepfake-related fraud. This initiative directly feeds innovative research into operational deployment pipelines.

6.4 Election Commission of India

The Election Commission of India issued a directive in early 2025 mandating that all political campaign material generated using AI must carry a visible 'AI-Generated' label — a policy response that presupposes the ability to detect synthetic content reliably.

7. How GAN Signature Analysis Differs from Conventional Digital Forensics

Traditional digital forensics relied on metadata examination (EXIF data, creation timestamps, GPS coordinates), hash verification (MD5, SHA-256 checksums), and visual analysis of pixel-level manipulations. These methods are fundamentally inadequate against deepfakes because:

  • AI-generated deepfakes contain no authentic metadata — they are generated fresh from noise, not captured by a camera.
  • Hash verification cannot detect content manipulation when the entire content is synthetic.
  • Visual analysis by human experts fails: in controlled studies, deepfakes produced by modern GAN systems are indistinguishable from authentic footage to human observers.

GAN Signature Analysis operates at a fundamentally different level — it does not ask 'Has this image been edited?' but rather 'Does this image contain the statistical properties of a camera-captured photograph, or the mathematical artifacts of a GAN generation process?'

| Parameter | Traditional Digital Forensics | GAN Signature Analysis |
|---|---|---|
| Method | Metadata, hash, visual inspection | Frequency-domain spectral analysis + deep learning |
| Effective against deepfakes? | No — metadata absent, visual detection fails | Yes — detects invisible mathematical artifacts |
| Model attribution | Not possible | Identifies specific GAN used (ProGAN, StyleGAN, etc.) |
| Accuracy (current research) | ~50–60% (near random for modern deepfakes) | 92–97% (MDPI, Springer, 2025 studies) |
| Court admissibility | Established precedent | Emerging — actively being legislated in India |
| Survives compression? | Partially | Active research — Lite-CNN models show promise |

This is analogous to the evolution of forensic ballistics: early ballistics relied on visual matching of bullet deformations; modern ballistics uses 3D microscopic imaging and computerized pattern matching. GAN Signature Analysis is the forensic ballistics of the digital age.

8. Major Deepfake Cases in India: Why This Matters

8.1 The Rashmika Mandanna Case (2023)

An explicit deepfake video using the actress's face was created by superimposing it over British influencer Zara Patel's original video. Delhi Police traced the perpetrators and made arrests. The case prompted Rashmika's appointment as National Cyber Safety Ambassador by I4C. It was the watershed moment that forced India to take deepfakes seriously at the policy level.

8.2 The Sachin Tendulkar Investment Fraud

AI-generated video and audio of the cricket legend were used to promote a fraudulent online gaming platform, falsely claiming that his daughter earned ₹1.6 lakh per day from predictions. The video went viral before he could publicly deny it. The scale of reach demonstrated how deepfakes can cause financial harm at a national level before detection and takedown.

8.3 RBI Governor Shaktikanta Das

Deepfake videos of the Reserve Bank of India Governor were used to promote fake investment schemes. The RBI was forced to issue public cautionary statements. This case illustrated deepfakes' threat to financial system credibility — when central bank governors can be convincingly faked, public trust in institutions is directly undermined.

8.4 The 2024 Election Deepfake Campaigns

During India's 2024 general elections, synthetic audio and video emerged as tools of political manipulation across multiple states. A video showed a Congress MP making incendiary speeches in Haryanvi, a language he cannot speak. Earlier, the BJP had acknowledged using AI-generated videos of Manoj Tiwari during the Delhi elections. The Election Commission recognized that deepfake detection capacity would be essential for future election integrity.

8.5 The Bengaluru Senior Citizen Case (2025)

A 79-year-old woman lost ₹35 lakhs to a scam using deepfaked videos of Narayana Murthy. This case, reported in June 2025, demonstrated that deepfake targeting is increasingly moving beyond celebrities and politicians to ordinary citizens, particularly older and digitally less literate populations.



9. Legal Framework: Can GAN Signature Analysis Evidence Stand in Court?

9.1 Current Indian Legal Provisions

India currently addresses deepfake crimes through a patchwork of existing legislation rather than a dedicated deepfake law:

  • Section 66C (IT Act): Identity theft — applicable when a deepfake is used to impersonate another person.
  • Section 66D (IT Act): Cheating by personation using computer resources — directly applicable to video-call deepfake fraud.
  • Section 66E (IT Act): Violation of privacy — applicable to non-consensual explicit deepfake content.
  • Sections 67 and 67A (IT Act): Publishing or transmitting obscene or sexually explicit material electronically.
  • Bharatiya Nyaya Sanhita (BNS) 2023, Section 111: Covers organised cybercrimes, under which deepfake fraud rings can be prosecuted.

9.2 Judicial Recognition

In Re: AI-Generated Content and Social Media Regulation (2024), India's Supreme Court issued notices to the Union Government and MeitY on a PIL seeking judicial clarity on whether AI-generated misinformation influences criminal proceedings and elections. The Court acknowledged that unverified AI-generated videos could 'undermine public faith in the administration of justice and democratic discourse' — marking the judiciary's first formal recognition of deepfakes as a constitutional concern.

The Delhi High Court, in Anil Kapoor v. Simply Life India and Ors and the Ankur Warikoo v. John Doe cases (2024), applied existing law to celebrity deepfakes and established important precedents for personality rights protection from AI abuse.

9.3 The Evidentiary Challenge

For GAN Signature Analysis to be accepted as forensic evidence in Indian courts, several conditions must be met:

  • The methodology must be grounded in peer-reviewed scientific literature (Daubert / Frye-equivalent standard).
  • The detection system must have established error rates and validation on diverse datasets.
  • Expert witnesses must be qualified to explain the technical methodology to judges and juries in accessible terms.
  • Chain of custody for digital evidence must be maintained rigorously.
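The last of these conditions, chain of custody, is the most mechanical to implement: hashing the evidence at seizure lets every later handler prove the media was not altered. Here is a minimal sketch; the record schema and the case ID are invented for illustration, and real NCRB and court formats will differ.

```python
import hashlib
import json
from datetime import datetime, timezone

def custody_entry(evidence: bytes, case_id: str, handler: str) -> dict:
    """Record who took custody of the media, when, and its SHA-256
    digest, which fixes the exact bytes at the moment of seizure."""
    return {
        "case_id": case_id,
        "handler": handler,
        "utc_time": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(evidence).hexdigest(),
    }

def verify(evidence: bytes, entry: dict) -> bool:
    """Re-hash at each transfer; any single-bit change breaks the match."""
    return hashlib.sha256(evidence).hexdigest() == entry["sha256"]

video = b"\x00\x01binary frame data"              # stand-in for seized media
entry = custody_entry(video, "CC-2026-0042", "Inspector A")  # hypothetical IDs
print(json.dumps(entry, indent=2))

print(verify(video, entry))             # intact evidence verifies
print(verify(video + b"\x00", entry))   # one extra byte fails verification
```

Each subsequent handler appends a fresh entry with the same digest, producing a tamper-evident custody trail alongside the media itself.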

The Law Commission of India's Report on Digital Evidence and AI (2024) has recommended updating Section 65B of the Indian Evidence Act (now Bharatiya Sakshya Adhiniyam) to address AI-generated content specifically. This legal evolution, combined with the technical advancement of GAN Signature Analysis, is creating the conditions for deepfake forensics to achieve legal admissibility in Indian courts.

10. The Arms Race: Limitations and Challenges

No forensic methodology is without limitations, and GAN Signature Analysis is engaged in a continuous technical arms race:

10.1 Anti-Forensic Attacks

Researchers have demonstrated that GAN fingerprints can be partially suppressed using targeted frequency-domain manipulation techniques. Methods such as the Mean-Spectrum Attack, Frequency-Peaks Attack, and regression-weights manipulation can reduce GAN artifact detectability. Advanced criminals who are aware of forensic detection methods may apply these countermeasures.

10.2 Cross-Dataset Generalization

A detection model trained on Western-demographic GAN datasets (FaceForensics++, DFDC, Celeb-DF) may not generalize effectively to deepfakes of South Asian faces. This is a specific challenge for India: the NCRB-IIT collaboration's most critical contribution is building Indian-demographic training datasets that ensure detection accuracy across diverse ethnic and regional appearances.

10.3 Diffusion Model Deepfakes

The newest generation of AI-generated content uses diffusion models (DALL-E, Stable Diffusion, Midjourney) rather than GANs. Diffusion models generate different types of artifacts and may partially evade GAN-specific detection methods. Detection research is rapidly expanding to cover diffusion model signatures, but this represents an evolving challenge.

10.4 Social Media Compression

When deepfakes are shared via WhatsApp, Instagram, or Telegram, platforms apply lossy compression that can degrade or partially obscure GAN fingerprints. Robust detection systems must be trained to identify signatures that survive multiple rounds of platform compression — a specific technical challenge that the NCRB-IIT teams are working to address.
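The effect is easy to demonstrate with a JPEG-style transform: quantizing DCT coefficients with a step coarser than a faint fingerprint erases the fingerprint while leaving the visible scene intact. The synthetic scene, the fingerprint position (48, 48), and the quantization step below are all illustrative choices, not any platform's actual codec.

```python
import numpy as np

N = 64
n = np.arange(N)

# Orthonormal DCT-II matrix, the transform family behind JPEG-style codecs.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(img): return C @ img @ C.T
def idct2(coef): return C.T @ coef @ C

# Smooth "scene": a few strong low-frequency DCT components.
scene_coef = np.zeros((N, N))
scene_coef[0, 0], scene_coef[1, 2], scene_coef[3, 1] = 40.0, 5.0, -3.0
base = idct2(scene_coef)

# Plant a weak fingerprint: +0.4 on one high-frequency coefficient,
# a stand-in for a GAN artifact peak.
tagged_coef = dct2(base).copy()
tagged_coef[48, 48] += 0.4
tagged = idct2(tagged_coef)

def fingerprint(img):
    return float(dct2(img)[48, 48])

# Lossy re-encode: quantize every DCT coefficient with step 1.0, coarser
# than the 0.4 fingerprint, as platform compression is for faint detail.
recoded = idct2(np.round(dct2(tagged)))

print(fingerprint(tagged))   # ~0.4: fingerprint present
print(fingerprint(recoded))  # ~0.0: fingerprint erased by compression
```

Robust detectors therefore train on signatures strong or structured enough to survive such quantization, or operate on features that compression preserves.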

10.5 The Real-Time Gap

Current GAN Signature Analysis typically requires forensic-grade computational infrastructure. The goal of real-time, on-device detection for law enforcement field units remains a work in progress. A 2025 Springer paper demonstrated that lightweight CNN architectures (Lite-CNN) can achieve 95% detection accuracy suitable for mobile forensics deployment — a promising development for field application.

11. Global Context: What Other Countries Are Doing

  • United States: The FBI's Internet Crime Complaint Center (IC3) has formally categorized deepfake fraud as a major threat. Several states (Texas, California, Virginia) have criminalized deepfake pornography and election-related deepfakes. MIT and Stanford conduct deepfake detection research, and DARPA has funded the field through its Media Forensics (MediFor) program.
  • European Union: The EU AI Act (2024) mandates transparency labelling for AI-generated content and requires platform providers to implement technical detection capabilities. Europol has published threat assessments on deepfake use by organized crime.
  • United Kingdom: The Online Safety Act (2023) criminalizes sharing non-consensual intimate deepfakes. A Lords' Science Committee report warned that UK digital forensics is at 'breaking point' with over 20,000 devices backlogged — a cautionary tale for India's own infrastructure planning.
  • China: The Cyberspace Administration of China mandated in 2022 that all deepfake content must carry visible AI-generated labels. Detection infrastructure has been integrated into the national cybersecurity framework.

India's NCRB-IIT collaboration places it alongside the most technologically advanced national responses globally, with the added advantage of being specifically calibrated for Indian legal, linguistic, and demographic contexts.

12. Implications for Forensic Science Students and Practitioners

12.1 Career Opportunities in AI Forensics

GAN Signature Analysis represents a rapidly emerging specialization within digital forensics. The following roles are expected to grow significantly over the next decade:

  • AI Forensics Analyst: Specialist in analyzing GAN-generated content for law enforcement agencies, private investigation firms, and media organizations.
  • Deepfake Detection Engineer: Technical role designing and maintaining automated detection systems for cybercrime units, social media platforms, and financial institutions.
  • Expert Witness — AI Media Forensics: Providing court testimony on the authenticity of digital media evidence, requiring both technical expertise and legal communication skills.
  • Digital Evidence Specialist: Integrating AI forensics into broader digital crime investigation workflows.

12.2 Relevant Skills to Develop

Students preparing for careers at the intersection of forensics and AI should develop competency in:

  • Signal processing fundamentals: Fourier transforms, DCT, frequency domain analysis
  • Deep learning and neural network architectures (CNN, GAN, Transformer)
  • Digital image and video forensics methodologies
  • Indian legal framework: IT Act, BSA 2023, cybercrime procedures
  • Forensic report writing for judicial audiences
  • Evidence chain-of-custody protocols for digital media

12.3 Relevance to UGC-NET Forensic Science Examination

GAN Signature Analysis intersects multiple UGC-NET Forensic Science units:

  • Unit IV — Documents: Concepts of document forgery and verification extend naturally to digital forgery and AI-generated document detection.
  • Unit VII — Serology and Biology: Pattern recognition methodologies used in biological forensics parallel those used in GAN artifact detection.
  • Unit VIII — Forensic Physics: Spectroscopic and frequency analysis techniques used in physical forensics have direct analogues in frequency-domain GAN detection.
  • Unit XII — Questioned Documents and Computer Forensics: This unit directly encompasses digital evidence authentication, where deepfake detection is an emerging examination topic.

Conclusion: The Forensic Frontier

GAN Signature Analysis is not merely a technological curiosity — it is the forensic response to one of the most serious threats to truth, justice, and public trust in the digital age. Every deepfake, no matter how visually perfect, carries the invisible mathematical fingerprint of its AI creator. Forensic science's task is to find that fingerprint, interpret it, and present it as evidence in the pursuit of justice.

India's NCRB-IIT collaboration represents a nationally significant investment in forensic capability. As the systems mature, are validated, and achieve legal recognition through evolving judicial frameworks, GAN Signature Analysis will become as foundational to digital forensics as fingerprint analysis is to physical crime scenes.

For the next generation of forensic scientists, the message is clear: the crime scene of the future is digital, the evidence is invisible, and the tools to find it are being built today.

References & Cited Sources

  1. IJFMR (2025). Deepfake Evidence and the Indian Criminal Justice System. International Journal for Multidisciplinary Research. www.ijfmr.com
  2. Pi-Labs (2024). Digital Deception Epidemic: 2024 Report on Deepfake Fraud's Toll on India.
  3. PMC / NIH (2025). Unmasking Digital Deceptions: An Integrative Review of Deepfake Detection, Multimedia Forensics, and Cybersecurity Challenges. PubMed Central.
  4. MDPI Applied Sciences (2025). Advancing GAN Deepfake Detection: Mixed Datasets and Comprehensive Artifact Analysis. doi:10.3390/app15020923
  5. Springer Nature (2025). Enhancing Deepfake Detection with Adaptive-DCGAN and Lite-CNN. Discover Applied Sciences.
  6. Wiley (2025). A Review of Deepfake and Its Detection: From GANs to Diffusion Models. International Journal of Intelligent Systems.
  7. arXiv / WIFS (2019). Detecting and Simulating Artifacts in GAN Fake Images. Zhang, Karaman & Chang.
  8. Outlook Business (2025). Game of Shadows: India's Deepfake Dilemma. outlookbusiness.com
  9. PIB, Ministry of Home Affairs (2025). Rise of AI-Driven Cybercrime and Measures to Curb Financial Losses. pib.gov.in
  10. The Cyber Express (2025). India Sees Sharp Rise in Cybercrime, NCRB Data Reveals. thecyberexpress.com
  11. CERT-In Advisory on Deepfake Threats (November 2024). Government of India.
  12. Law Commission of India. Report on Digital Evidence and AI (2024).
  13. ACM TMCCA (2025). Spotting the Fakes: A Deep Dive into GAN-Generated Face Detection.
  14. ScienceDirect (2026). Adversarial and Generative AI-Based Anti-Forensics in Audio-Visual Deepfake Detection.
  15. Medianama (2025). India's Cybercrime Cases Surge by 500% Since 2021.

  Budding Forensic Expert  |  buddingforensicexpert.in

A community run by NFSU Alumni and Forensic Experts

© 2026 Budding Forensic Expert  |  All Rights Reserved


