By: Chaksham Kumar Das
Introduction
Technology keeps changing so fast that laws often can’t keep up. One big example of this is deepfake technology—where AI is used to create fake videos or audio that look and sound very real. While deepfakes can be fun or useful in movies, education, and creative arts, they’ve also created serious problems like fake news, online harassment, fraud, and political manipulation.
In India, where millions of people use the internet daily, deepfakes can spread extremely fast. A misleading video can go viral before anyone realizes it’s fake—and by then, it might have already harmed someone’s reputation, destroyed a relationship, torn apart public trust, or even caused chaos during an election. For instance, social media platforms have sometimes inadvertently amplified misleading or harmful deepfake content before fact-checkers could respond.
This raises a critical question: Are India’s current laws, especially the Information Technology (IT) Act, 2000, strong and specific enough to handle deepfakes? Could someone be punished for making or spreading a harmful deepfake under existing laws, or do we need brand-new regulations? In this comprehensive analysis, we’ll explore what deepfakes are, how they are misused, how India’s IT law handles them (or fails to), and what can be done to make things better.
What are Deepfakes and Why are They Dangerous?
Understanding Deepfake Technology
Deepfakes are most often created using a class of machine-learning models called Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other: a generator that produces synthetic images, video frames, or audio, and a discriminator that tries to tell those fakes apart from real samples drawn from large datasets. Each round of this contest makes the fakes more convincing, which is why detecting them through human perception alone has become extremely difficult—even experts can be fooled.
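To make the "adversarial" part concrete, here is a deliberately tiny sketch in plain Python (no ML libraries): a one-parameter generator learns, by trial and error, to emit values that a fixed discriminator scores as realistic. Everything here—the target value, the scoring rule, the update step—is an illustrative assumption, not how production image GANs work.

```python
import random

# Toy adversarial loop. The "generator" has a single learnable number;
# the "discriminator" scores how close a sample is to what "real" data
# looks like. Real GANs train both networks jointly on images or audio.

REAL_TARGET = 5.0  # stand-in for the distribution of real data

def discriminator(x):
    """Score in [0, 1]: closer to the real target -> higher score."""
    return max(0.0, 1.0 - abs(x - REAL_TARGET) / 5.0)

def train_generator(steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    g = 0.0  # generator's only parameter: the value it outputs
    for _ in range(steps):
        sample = g + rng.gauss(0, 0.1)  # generator output plus noise
        # Nudge the parameter in whichever direction fools the
        # discriminator more (a crude finite-difference update).
        if discriminator(sample + lr) > discriminator(sample):
            g += lr
        elif discriminator(sample - lr) > discriminator(sample):
            g -= lr
    return g

print(f"learned value: {train_generator():.1f}")  # converges near REAL_TARGET
```

The point of the sketch is the feedback loop: the generator improves precisely because the discriminator keeps grading it, which is also why each generation of deepfakes is harder to detect than the last.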
Real Dangers of Deepfake Technology
1. Reputation Damage
Imagine a fabricated video showing a student in a compromising position, or a business leader admitting to wrongdoing they never did. Such deepfakes can appear on WhatsApp or social media groups, and within hours, a person’s reputation may be destroyed—jobs lost, friendships strained, and trust broken. Victims often feel helpless watching the content spread faster than they can prove its falsehood.
2. Non-Consensual Pornography
Surveys show that most deepfake content online today involves fake adult videos, typically targeting women. Ordinary people—not public figures—can end up victims. These videos are especially destructive when viral; victims may suffer psychological trauma, social stigma, and even offline threats. Often, victims don’t even know the content exists until others bring it to their attention.
3. Political Misuse
During elections, a deepfake of a politician saying something inflammatory can spread widely on social feeds, making it hard for voters to know what’s real. This can distort public opinion. As we saw in recent global political campaigns, misinformation spreads aggressively, especially when attached to video – the most trusted medium for many.
4. Online Scams and Fraud
Scammers now use deepfakes to impersonate company executives or bankers in phone calls. Employees receive an audio clip supposedly from a CEO instructing them to transfer large sums for urgent deals. Since it sounds convincingly like the real voice, fraudsters can trick staff into wiring money, sometimes to accounts they can’t recover funds from. These scams have already cost companies millions and are only growing more widespread.
5. Social Unrest and Violence
Deepfakes can also target communities, inventing footage of religious leaders or activists calling for violence or hate, sparking anger and protest. In a context where tensions are already high, such videos could incite real-world violence. Imagine a video showing a public figure praising a violent attack – even if false, it can spark retaliation.
These uses demonstrate how dangerous deepfakes are, and yet, India doesn’t have any specific law against creating or spreading harmful synthetic media. That legal void leaves people vulnerable.
Does the IT Act, 2000 Cover Deepfakes?
Historical Context of the IT Act
The IT Act, 2000 was written at a time when the internet wasn’t nearly as advanced as it is today. It mainly deals with early cybercrime, data protection, digital signatures, and electronic transactions. Deepfakes didn’t exist back then, so the Act doesn’t mention them at all.
Relevant Sections That May Apply to Deepfakes
However, some sections may be used to punish wrongdoers depending on the context:
Section 66E – Privacy Violations
This section covers the violation of privacy by capturing or transmitting private images or video without consent. This could apply to deepfakes that show someone nude or in an intimate scenario without their permission.
Section 66D – Cheating Through Impersonation
This provision deals with cheating through impersonation using computer resources. If someone uses a deepfake to trick others—say, posing as a company CEO to authorize a bank transfer—this section may apply.
Sections 67 and 67A – Obscene Content
These sections ban electronic publication or transmission of obscene material (Section 67) and sexually explicit material (Section 67A). Both can reach deepfake pornography; Section 67A carries imprisonment of up to five years and a fine for a first conviction, rising to seven years for subsequent convictions.
Section 79 – Intermediary Liability
This section limits intermediary liability for platforms like YouTube or Facebook, but also forces them to act on harmful content once notified. They need to remove it quickly, or they could be held accountable if they don’t.
The Critical Gap in Current Legislation
The big problem is that none of these sections define or call out deepfakes specifically. Prosecutors and police have to stretch these laws in every case, and courts may struggle because the offenses don’t neatly fit current legal definitions. That legal uncertainty often discourages victims from even filing cases—if it’s unclear whether something is illegal, why go through the hassle?
Problems in Applying Current Law to Deepfakes
1. No Legal Definition of Deepfakes
Since deepfakes aren’t in the statutes, judges have no explicit legal language to cite. Police must frame charges under privacy law, cheating, obscenity, or defamation, depending on context—but those are imperfect fits and often hard to prove.
2. Technical Detection Challenges
Some deepfakes are so realistic that even video-forensics experts struggle. Most local police cybercrime units in India lack the specialized tools or staff needed to analyze AI-generated content. They rely on conventional methods—copies pulled from phones, or screenshots—so the AI manipulation itself often goes undetected.
3. Difficulty in Source Tracking
Deepfake creators often use VPNs, anonymization tools, fake social accounts, or offshore hosting. Tracking them across borders requires mutual legal assistance treaties (MLATs), which is a long, complicated process. Many perpetrators remain safe behind technical and legal barriers.
4. Platform Response Time Issues
Even though platforms are legally required to respond within 36 hours, in practice enforcement teams are overwhelmed. By the time content is removed, it may have been seen, shared, reposted, downloaded, or stored in private chats. Once a deepfake is in circulation, it can be nearly impossible to remove completely.
5. Jurisdictional Confusion
If a deepfake is hosted overseas but viewed in India, it may fall through legal gaps. No platform or law enforcement may take responsibility, and victims may not know which legal route to take—Indian law, foreign courts, or international cyber bodies.
Recent Legal Cases and Precedents
Landmark Cases Shaping Deepfake Law
Shreya Singhal v. Union of India (2015)
Though not about deepfakes, this landmark case struck down Section 66A of the IT Act for being vague. The ruling said any law limiting speech online must be clear and precise. That principle is important for future deepfake laws to avoid sweeping definitions that could inadvertently censor legitimate content.
Navsari Deepfake Case (2025)
A man from Gujarat was arrested after sharing a deepfake of Prime Minister Modi making false claims about a security operation. He was charged under Sections 66D (cheating) and 469 of the IPC (forgery). This is one of the first tests of how current law can handle politically sensitive deepfake content.
Venkatesh v. Union of India (2024)
A woman’s face was morphed into explicit content and circulated without her knowledge. Though she filed a public interest litigation demanding specific legislation, the Supreme Court noted the absence of direct laws on synthetic media. The case is still pending, underlining the legal vacuum.
International Influence on Indian Courts
While not Indian cases, rulings abroad help create judicial awareness. For instance, U.S. courts have ruled that a deepfake video used to defraud is a form of wire fraud, giving Indian courts persuasive context when interpreting Sections 66D and other fraud-related provisions.
What Can Victims Do Right Now?
Available Legal Remedies Under Current Law
Even though there’s no deepfake-specific law, victims still have several options under current legislation:
1. Platform Takedown Requests
File a complaint under Rule 3(2) of the 2021 Intermediary Guidelines. Victims or their legal representatives can ask platforms to remove the deepfake; for impersonation content, including artificially morphed images, the Rules require removal within 24 hours of the complaint.
2. Criminal Complaints
Register a police complaint under Sections 66E (privacy), 66D (impersonation), 67/67A (obscenity), or Section 469 IPC (forgery, if the deepfake is used in fake legal documents, for example).
3. Specialized Cyber Crime Units
Approach a dedicated cyber cell or cybercrime investigation branch, whose officers are more likely to have the technical training and tools such cases demand.
4. Civil Compensation Claims
Seek civil compensation under Section 43A of the IT Act if a corporate intermediary misused personal data—though this applies more to data breaches than deepfake creation.
5. Defamation Lawsuits
File a defamation case in civil court for reputational damage, seeking injunctions and monetary compensation.
These routes are available, but often slow, expensive, and reactive—meaning a person has to suffer the harm before legal relief can be sought.
International Approaches to Deepfake Regulation
United States
No unified federal law exists, but some states like California have banned deepfakes in political ads and non-consensual adult content. Laws require disclosure or prohibitions during elections. Federal proposals like the DEEPFAKES Accountability Act are still under review.
European Union
The Digital Services Act (2022) requires large online platforms to assess and mitigate risks from AI-generated or manipulated media, especially during elections. Under the GDPR, platforms must also ensure users' privacy isn't violated by the sharing of such content.
China
In 2023, China passed a regulation requiring deepfakes to carry visible watermarks and penalizing creators and platforms if malicious content is produced or shared without clear labeling.
Australia
Australia's proposed reforms include requiring platforms to publish transparency reports on synthetic media and to provide fast-response systems for victims.
Canada
Canada has introduced legal proposals to classify deepfakes under "misrepresentation offences," with stricter punishments for non-consensual intimate deepfakes and fraudulent political content.
By examining these international frameworks, India can learn what works—and what pitfalls to avoid, especially in balancing free speech with protection.
Comprehensive Recommendations for India
1. Create Dedicated Deepfake Legislation
Introduce a dedicated statute or amendment clearly defining “deepfakes” and “synthetic media.” The law should criminalize creating, distributing, or possessing harmful deepfakes, and set higher penalties for cases like non-consensual intimate content or political disinformation.
2. Upgrade Police Tools and Training
Establish regional cyber-forensic labs with AI analysis tools. Provide regular training to police officers and prosecutors about deepfake detection, using tools like metadata analysis and blockchain-backed verification.
3. Amend the IT Act
Insert sections on deepfakes, require intermediaries to deploy AI-based moderation, and create mechanisms for immediate takedown requests. Platforms should face penalties if they repeatedly fail to remove harmful content on time.
4. Establish Quick Relief Mechanisms for Victims
Set up a national digital portal—like a “digital ombudsman”—where victims can report deepfakes with evidence. Mandate takedowns within 24 hours of filing, or face liability. Victims should also have access to free legal aid and mental health counselling.
5. Strengthen International Cooperation
Since many deepfake creators operate from abroad, strengthen bilateral agreements focused on tech crimes, mutual legal assistance, and extradition. Collaborate with global tech companies to share threat intelligence and detection methods.
6. Launch Awareness and Education Campaigns
Run nationwide digital literacy campaigns to help people spot deepfakes. Integrate modules on synthetic media ethics into school curricula, and train journalists in AI content verification.
7. Support Research and Innovation
Provide government funding for academic and industry research on deepfake detection algorithms. Encourage public-private partnerships and open-source tools to identify AI-generated media.
The Path Forward: Balancing Innovation and Protection
Challenges in Implementation
Creating effective deepfake legislation requires careful balance between protecting citizens and preserving legitimate uses of AI technology. The law must be specific enough to address real harms while avoiding overly broad language that could stifle innovation or legitimate speech.
Role of Technology Companies
Social media platforms and technology companies must play a proactive role in detecting and removing harmful deepfake content. This includes investing in AI-powered detection systems and working with law enforcement agencies to track malicious actors.
Public-Private Partnership Model
The most effective approach likely involves collaboration between government agencies, technology companies, civil society organizations, and academic institutions. This multi-stakeholder approach can ensure comprehensive solutions that address technical, legal, and social aspects of the deepfake challenge.
Conclusion
Deepfakes aren’t just funny filters or cool tech—they can ruin lives, spread fake news, and undermine democratic processes. India’s IT Act does offer some protection, but it’s not nearly enough for the level of threat we’re facing now.
The current legal framework, while providing some avenues for recourse, lacks the specificity and comprehensiveness needed to effectively combat the deepfake menace. The absence of clear definitions, inadequate technical infrastructure, and jurisdictional challenges create significant gaps that malicious actors can exploit.
However, by learning from international best practices and implementing comprehensive reforms that include dedicated legislation, improved technical capabilities, victim support mechanisms, and public awareness campaigns, India can build a robust defense against AI-driven synthetic media threats.
The time to act is now. As deepfake technology becomes more sophisticated and accessible, the window for proactive legal and regulatory response is narrowing. With clear laws, better tools, international cooperation, and sustained commitment to protecting citizens’ rights in the digital age, India can set a strong example in safeguarding democracy and individual dignity against the misuse of artificial intelligence.
Frequently Asked Questions (FAQs)
What exactly are deepfakes and how are they created?
Deepfakes are AI-generated synthetic media in which a person appears to say or do things they never actually did. They are most commonly created using Generative Adversarial Networks (GANs), which are trained on large datasets of images, videos, or audio to produce convincing fake content that can be difficult to distinguish from authentic media.
Is creating or sharing deepfakes illegal in India?
Currently, there is no specific law in India that directly addresses deepfakes. However, depending on the content and intent, deepfakes may be prosecuted under various sections of the IT Act, 2000, such as Section 66E (privacy violations), Section 66D (cheating through impersonation), or Sections 67/67A (obscene content).
What should I do if I become a victim of harmful deepfake content?
If you’re a victim of harmful deepfake content, you can: file a takedown request with the platform under the 2021 Intermediary Guidelines, register a police complaint under relevant sections of the IT Act or IPC, approach specialized cybercrime units, seek civil compensation for damages, or file a defamation case in civil court.
How can I identify if a video or audio is a deepfake?
While deepfakes are becoming increasingly sophisticated, some telltale signs include: unnatural eye movements or blinking patterns, inconsistent lighting or shadows, audio that doesn’t match lip movements perfectly, blurred or distorted edges around the face, and unusual facial expressions or movements that seem out of character.
What penalties exist for creating malicious deepfakes under current Indian law?
Under current law, penalties vary depending on the section applied. For instance, Section 67A (sexually explicit content) provides for imprisonment up to five years and a fine up to ₹10 lakh on first conviction, rising to seven years for subsequent convictions. Section 66D (cheating by impersonation) carries up to three years' imprisonment and a fine up to ₹1 lakh.
How do social media platforms handle deepfake content in India?
Social media platforms are required under the 2021 Intermediary Guidelines to remove harmful content within 36 hours of being notified. However, enforcement varies, and many platforms are developing AI-powered detection systems to proactively identify and remove deepfake content.
What international best practices can India adopt for deepfake regulation?
India can learn from various international approaches, including the EU’s Digital Services Act requiring labeling of AI-generated content, China’s watermarking requirements for deepfakes, California’s specific bans on deepfakes in political contexts, and Canada’s classification of deepfakes under misrepresentation offenses.
Are there any legitimate uses of deepfake technology?
Yes, deepfake technology has several legitimate applications including film and entertainment production, educational content creation, language dubbing, historical recreations, and therapeutic applications. Any regulation must balance protection from harm with preserving these beneficial uses.
What role do cybercrime cells play in investigating deepfake cases?
Cybercrime cells are specialized units within police departments that handle technology-related crimes. They have better technical expertise and resources to investigate deepfake cases, including digital forensics capabilities and knowledge of relevant legal provisions.
How can educational institutions help combat the deepfake problem?
Educational institutions can integrate digital literacy programs that teach students to identify synthetic media, include ethics modules about responsible AI use in curricula, conduct awareness campaigns about the dangers of deepfakes, and support research into detection technologies.
What is the timeline for implementing specific deepfake legislation in India?
While there’s no official timeline, given the urgency of the issue and recent cases highlighting legal gaps, experts suggest that comprehensive deepfake legislation could be introduced within the next 1-2 years, possibly as amendments to the existing IT Act or as standalone legislation.
How effective are current AI detection tools for identifying deepfakes?
Current AI detection tools have varying degrees of effectiveness, with some achieving high accuracy rates in controlled conditions. However, as deepfake technology evolves, detection tools must continuously improve. The most effective approach combines automated detection with human verification and cross-referencing with original sources.
Can I sue someone for making a deepfake of me without consent?
Yes, you can file both criminal and civil cases. For criminal action, you can file under relevant sections of the IT Act or IPC depending on the nature of the deepfake. For civil remedies, you can sue for defamation, seek injunctive relief to stop distribution, and claim monetary damages for harm to reputation, mental distress, and financial losses.
What should employers do if their company is targeted by deepfake fraud?
Companies should immediately report the incident to cybercrime authorities, document all evidence including the fraudulent content and any financial transactions, notify their bank and payment processors, implement additional verification protocols for financial transactions, conduct employee training on deepfake awareness, and consider cyber insurance claims if covered.
Are deepfake apps like FaceSwap or Reface legal to use in India?
Using deepfake apps for personal entertainment or legitimate creative purposes is generally legal. However, creating content that violates privacy, depicts someone in compromising situations without consent, or is used for fraud becomes illegal under various provisions of the IT Act and IPC, regardless of the app used.
How do courts determine if synthetic media evidence is admissible?
Indian courts evaluate synthetic media evidence based on authenticity, chain of custody, expert testimony on creation methods, metadata analysis, and relevance to the case. The electronic-evidence provisions of the Evidence Act apply—including the Section 65B certification requirement for electronic records—and courts may require technical expert testimony to establish whether content is genuine or artificially created.
What happens to deepfake content shared in private WhatsApp groups?
Sharing deepfakes in private groups doesn’t provide legal immunity. If the content violates privacy, contains obscene material, or causes defamation, creators and sharers can still face legal action. WhatsApp may be required to provide user data and message details to law enforcement upon proper legal requests.
Can politicians use deepfakes for campaign purposes in India?
Currently, there are no specific restrictions on using deepfakes in political campaigns in India, unlike some other countries. However, if such content spreads false information, violates election commission guidelines, or defames opponents, it could face action under existing laws and election regulations.
What is the difference between morphing and deepfakes in legal terms?
While both involve manipulating images or videos, deepfakes use AI to create entirely synthetic content, whereas morphing typically involves combining or altering existing images. Legally, both can be prosecuted under similar sections depending on intent and harm caused, though deepfakes often involve more sophisticated technology and may be harder to detect.
How long does it take to remove deepfake content from social media platforms?
Under Indian law, platforms have 36 hours to remove content after being notified. However, actual removal times vary by platform and content type. Emergency requests for particularly harmful content may be processed faster, while complex cases requiring human review may take longer.
Can deepfakes be used as evidence in Indian courts?
A deepfake cannot be relied on as proof of what it depicts, since it is artificially created content. However, the deepfake itself may be presented as an exhibit—for example, to prove defamation or harassment. Courts require proper authentication and expert testimony to distinguish between genuine and synthetic media when evaluating evidence.
What are the psychological impacts of deepfake victimization?
Victims often experience severe psychological trauma including anxiety, depression, social isolation, loss of trust, damaged relationships, and in extreme cases, suicidal thoughts. The persistent nature of digital content and potential for re-sharing makes recovery particularly challenging, highlighting the need for comprehensive victim support services.
How can journalists verify if news content contains deepfakes?
Journalists should use multiple verification methods including reverse image/video searches, metadata analysis, cross-referencing with original sources, consulting technical experts, using AI detection tools, and maintaining verification databases. News organizations are increasingly investing in deepfake detection technologies and training.
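One building block behind reverse image/video search is perceptual hashing: two visually similar frames should produce nearly identical fingerprints, while unrelated or substituted frames land far apart. Below is a minimal average-hash sketch in plain Python—the 8×8 "frames" are just flat lists of brightness values, an illustrative assumption; real tools decode and downscale actual images first.

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale image (list of 64 ints).
    Each bit is 1 if the pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means visually similar frames."""
    return bin(h1 ^ h2).count("1")

# A nearly identical frame (one pixel nudged) hashes almost the same,
# while an unrelated frame is far away in Hamming distance.
original = [10] * 32 + [200] * 32   # dark top half, bright bottom half
tampered = list(original)
tampered[0] = 15                    # tiny change: stays below the mean
unrelated = [200] * 32 + [10] * 32  # inverted layout

assert hamming_distance(average_hash(original), average_hash(tampered)) <= 2
assert hamming_distance(average_hash(original), average_hash(unrelated)) >= 32
```

In real verification pipelines, such hashes are compared against databases of known-original footage; a large distance flags frames that have been re-edited or substituted and warrant closer forensic review.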
What role do internet service providers play in deepfake regulation?
ISPs can be asked to block access to websites hosting malicious deepfake content. They may also be required to preserve traffic data for investigations and cooperate with law enforcement in tracking the source of harmful content. However, their liability is generally limited under intermediary safe harbor provisions.
Can deepfakes violate intellectual property rights in India?
Yes, deepfakes can violate personality rights, copyright in original images/videos used for training, trademark rights if used for commercial purposes without permission, and publicity rights of celebrities or public figures. Remedies may include injunctions and damages under relevant IP laws.
What is consent in the context of deepfake creation?
Legal consent for deepfake creation requires clear, informed, and voluntary agreement from the person whose likeness is being used. This consent should specify the purpose, duration, and scope of use. Importantly, consent can be withdrawn, and using someone’s likeness without proper consent may violate privacy and personality rights.
How do deepfakes affect divorce and family court proceedings?
Deepfakes can complicate family court cases by making it difficult to verify authentic evidence of misconduct or behavior. Courts may require additional technical verification for video/audio evidence. False deepfake accusations can also constitute harassment or defamation, affecting custody and settlement decisions.
What international cooperation exists for cross-border deepfake crimes?
India participates in various international frameworks including mutual legal assistance treaties (MLATs), Interpol cooperation, and bilateral agreements for cybercrime investigation. However, enforcement remains challenging due to jurisdictional complexities and varying legal standards across countries.
Are there any insurance policies that cover deepfake-related damages?
Some cyber insurance policies are beginning to include coverage for deepfake-related damages, including reputation management costs, legal expenses, and business interruption losses. However, coverage varies significantly, and many policies still exclude or limit AI-related claims, making it important to review policy terms carefully.
How can parents protect their children from deepfake threats?
Parents should educate children about digital literacy and deepfake risks, monitor their online activities and social media presence, teach them not to share personal photos publicly, encourage reporting of suspicious content, consider privacy settings on all platforms, and maintain open communication about online safety concerns.
What is the role of blockchain technology in preventing deepfakes?
Blockchain can help create tamper-proof records of authentic content through digital timestamping and provenance tracking. This technology can verify the original source and creation date of media, making it easier to identify manipulated content. However, implementation challenges and cost considerations currently limit widespread adoption.
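The hashing idea behind such provenance records can be sketched with Python's standard hashlib. The `ProvenanceChain` class and its method names are illustrative assumptions, not a real blockchain API; a production system would add timestamps, signatures, and distributed storage.

```python
import hashlib

def fingerprint(media_bytes):
    """SHA-256 digest of a media file's raw bytes: any re-encoding or
    pixel-level tampering changes the fingerprint completely."""
    return hashlib.sha256(media_bytes).hexdigest()

class ProvenanceChain:
    """Append-only ledger of media fingerprints. Each entry commits to the
    previous one, so back-dating or editing a record is detectable."""

    def __init__(self):
        self.entries = []  # list of (fingerprint, chain_hash) tuples

    def register(self, media_bytes):
        fp = fingerprint(media_bytes)
        prev = self.entries[-1][1] if self.entries else "genesis"
        # Chain hash binds this entry to the entire history before it.
        chain_hash = hashlib.sha256((prev + fp).encode()).hexdigest()
        self.entries.append((fp, chain_hash))
        return fp

    def is_registered(self, media_bytes):
        """True only if these exact bytes were registered earlier."""
        fp = fingerprint(media_bytes)
        return any(entry[0] == fp for entry in self.entries)

chain = ProvenanceChain()
chain.register(b"original newsroom footage")
print(chain.is_registered(b"original newsroom footage"))  # True
print(chain.is_registered(b"manipulated copy"))           # False
```

The design choice worth noting is that the ledger never stores the media itself—only fingerprints—so authenticity can be checked without republishing sensitive content.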
Can artificial intelligence be used to detect deepfakes?
Yes, AI-powered detection systems analyze various factors including facial inconsistencies, temporal artifacts, pixel-level anomalies, and behavioral patterns to identify synthetic content. However, this creates an “arms race” where detection AI improves alongside creation AI, requiring continuous technological advancement.
What are the economic impacts of deepfake technology on businesses?
Businesses face risks including financial fraud through executive impersonation, brand reputation damage, stock manipulation through false content, cybersecurity costs for detection systems, legal expenses for litigation, and potential loss of consumer trust. The global economic impact is estimated to reach billions annually.
How do deepfakes impact freedom of speech and expression?
Deepfake regulation must balance protecting individuals from harm while preserving legitimate free speech rights. Overly broad laws could censor satire, artistic expression, or political commentary. Courts must carefully evaluate whether restrictions are proportionate and whether less restrictive alternatives exist.