
School of Social Science and Interdisciplinary Studies

Rajiv Gandhi National University of Law


The Algorithmic Gaze: Deepfakes as Digital Gender-Based Violence in India

  • Devansh Saxena and Shreya Srivastava
  • Feb 6
  • 6 min read

Updated: Feb 7

Introduction

Artificial intelligence has moved beyond mere speculation and is now deeply ingrained in modern governance and social interaction. Machine-learning systems currently underlie the operation of financial services, healthcare diagnostics, education, and digital communication. Yet alongside efficiency and innovation, those very tools have brought about new ways of doing harm. Among the most contentious of these developments is the rise of “deepfakes”: AI-generated synthetic audio-visual content that fabricates hyper-realistic depictions of real persons.[1]


Such technology can be employed legitimately, for example in cinema or satire, but its most widespread misuse is the creation of non-consensual sexually explicit content, predominantly targeting women.[2] Women whose faces have been superimposed onto pornographic videos shared publicly online have given accounts of resulting threats, extortion, and public humiliation.[3] In India, where internet penetration and access to smartphones have grown spectacularly, such occurrences are becoming ever more visible.


This pattern is not accidental. Deepfakes overwhelmingly target women and reproduce older forms of control over women’s bodies and sexuality. In this sense, the technology does not create new inequalities; it modernizes existing patriarchal structures in digital form. Despite a clear trend of such incidents in the real world and numerous reports in the news, the legal response remains inattentive and fragmented. Laws already on the books typically treat such acts as obscenity, defamation, or minor cyber offences rather than as serious violations of dignity and autonomy. Non-consensual deepfakes are not just another routine internet crime but a form of gender-based violence. Viewed through a constitutional lens of privacy, dignity, and equality, the inadequacy of India’s legal framework becomes evident: the existing laws, while appearing protective on paper, are structurally unfit to deal with new forms of AI-generated abuse.


Deepfakes as Gendered Harm in the Digital Age

Deepfake technology is built mainly on generative adversarial networks (GANs), which enable artificial systems to create lifelike images by training on huge datasets.[4] Producing such content once required considerable technical expertise, but free applications now allow it with a few clicks. This availability has fuelled a disconcerting cycle: most deepfake videos online are pornographic and use the faces of women. Research suggests that most deepfake pornography depicts women and is created without their knowledge or consent.[5]
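For readers unfamiliar with the mechanics, the adversarial idea behind GANs can be illustrated with a toy one-dimensional example: a “generator” learns to mimic a target distribution by trying to fool a “discriminator” that is simultaneously trained to tell real samples from generated ones. The sketch below is purely illustrative and is nothing like an actual image-synthesis system; every parameter and constant in it is an assumption chosen only for demonstration.

```python
import numpy as np

# Toy illustration of adversarial training (Goodfellow et al. 2014) in 1-D.
# "Real" data are samples from a Gaussian centred at 4.0; the generator must
# learn to produce samples that look like them. All values are illustrative.
rng = np.random.default_rng(0)

def real_samples(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: noise z -> a*z + b
w, c = 0.1, 0.0   # discriminator: logistic classifier sigmoid(w*x + c)
lr = 0.01

for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    real = real_samples(64)

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_fake = (1 - d_fake) * w          # d/dfake of log D(fake)
    a += lr * np.mean(grad_fake * z)      # chain rule through fake = a*z + b
    b += lr * np.mean(grad_fake)

# After training, generated samples should have drifted toward the real data.
gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print("generated mean after training:", round(gen_mean, 2))
```

The same two-player dynamic, scaled up to deep convolutional networks trained on millions of face images, is what makes fabricated video frames so difficult to distinguish from genuine ones.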

Such a phenomenon should not be trivialized as mere media alteration. The harm is comparable to that of traditional sexual assault. Those victimized suffer loss of reputation, mental health problems, social alienation, and diminished earning potential.[6] Using someone else's image for sexual purposes without their agreement violates personal identity and bodily integrity. Although the picture is fabricated, the victim's shame and the stigma attached to her are genuine.


Indian constitutional law offers a sensible way to comprehend this harm. In Justice K.S. Puttaswamy v Union of India, the Supreme Court recognized privacy as encompassing informational self-determination, dignity, and autonomy.[7] A person’s right to decide how their image and identity will be used is an essential part of this right. Deepfakes breach that right completely, taking one’s image and placing it in a sexual context without consent. The harm therefore goes beyond defamation or obscenity and strikes at constitutional rights themselves.


Viewed from a gender-justice angle, the technology also replicates existing power hierarchies. Feminist scholars have long argued that the internet is another place where real-world inequalities are reproduced.[8] Deepfakes worsen the situation by allowing unlimited sexual targeting with very little effort. Women’s freedom to express their opinions in society is curtailed by the constant fear that such pictures may be fabricated and used against them. The harm is therefore not only individual but structural: it reflects how technology is repeatedly used to police women’s autonomy and silence their participation in public life, reinforcing patterns that feminist scholarship has long identified as patriarchal control. In this sense, deepfakes operate as an “algorithmic gaze”: a technological form of looking that objectifies and controls women’s bodies at scale.


The Fragmented Nature of Indian Legal Protection

India does not yet have a specific law dedicated to tackling the misuse of synthetic media, despite its grave consequences. Victims must depend on scattered provisions of various laws, none of which were drafted with AI-generated harm in mind. The Bharatiya Nyaya Sanhita (BNS), 2023, which replaced the IPC, lays down the current criminal law framework. Sections 79 (insulting the modesty of a woman) and 75 (sexual harassment) are the primary provisions, but they fail to fully capture the harm resulting from the algorithmic nature of the act.[9]


In addition, section 66E of the Information Technology Act, 2000 criminalizes the invasion of privacy, but its scope is limited to the “capturing” of real images, leaving a doctrinal gap for completely synthetic creations.[10] The Digital Personal Data Protection Act, 2023 has introduced a consent-based system for the use of personal data, but its application to deepfakes remains ambiguous.[11]


The result is a patchwork: formal protection exists on paper, but the harm itself is not conceptually recognized. Victims are left to navigate multiple legal routes that are imperfect and procedurally cumbersome. The absence of targeted regulation reflects a broader lag between technological change and legislative imagination.


Structural and Institutional Constraints

Beyond these doctrinal weaknesses, practical problems significantly hamper enforcement. The distribution of online content is instantaneous, but judicial measures are slow. By the time a complaint is filed and adjudicated, the material may already have been copied across numerous platforms. The intermediary liability provisions under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 concentrate on post-facto takedown rather than pre-emptive action, leaving the response reactive.[12]


Law enforcement departments also have limited resources. The Bharatiya Sakshya Adhiniyam (BSA), 2023 requires strict certification of electronic records under section 63, which makes it difficult to authenticate manipulated media evidence in court.[13] Investigating AI-generated alterations requires specialized forensic tools and technical know-how, which remain scarce. Content hosted in other countries further complicates jurisdiction and requires mutual legal assistance procedures. For many victims, such a process proves more exhausting than the damage itself. These structural limitations reveal a contradiction at the core of India’s technology policy: the government, through the IndiaAI Mission, actively encourages AI innovation, yet regulatory protective measures for users are still in their infancy.[14]


Conclusion

Deepfakes illustrate how emerging technologies can reproduce old forms of inequality in new digital formats. What appears to be a sophisticated algorithm ultimately becomes another mechanism through which women’s autonomy is undermined. The harm is not merely virtual; it is deeply personal, constitutional, and social. Indian law, anchored in pre-AI assumptions even within its newest codes, has yet to fully confront this reality. Existing provisions treat the issue as obscenity or defamation, failing to recognize it as gender-based violence. Until the legal system adapts conceptually and doctrinally, victims will continue to encounter remedies that are formally available but substantively inadequate. Recognizing deepfakes as violations of constitutional autonomy is the first step toward meaningful protection in the digital age. Unless these harms are recognized as part of a broader pattern of gender-based violence, deepfakes will remain another digital tool through which patriarchy adapts and survives.


This blog has been authored by Devansh Saxena and Shreya Srivastava, students at the University of Allahabad.


REFERENCES

[1] Telecommunication Engineering Centre, Artificial Intelligence (AI) Policies in India – A Status Paper (Government of India 2020).

[2] Robert Chesney and Danielle Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security’ (2019) 107 California Law Review 1753.

[3] Henry Ajder and others, The State of Deepfakes (Deeptrace 2019).

[4] Ian Goodfellow and others, ‘Generative Adversarial Nets’ (2014) Advances in Neural Information Processing Systems.

[5] Ajder (n 3).

[6] Danielle Keats Citron, Hate Crimes in Cyberspace (Harvard University Press 2014).

[7] Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.

[8] Danielle Keats Citron, ‘Cyber Civil Rights’ (2009) 89 Boston University Law Review 61.

[9] Bharatiya Nyaya Sanhita 2023, ss 75, 79.

[10] Information Technology Act 2000, s 66E.

[11] Digital Personal Data Protection Act 2023.

[12] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021.

[13] Bharatiya Sakshya Adhiniyam 2023, s 63.

[14] NITI Aayog, National Strategy for Artificial Intelligence - #AIforAll (2018).


 


 
 
 
