In the wake of rising cases of deepfakes, a division bench of the Delhi High Court, comprising Chief Justice Manmohan and Justice Tushar Rao Gedela, heard two writ petitions on the issue, filed by Rajat Sharma, Editor-in-Chief of India TV, and Advocate Chaitanya Rohilla. During the hearing, the Ministry of Electronics and Information Technology (MeitY) apprised the Court that it would soon constitute a dedicated committee to give suggestions on the regulation of deepfake technology. The High Court then asked the committee to give recommendations on the creation, detection and removal of deepfakes.
The counsel for the petitioners stated that each day’s delay in the creation, detection and removal of deepfakes is causing immense hardship to the public at large. “Deepfake” is a portmanteau of “deep learning” and “fake”: deepfakes are images, videos or audio edited or generated using artificial intelligence (AI) tools, and may depict real or non-existent people. They are a type of synthetic media and a modern form of social media scam or malicious prank.
The bench directed the Union of India to name the nominees on the committee relating to the issue of deepfakes within a week. It also asked the committee to examine and take into consideration the suggestions filed by the petitioners and consider the regulations as well as statutory framework in foreign countries like the European Union.
The bench further directed the committee to invite and hear the experiences and suggestions of a few of the stakeholders like the intermediary platforms, telecommunication service providers, victims of deepfakes and websites which provide and deploy deepfakes, before submitting its report. The Court asked the committee to submit its report preferably within three months and listed the matter for hearing in March 2025.
In a status report filed before the High Court, MeitY said that a sub-committee was formed through an office memorandum issued on November 20, comprising one member each from the Emerging Technologies division, the Cyber Security division and the Cyber Law division of MeitY to specifically deal with the issue of deepfakes.
The committee would further comprise one representative each from the Indian Cyber Crime Coordination Centre (I4C), which comes under the Ministry of Home Affairs; the Centre for Development of Advanced Computing (C-DAC), Hyderabad; and the Data Security Council of India. It would also consist of one professor from the Indian Institute of Technology, Madras, and one legal representative.
While perusing the status report, the High Court noted that it did not contain the names of committee members. Appearing for the Union government, Additional Solicitor General Chetan Sharma said that the process of nominating names would be expedited. He clarified that a sub-committee had previously been formed in March 2023 to give suggestions on regulating AI technology, and that it had already submitted its report. However, the newly-formed committee would specifically deal with the issue of deepfakes.
Appearing for petitioner Rajat Sharma, Advocate Wadhwa contended that social media intermediaries are the primary stakeholders in regulating deepfakes. The Court then directed the committee to hear the suggestions of intermediaries such as Facebook and X (formerly Twitter), before filing its report. Wadhwa further submitted that the rules on the removal of content based on user requests currently mandate that prohibited content be removed within 72 hours. He pleaded for a shorter compliance period. The High Court asked the petitioners what would happen if a user falsely reported material as a deepfake. Wadhwa replied that the balance of convenience needed to be in favour of the complainant.
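The 72-hour window described above is, operationally, a simple compliance deadline that runs from receipt of a complaint. A minimal sketch of that calculation (the 72-hour figure is taken from the submission quoted above; the function name and example timestamp are illustrative, not part of any rule):

```python
from datetime import datetime, timedelta

# 72-hour removal window, as cited in the counsel's submission above
REMOVAL_WINDOW = timedelta(hours=72)

def removal_deadline(complaint_time: datetime) -> datetime:
    """Latest time by which flagged content must be taken down,
    assuming the clock starts when the complaint is received."""
    return complaint_time + REMOVAL_WINDOW

# Example: a complaint received on the date of the status report
complaint = datetime(2024, 11, 21, 10, 0)
print(removal_deadline(complaint))  # 2024-11-24 10:00:00
```

A shorter compliance period, as pleaded for by the petitioner, would simply shrink `REMOVAL_WINDOW`.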
Rohilla’s petition sought directions to the government to identify and block websites providing access to deepfake technology, issue dynamic injunctions, lay down guidelines for AI regulation, ensure fair implementation of AI, and issue guidelines for AI and deepfake access in strict accordance with fundamental rights. The plea further raised concerns about privacy violations and economic and emotional damage due to deepfakes. In compliance with the directions issued by the Court on October 24, 2024, the counsel for MeitY had handed over a status report dated November 21, 2024.
MeitY is funding a research project titled “Fake Speech Detection using Deep Learning Framework”, which commenced in December 2021 and is scheduled to conclude in December 2024. The project has a financial outlay of Rs 47,846 lakh and aims to deliver the following outcomes:
(i) Deepfake speech detection system capable of classifying given speech signals as either genuine or fake.
(ii) Software system accessible via a web interface for deepfake speech detection.
(iii) Speaker verification software platform designed for testing any fake speech detection system.
(iv) Comprehensive software system for the detection of deepfake speech.
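Outcome (i) above amounts to a binary classifier over speech signals. A minimal sketch of such an interface, with a placeholder heuristic standing in for a trained deep-learning model (the names, threshold and scoring are illustrative assumptions, not details of the MeitY project):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectionResult:
    label: str         # "genuine" or "fake"
    confidence: float  # score in [0.0, 1.0]

def classify_speech(samples: List[float]) -> DetectionResult:
    """Classify a speech signal as genuine or fake.

    A real system would extract spectral features and run a trained
    deep-learning model; here a trivial signal-energy heuristic stands
    in purely to illustrate the interface shape."""
    if not samples:
        return DetectionResult("fake", 0.0)
    energy = sum(s * s for s in samples) / len(samples)
    label = "genuine" if energy > 0.01 else "fake"
    return DetectionResult(label, min(1.0, energy * 10))

result = classify_speech([0.2, -0.3, 0.25, -0.1])
print(result.label)  # genuine
```

Outcome (ii), the web interface, would wrap such a function behind an upload endpoint; outcome (iii) would feed known speakers' recordings through it to benchmark detection systems.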
A project titled “Design and Development of Software System for Detecting and Flagging Deepfake Videos and Images” was funded by MeitY under its R&D in IT initiative for the period January 25, 2022 to March 31, 2024. The implementing agencies for the project are C-DAC, Hyderabad, and C-DAC, Kolkata.
The primary objectives of the project are to develop a system capable of detecting and flagging AI-synthesized fake images and videos, including those with fabricated voices, as fake or doctored and to create a web portal facilitating the crowd-sourcing of deepfake media (images, videos, and audio) and enabling users to flag such content as either fake or authentic.
Under the project, the C-DAC has developed a software application for deepfake detection. The prototype tool has been designed as both a web portal and a desktop application. The web-based tool is currently in the testing phase and provides limited access.
Additionally, the desktop application, titled “FakeCheck”, offers functionalities similar to the web portal and is specifically developed for users requiring deepfake detection capabilities in offline environments, without the need for an internet connection. The tool has been shared with select law enforcement agencies for testing and to obtain feedback for further refinement. An Advisory Group on AI, tasked with recommending an India-specific regulatory framework for AI, was constituted on September 25, 2023. The Advisory Group has submitted its report, which is under examination. It was submitted that MeitY is actively undertaking measures to address and mitigate issues related to deepfake technology in a timely and effective manner.
While the act of creating fake content is not new, deepfakes uniquely leverage the technological tools and techniques of machine learning and AI, including facial recognition algorithms and artificial neural networks such as variational auto-encoders and generative adversarial networks. In turn, the field of image forensics develops techniques to detect manipulated images.
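One elementary image-forensics technique of the kind referred to above is noise-residual analysis: regions pasted or synthesized into an image often carry noise statistics that differ from the untouched background. A self-contained toy sketch on synthetic data (pure Python; the region sizes, seed and variance comparison are illustrative assumptions, not any particular tool’s method):

```python
import random

def noise_residual(img):
    """High-pass residual: each interior pixel minus the mean of its
    four neighbours. Spliced or generated regions often show residual
    statistics that differ from the untouched background."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            mean = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]) / 4.0
            out[y][x] = img[y][x] - mean
    return out

def residual_variance(res, y0, y1, x0, x1):
    """Variance of the residual over a rectangular region."""
    vals = [res[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Synthetic demo: a flat background with a noisy "pasted" patch
random.seed(0)
img = [[100.0] * 32 for _ in range(32)]
for y in range(8, 16):
    for x in range(8, 16):
        img[y][x] = 100.0 + random.uniform(-20.0, 20.0)

res = noise_residual(img)
patch_var = residual_variance(res, 9, 15, 9, 15)
background_var = residual_variance(res, 20, 30, 20, 30)
print(patch_var > background_var)  # True: the pasted patch stands out
```

Production detectors trained against GAN- or autoencoder-generated media use learned features rather than a hand-built residual, but the underlying idea of hunting for statistical inconsistencies is the same.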
Academics have raised concerns about the potential of deepfakes to promote disinformation and hate speech and interfere with elections. From traditional entertainment to gaming, deepfake technology has evolved to become increasingly convincing and is available to the public, thus allowing for the disruption of the entertainment and media industries.
The deployment of deepfakes with the intention of stealing an individual’s identity, fabricating fictitious portrayals of people, or manipulating public sentiment can harm an individual’s reputation, as well as propagate disinformation. Sections 420 and 468 of the Indian Penal Code, 1860, could potentially be invoked in this context. These sections provide for punishment in the form of imprisonment and fine in cases of cheating and forgery.
Section 66E of the Information Technology (IT) Act, 2000, applies to deepfake offences involving the capturing, dissemination or transmission of an individual’s images in mass media, thereby infringing upon his/her right to privacy. This offence carries a potential penalty of imprisonment for up to three years or a fine of up to Rs 2 lakh.
Another pertinent provision within the IT Act is Section 66D, which provides the legal framework to prosecute individuals who use communication devices or computer resources with malicious intent to deceive or impersonate another person, exposing them to imprisonment of up to three years and/or a fine of up to Rs 1 lakh. These sections of the IT Act can be invoked to hold accountable those implicated in deepfake cybercrimes within India.
The use of deepfakes to disseminate false information, subvert the government, or incite hatred and disaffection towards the government is a matter of utmost concern and has the potential to bring about extensive consequences for society at large. The propagation of inaccurate or misleading information can create confusion and undermine public trust. It can also be used to manipulate public sentiment and influence political outcomes.
These crimes can be prosecuted under Section 66F (cyber terrorism) of the IT Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022. The propagation of hate speech and online defamation through deepfakes can cause serious harm to both individuals and society collectively.
—By Adarsh Kumar and India Legal Bureau