April 30, 2024

Unmasking Deepfakes: Understanding, Identifying, and Mitigating Synthetic Content

By Anchal Kanthed (4th Year Student, Institute of Law, Nirma University, Ahmedabad)

Image Source: https://www.businesstoday.in/technology/news/story/worried-about-your-images-videos-being-used-as-a-deepfake-heres-how-to-keep-media-safe-405132-2023-11-09

Introduction to Deepfake Technology

In the contemporary era, Artificial Intelligence and its misuse are buzzwords. It is therefore important to understand their legal and ethical consequences so that users are aware of the issues such technology raises. The current trending topic in artificial intelligence is 'deepfake' technology. The term combines two words, 'deep' and 'fake': 'deep' refers to deep learning, a form of machine learning, and 'fake' refers to the synthesized or fabricated data presented to users. According to a data analysis reported by Reuters, more than 5,00,000 deepfake voice and video messages were shared across the world in 2023.[1] The questions that arise are: what is this deepfake technology, and what are its legal consequences?

Deepfake technology is built on machine learning algorithms and has the potential to transform the entertainment and education industries. It generates new, synthetic data from already available original data, and the new data closely resembles the original. It works on a system of neural networks comprising a 'generator' and a 'discriminator': the generator is entrusted with creating the fake data, and the discriminator is entrusted with comparing the synthetic data so created with the original data. Two techniques underpin deepfake technology: deep learning and generative adversarial networks (GANs). Deep learning is a branch of machine learning based on interconnected neural networks that pass information from one layer to the next, enabling the machine to perform a task by itself.[2]

[Figure: how deep learning works]

GANs are networks that create false images and videos through the combined functioning of the generator and the discriminator (a minimal illustrative sketch of this generator/discriminator loop appears at the end of this section).

[Figure: the working of GANs]

The prime examples of deepfake technology are attribute editing, face swaps in photos and videos, face re-enactment (changing facial expressions), and generating material that is entirely made up.

Recent Examples

Numerous examples of the misuse of deepfake technology have made headlines. Some of them are listed below:

- Ahead of the Delhi elections, a deepfake video of Delhi BJP president Manoj Tiwari criticising the AAP government of Arvind Kejriwal went viral on WhatsApp.[3]
- The deepfake video involving journalist Rana Ayyub highlights the need to regulate laws relating to revenge pornography.[4]
- A deepfake depicting the arrest of former US President Donald Trump went viral.[5]
- A deepfake image of Pope Francis wearing a puffer jacket spread widely online.[6]
- A short clip of what appears to be the prominent Indian actress Rashmika Mandanna entering an elevator went viral in India and sparked outrage across the world.[7]

These are only a few examples; many such cases go unaddressed because of the lack of regulation on the subject matter.
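Before turning to the legal position, the generator/discriminator interplay described above can be made concrete with a short sketch. The code below is a minimal, illustrative GAN training loop in Python using PyTorch; the layer sizes, learning rate, batch size and the load_real_batch helper are hypothetical placeholders chosen for brevity, not the pipeline of any real deepfake system.

```python
# Minimal, illustrative GAN sketch (PyTorch): a generator learns to produce
# fake samples while a discriminator learns to tell fake from real.
# Sizes, hyperparameters and load_real_batch() are placeholders only.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random noise vector fed to the generator
IMAGE_DIM = 28 * 28      # flattened 28x28 grayscale image

# Generator: noise vector -> synthetic image
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_DIM), nn.Tanh(),
)

# Discriminator: image -> probability that the image is real
discriminator = nn.Sequential(
    nn.Linear(IMAGE_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def load_real_batch(batch_size: int) -> torch.Tensor:
    """Placeholder: in practice this would return real training images
    scaled to [-1, 1]; random data is used here so the sketch runs."""
    return torch.rand(batch_size, IMAGE_DIM) * 2 - 1

for step in range(1000):
    real = load_real_batch(32)
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # 1. Train the discriminator to label real images 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into outputting 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

As the two networks compete, the generator's outputs become progressively harder for the discriminator to separate from real samples, which is the property that lets deepfake systems produce convincing synthetic faces and voices.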
IT Minister R.S. Prasad had also expressed concern regarding the spread of deepfake technology and pressed the need to address the issues it raises.[8] At present, it is the Information Technology Act, 2000 and the rules framed under it that regulate the misuse of such technology. In Sunilakhya Chowdhury v. H.M. Jadwet, the court held that criminal defamation requires an intention to injure another person's reputation;[9] by analogy, the defamation provisions of the IPC would also apply to deepfake visual representations of the person depicted. Moreover, in MySpace Inc. v. Super Cassettes Industries Ltd., the court held that Indian intermediaries, upon receiving information that a video, image or audio circulated on their platform is a deepfake, must delete such content within 24 hours.[10]

Offences covered by Deepfakes

There are several ways in which crimes might be committed using deepfake technology. While AI technology in and of itself is not dangerous, it may become a weapon for crimes against individuals and society. Deepfakes may be used to commit the following offences:

- Theft of identity and digital counterfeiting: Identity theft and digital forgeries created with deepfakes are serious crimes that can have a significant impact on people's lives and on society at large. Using deepfakes to assume someone else's identity, fabricate personal narratives or sway public opinion can damage a person's credibility and reputation while spreading misleading information. These offences may be punished under Section 66C (punishment for identity theft)[11] and Section 66 (computer-related offences)[12] of the Information Technology Act. Sections 420[13] and 468[14] of the IPC may also be invoked in this context.

- Fake information against the government: Section 66F (cyber terrorism)[15] of the IT Act and the IT Intermediary Rules govern the spreading of misinformation against the government. Sections 121[16] and 124[17] of the IPC could also be invoked for waging war against the government. The use of deepfakes to propagate false information, undermine the government or foster animosity and disenchantment against it is a severe problem with potentially far-reaching effects on society.

- Hate Speech: Hate speech and online defamation using deepfakes are dangerous problems that can hurt communal and societal sentiments at large. When deepfakes are used to disseminate hate speech or libellous content, they can seriously damage people's reputations and well-being and contribute to a toxic online environment. These offences are punishable under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022, framed under the Information Technology Act, 2000. Additionally, Sections 153A[18] and 153B[19] (speech prejudicial to public peace) and Section 499[20] (defamation) of the IPC may also be invoked.

- Privacy Infringement: When a person's image or photograph is captured, morphed, published or transmitted without consent, it entails a privacy violation under Section 66E of the IT Act, which makes the offence punishable with imprisonment of up to three years and/or a fine of up to ₹2,00,000. Deepfakes can also be used to infringe privacy by collecting sensitive personal data of users, and to facilitate online obscenity and pornography, cyber terrorism, digital and corporate fraud, and corrupt election practices.
A guide to recognizing manipulated content

Even though deepfakes can produce convincing visuals, audio and video, it is often possible to tell whether someone actually said or did something. There is no single conclusive clue for identifying a deepfake, but a great deal of falsified content can still be spotted. Keep the following in mind:

- Observe the face. High-end deepfake manipulations almost always involve facial alterations.[21]
- Observe the forehead and cheekbones. Is the skin too wrinkled or too smooth? Is the apparent age of the skin consistent with the age of the eyes and hair? Deepfakes are often inconsistent in these respects.
- Take notice of the eyebrows and eyes. Do you see shadows where you would expect them? Deepfakes may fail to capture the physics of natural lighting accurately.
- For suspected voice deepfakes, make use of voice-authentication mechanisms to verify the speaker.
- Observe the amount of facial hair, or the lack of it. Does it look like real facial hair? Deepfakes may add or remove a moustache, sideburns or a beard, and such alterations may not appear entirely genuine.
- Keep an eye out for blinking. Is the blink rate too low or too high? Some deepfakes fail to reproduce natural blinking (an illustrative automated blink-rate check is sketched after the conclusion below).
- Be mindful of the lip movements. Some deepfakes rely on lip-syncing. Do the lip movements look realistic?

Conclusion & Recommendations

There is a need for intermediary regulation not only in India but at the international level: a uniform set of rules for all large platforms would allow the forwarding of synthetic content to be regulated globally. The following measures would help:

- Censorship and blocking technology at the global level that can be used to block unwanted and harmful content across social media platforms.
- In India specifically, amendments to the IT Act to define deepfake technology and prescribe punishments for its misuse. The laws should be strong enough that culprits cannot find loopholes to escape liability.
- Due diligence by social media platforms on questionable content circulated on their services.
- Investment by social media companies in detecting fake content and resolving issues relating to deepfake technology, with policies that are clear about their efforts to regulate deepfake content.
- Regular cybersecurity audits, which help mitigate vulnerabilities and assist in monitoring social media platforms.
- A response strategy against the circulation of deepfake content, so that intermediary platforms have a plan ready to tackle this menace.

When used appropriately, therefore, deepfakes are less dangerous. In our rapidly changing digital environment, society must support AI and other cutting-edge technologies as agents of constructive change. Deepfake technology offers great promise for creative innovation when creators, including businesses, adopt a conscientious and ethical perspective. Responsible use is essential to preserving the balance between creative expression and technological advancement.
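To complement the manual checklist above, the short sketch below shows how one of those cues, blink rate, could be estimated automatically. It is a rough illustration only, using OpenCV's stock Haar cascades; the input file name is hypothetical, and real deepfake detectors rely on far more sophisticated trained models rather than any single heuristic.

```python
# Rough, illustrative blink-rate check using OpenCV's stock Haar cascades.
# The video file name is hypothetical; cascade choices and thresholds are
# deliberately simplistic.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspect_clip.mp4")   # hypothetical clip to inspect
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
frames, blinks, eyes_were_open = 0, 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    eyes_open = False
    for (x, y, w, h) in faces:
        # Look for eyes only in the upper half of the detected face region.
        roi = gray[y:y + h // 2, x:x + w]
        if len(eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)) > 0:
            eyes_open = True
    # Count a blink each time visible eyes disappear while a face is present.
    if eyes_were_open and not eyes_open and len(faces) > 0:
        blinks += 1
    eyes_were_open = eyes_open

cap.release()
minutes = frames / fps / 60
if minutes > 0:
    # A typical adult blinks roughly 15-20 times per minute; a rate far outside
    # that range is only a weak signal, never proof of manipulation.
    print(f"Estimated blink rate: {blinks / minutes:.1f} blinks per minute")
```

In practice, detection tools combine many such cues (blinking, lighting, lip-sync, artefacts around facial hair) with trained classifiers rather than relying on any single check.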
[1] Alexandra Ulmer & Anna Tong, Deepfaking it: America's 2024 election collides with AI boom, Reuters (Apr. 23, 2024, 12:56 AM), https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30/.
[2] Microsoft Learn, https://learn.microsoft.com/en-us/azure/machine-learning/concept-deep-learning-vs-machine-learning?view=azureml-api-2 (last visited Apr. 23, 2024).
[3] Nilesh Christopher, We've just seen the first use of Deepfakes in an Indian election campaign, Vice (Apr. 23, 2024, 1:02 AM), https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp.
[4] Nina Jankowicz, The threat from deepfakes isn't hypothetical, The Washington Post (Apr. 23, 2024, 1:17 AM), https://www.washingtonpost.com/opinions/2021/03/25/threat-deepfakes-isnt-hypothetical-women-feel-it-every-day/.
[5] Megan Garber, The Trump AI Deepfakes Had an Unintended Side Effect, The Atlantic (Apr. 23, 2024, 1:19 AM), https://www.theatlantic.com/culture/archive/2023/03/fake-trump-arrest-images-ai-generated-deepfakes/673510/.
[6] Leah Dolan, Look of the week: What Pope Francis' AI puffer coat says about the future of fashion, CNN (Apr. 23, 2024, 1:25 AM), https://edition.cnn.com/style/article/pope-francis-puffer-coat-ai-fashion-lotw/index.html.
[7] Alex Blair, Deepfake video of Indian star leaves nation fuming, New York Post (Apr. 23, 2024, 1:23 AM), https://nypost.com/2023/11/07/tech/deepfake-video-of-indian-star-rashmika-mandanna-leaves-nation-fuming/.
[8] The Quint, https://www.thequint.com/tech-and-auto/tech-news/ravi-shankar-prasad-on-deepfakes-fake-news-artificial-intelligence-parliament-winter-session-lok-sabha (last visited Apr. 23, 2024).
[9] Sunilakhya Chowdhury v. H.M. Jadwet and Anr., AIR 1968 Cal 266.
[10] MySpace Inc. v. Super Cassettes Industries Ltd., CM APPL 20174/2011.
[11] The Information Technology Act, 2000, § 66C, No. 21 of 2000, Acts of Parliament (India).
[12] The Information Technology Act, 2000, § 66, No. 21 of 2000, Acts of Parliament (India).
[13] Pen. Code, § 420.
[14] Pen. Code, § 468.
[15] The Information Technology Act, 2000, § 66F, No. 21 of 2000, Acts of Parliament (India).
[16] Pen. Code, § 121.
[17] Pen. Code, § 124.
[18] Pen. Code, § 153A.
[19] Pen. Code, § 153B.
[20] Pen. Code, § 499.
[21] Binmile Newsletter, https://www.linkedin.com/pulse/rise-deepfake-understanding-its-implications-ethics-mitigation-7cjec/?utm_source=share&utm_medium=member_android&utm_campaign=share_via (last visited Apr. 23, 2024).