Deepfakes reveal dark side of AI, spur calls for stringent laws

Source: IANS

Nishant Arora

New Delhi, Nov 12 (IANS) Deepfakes, which gained widespread attention in 2019 with fake videos of Meta CEO Mark Zuckerberg and former US House Speaker Nancy Pelosi, are the 21st century's alternative to Photoshopping: creating images and videos of celebrities via a form of artificial intelligence (AI) called deep learning.

If you have seen former US President Barack Obama calling Donald Trump a "complete dipshit", or Zuckerberg boasting of "total control of billions of people's stolen data", or, more recently, the deepfake video of actor Rashmika Mandanna that went viral on social media, you probably know what a deepfake is.

According to experts, the prevalence of deepfakes, compelling AI-generated videos or audio recordings, has risen notably in recent times.

According to Sonit Jain, CEO of GajShield Infotech, this surge can be attributed to the growing accessibility of deepfake technology and its application in various domains.

“Deepfakes have found utility in entertainment, political manipulation, and even fraudulent activities. Data protection and privacy laws should be strengthened to limit the collection and use of personal data for deepfake creation without explicit consent,” Jain told IANS.

Deepfakes can also be used in phishing attacks, convincing employees to take actions that compromise security.

Abhishek Malhotra, Managing Partner at TMT Law Practice, said that technological advancement comes with a dark side, and unfortunately, this time, the impact is rather nasty.

“Similar experiences were faced by actor Anil Kapoor, and he rightly approached the court of law for resolution. As would be logical in such situations, the court upheld the personal rights of the actor and recognised his right to prevent the abuse and misuse of his reputation and goodwill,” Malhotra said.

In September, the Delhi High Court issued an interim order protecting the personality rights of Kapoor and restraining various entities from misusing his image, name, voice, or other elements of his persona for financial gain without his consent.

Kapoor sought protection of his personality rights, aiming to prevent various entities, including unidentified individuals, from violating his personality rights by using his name, acronym 'AK,' nicknames like 'Lakhan,' 'Mr. India,' 'Majnu Bhai,' and the phrase 'Jhakaas,' as well as his voice and images, for commercial gain without his permission.

“This judgment can be taken as an indication of what regulations in this space can look like. Freedom of speech and expression can never be exercised at the cost of the reputation of others, nor by encroaching into the personal lives of people,” Malhotra told IANS.

Further, since efforts are already underway to crack down on fake news, similar treatment can be expected for AI deepfakes, memes and the like, experts said.

According to them, the Mandanna case underscores the need for a legal and regulatory framework to address deepfakes in the country, emphasizing the importance of preserving personality rights and curbing the misuse of AI tools to portray public figures in fictional scenarios.

This emerging scenario may lead to the development of specific laws and regulations governing AI-generated content and memes, potentially impacting online speech and creative expression.

It also raises questions about freedom of expression, especially in the context of memes, as it addresses the legal status of AI-generated content in comparison to human-created content.

Deepfake technology poses a significant threat to the privacy and individual rights of public figures. As seen in the case of Mandanna, it can be used to create convincing fake videos that can harm a person's reputation or prompt legal action.

Deepfake technology can be weaponized to create deceptive content that poses a threat to national security. It can be used to manipulate public sentiment, create forged videos of politicians or leaders, and potentially incite chaos or conflicts, according to experts.

Last week, Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, said that those who find themselves impacted by AI-generated deepfakes should file first information reports (FIRs) at the nearest police stations and avail the remedies provided under the Information Technology (IT) Rules, 2021 and the Indian Penal Code (IPC).

Under the IT Rules, 2021, online platforms are legally obligated to prevent the spread of misinformation by any user.

"They are further mandated to remove such content within 36 hours upon receiving a report from either a user or government authority. Failure to comply with this requirement invokes Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC)," the minister said.

To address such risks, organisations should invest in cybersecurity measures, employee training, and awareness programmes while implementing monitoring and incident response plans to mitigate the potential security breaches caused by deepfakes, experts advised.

--IANS

na/bg