‘AI Itself Is Not Inherently Bad’: Experts Discuss Deepfakes And AI Regulation At Legally Speaking 3rd Law & Constitution Dialogue

As AI technology continues to evolve rapidly, concerns are growing about its impact on governance, politics, and the judiciary. A recent panel discussion brought together experts to explore the rise of deepfakes, the challenges of regulating AI, and the potential risks and benefits of this powerful technology.

In a panel discussion at the 3rd Law & Constitution Dialogue, experts from various fields came together to address growing concerns about the potential dangers of Artificial Intelligence (AI) in governance, politics, and the judiciary. The conversation focused on AI-driven deepfakes, scams, and the rapid pace of technological change that is outstripping current regulatory frameworks. The panel featured Kartikeya Sharma, Member of the Rajya Sabha; Subimal Bhattacharjee, cyber technology expert; Meghna Bal, Director of the Esya Centre and Advisor at Koan; and K. J. Alphons, former Union Minister and advocate.

Deepfakes: A Growing Threat to Trust and Security

The conversation began with the alarming rise of deepfake technology. Deepfakes, which use AI to create highly realistic but fake images, videos, or audio recordings, have become a serious threat in both personal and professional settings. Kartikeya Sharma highlighted a case in which a deepfake of Sunil Bharti Mittal was used to trick employees into transferring funds to Dubai, as well as a recent scam that used deepfake videos of Elon Musk to promote a Bitcoin fraud.

Sharma pointed out the severity of the issue, stating, “I’d say the genie is already out of the bottle. It’s difficult to quantify or predict how it will evolve.” He emphasized that while some regulations have been put in place, such as the European Union’s recent steps, the technology is advancing so quickly that regulatory measures are often too slow to keep up. The challenge is to develop a comprehensive and effective framework that can address the rapidly evolving nature of AI technology, he added.

The Need for Rapid Regulatory Responses

The discussion then shifted to the question of how to address the growing concern of deepfakes and other AI-related issues from a legal perspective. K. J. Alphons took a somewhat radical stance on the issue, suggesting that the rapid advancement of technology is already beyond human control. “I don’t think regulations can keep up because technology evolves so fast,” he said. He further compared the problem to the emergence of new COVID-19 variants, arguing that regulation will always struggle to match the pace of technological advancements.

However, Alphons also acknowledged the potential positive uses of AI. Reflecting on his tenure as a district collector, he noted that with the right tools, AI could have significantly improved his ability to address issues like poverty, healthcare, and education. Despite his concerns about the destructive potential of AI, Alphons believes that with careful handling, AI can also be a force for good in governance and public service.

The Role of AI in Misinformation and Governance

Meghna Bal brought a more measured perspective to the conversation. While she acknowledged the risks associated with deepfakes, she argued that deepfakes are not fundamentally different from other forms of misinformation. Drawing from a Harvard study that surveyed 5,000 respondents, she pointed out that people are just as skeptical of deepfakes as they are of other forms of misinformation, such as fake news or manipulated audio. “The perception that deepfakes are inherently bad or illegal is misleading,” Bal explained. “The issue isn’t new—it’s about misinformation, which has always existed in different forms.”

Bal also noted that deepfakes are not just a threat but also have benign uses. For instance, AI-powered deepfake technology is used in fields like cancer research and for spreading awareness through NGOs. Despite this, the challenges posed by deepfakes, particularly in the political sphere, are undeniable, and regulation is crucial.

Challenges in Regulating Deepfakes and AI

Subimal Bhattacharjee further emphasized the need for regulation, acknowledging the vast scope of AI’s potential for misuse. “Generative AI has emerged much sooner than expected, and its scope for misuse is enormous,” he said. He also noted that while there are attempts to address AI-related issues in India, the country still lacks specific AI legislation. Bhattacharjee pointed out that India could look to the European Union, which has already introduced regulatory measures such as watermarking to combat deepfakes. However, he cautioned that such measures are not foolproof, as deepfakes can easily be manipulated, and detection technology is still in its infancy.

India faces its own distinct challenges in regulating AI. Bhattacharjee suggested that India should not simply follow Europe or the U.S. but should develop its own path, tailored to the country’s specific needs and constraints. He added that India’s lawmakers are beginning to move away from the mindset of merely replicating Western approaches to AI regulation, a shift he sees as promising.

Ownership and Privacy in the Age of AI

One of the key legal questions raised during the discussion was the issue of ownership over one’s voice and likeness in the digital world. In response to a query on whether public figures like Barack Obama still own their likeness after their videos are uploaded online, Meghna Bal explained that Indian courts have upheld the right to privacy over an individual’s likeness, including their voice and image. She pointed to cases involving celebrities such as Anil Kapoor, which have reaffirmed this principle.

However, Bal also pointed out the complexities surrounding deepfake regulation, noting that while watermarking has been proposed as a solution, it is not entirely effective. “Detection technology is not foolproof because deepfakes are specifically designed to bypass detectors,” she said. The solution, according to Bal, lies not only in regulatory measures but also in encouraging companies to understand how deepfake production occurs and to take proactive steps to prevent its misuse.

India’s Path Forward in AI Regulation

The panel also discussed how India could develop a regulatory framework for AI and deepfakes. Kartikeya Sharma emphasized the importance of creating regulations that are specific to India’s context rather than blindly following Western models. “AI itself is not inherently bad; it’s the way it’s used that matters,” Sharma said. He pointed out that AI can be used for both good and harmful purposes, and it is crucial to regulate its misuse without stifling innovation.

Sharma also proposed the idea of setting up a committee for future affairs in India, one that could help the country understand emerging technological challenges before they become full-blown crises. “By understanding these challenges early, we can get ahead of the curve,” he said.
