Categories: Top News

OpenAI’s Chatgpt & Google’s Gemini Under A New AI Worm Attack

The burgeoning field of generative AI models is currently experiencing its early stages, yet its innocence is imperiled by the emergence of a new AI worm known as “zero-click” In a controlled test environment, a trio of researchers have developed a menace that has the potential to propagate across systems and potentially steal data or initiate an “adversarial self-replicating prompt” through text and image inputs.

Advertisement · Scroll to continue

The worm signifies a novel cyber threat that exploits the interconnectedness and autonomy of AI ecosystems. They pose a significant risk to cutting-edge AI platforms like OpenAI’s ChatGPT-4 and Google’s Gemini, as they have the ability to manipulate and exploit these tools. 

According to the research summary, attackers have the ability to embed these prompts into inputs. When these inputs are interpreted by GenAI models, they cause the model to duplicate the input as output (replication) and carry out harmful actions (payload). Furthermore, these inputs force the agent to distribute them (propagate) to other agents by taking advantage of the interconnectedness within the GenAI ecosystem.

The AI worm’s potential harm.

The worm’s capacity to autonomously propagate among AI agents without detection introduces a fresh avenue for cyberattacks, disrupting prevailing security frameworks. The consequences of such a worm extend widely, presenting substantial threats to startups, developers, and technology firms dependent on generative AI systems.

Security specialists and researchers, among them those from the CISPA Helmholtz Center for Information Security, stress the feasibility of these assaults and the pressing requirement for the development community to address these risks earnestly.

This means the worm could be used to conduct phishing attacks, send spam emails, or even spread propaganda, the report suggests. 

The backlog of computer worms

As reported by Wired, the discoveries indicate that no software or extensive language model is inherently impervious to computer viruses such as malware. A team of researchers from Cornell University, the software company Intuit, and Israel’s Technion developed the worm, named “Morris II.” The name pays homage to one of the earliest self-replicating computer worms, the Morris Worm, which was crafted by Cornell student Robert Morris in 1988.

In the past, the first Morris worm made about 10% of all internet-connected computers crash. Although it wasn’t a huge number of machines at that time, it proved that computer worms can quickly spread between systems without any human help, which is why it’s called a “zero-click worm.

Mitigating worm threats

Even though AI worms have the potential to cause concern, experts believe that using regular security methods and careful app design can help lessen these dangers.

Adam Swanda, a researcher at AI security company Robust Intelligence, suggests designing apps securely and having humans supervise AI activities.

Making sure AI agents don’t do things without permission can greatly lower the risk of unauthorized actions. Also, keeping an eye out for strange patterns, like repetitive commands in AI systems, can help catch possible problems early.

Even Ben Nassi, a Ph. D. Student at Ben-Gurion University of the Negev (BGU) and a former Google employee along with his team stresses on the importance of awareness among developers and companies creating AI assistants. They emphasize understanding risks and implementing robust security measures to safeguard against exploitation of generative AI systems. Their research calls for prioritizing security in designing and deploying AI ecosystems.

The development of the Morris II worm marks a pivotal moment in cyber threats, revealing vulnerabilities in generative AI systems. As AI becomes more pervasive, comprehensive security strategies are increasingly crucial.

By promoting awareness and proactive security measures, the AI development community can defend against AI worms and ensure the safe use of generative AI technologies.

 

Poulami Mukherjee

Recent Posts

Jyotika Reveals Diet & Fitness Secrets Behind Her Stunning Transformation-Inspired By This Bollywood Star!

Highlighting the significance of weight training, Jyotika emphasized its role in maintaining strength and independence,…

10 minutes ago

Who Is Ekrem Imamoglu, Why Was Turkey Mayor Arrested?

Imamoglu, a 53-year-old businessman and former district mayor, rose to prominence in 2019 when he…

10 minutes ago

Tamim Iqbal Heart Attack: Former Bangladesh Captain Hospitalized Again After Second Health Scare During DPL Match, Video Goes Viral

Bangladesh cricket was shaken as former captain Tamim Iqbal collapsed on the field during a…

12 minutes ago

South Korea Battles Devastating Wildfires Amid Strong Winds; Four Dead, Thousands Evacuated

According to Yonhap News Agency, the fire began Friday in Sancheong County, 250 kilometres southeast…

19 minutes ago

Bank Strike Alert: Here Is What You Need To Know About Bank Closures On These Days Of March 2025

The Strike was proposed by UFBU. The United Forum of Bank Unions (UFBU) is an…

29 minutes ago

Donald Trump Supporter’s Wife Detained by ICE After Honeymoon, He Says, ‘Don’t Regret The Vote’

His wife, a Peruvian citizen, was detained by U.S. Immigration and Customs Enforcement (ICE) upon…

36 minutes ago