Explore
Settings

Settings

×

Reading Mode

Adjust the reading mode to suit your reading needs.

Font Size

Fix the font size to suit your reading preferences

Language

Select the language of your choice. NewsX reports are available in 11 global languages.
we-woman

OpenAI’s Chatgpt & Google’s Gemini Under A New AI Worm Attack

In the world of AI, there's a new problem: the "zero-click" AI worm. It spreads on its own, putting your data at risk and causing trouble. These sneaky worms mess with platforms like ChatGPT-4 and Gemini by using AI connections. Experts say we need to act fast with tight security and human supervision to keep things safe as AI tech gets better.

OpenAI’s Chatgpt & Google’s Gemini Under A New AI Worm Attack

The burgeoning field of generative AI models is currently experiencing its early stages, yet its innocence is imperiled by the emergence of a new AI worm known as “zero-click” In a controlled test environment, a trio of researchers have developed a menace that has the potential to propagate across systems and potentially steal data or initiate an “adversarial self-replicating prompt” through text and image inputs.

The worm signifies a novel cyber threat that exploits the interconnectedness and autonomy of AI ecosystems. They pose a significant risk to cutting-edge AI platforms like OpenAI’s ChatGPT-4 and Google’s Gemini, as they have the ability to manipulate and exploit these tools. 

According to the research summary, attackers have the ability to embed these prompts into inputs. When these inputs are interpreted by GenAI models, they cause the model to duplicate the input as output (replication) and carry out harmful actions (payload). Furthermore, these inputs force the agent to distribute them (propagate) to other agents by taking advantage of the interconnectedness within the GenAI ecosystem.

The AI worm’s potential harm.

The worm’s capacity to autonomously propagate among AI agents without detection introduces a fresh avenue for cyberattacks, disrupting prevailing security frameworks. The consequences of such a worm extend widely, presenting substantial threats to startups, developers, and technology firms dependent on generative AI systems.

Security specialists and researchers, among them those from the CISPA Helmholtz Center for Information Security, stress the feasibility of these assaults and the pressing requirement for the development community to address these risks earnestly.

This means the worm could be used to conduct phishing attacks, send spam emails, or even spread propaganda, the report suggests. 

The backlog of computer worms 

As reported by Wired, the discoveries indicate that no software or extensive language model is inherently impervious to computer viruses such as malware. A team of researchers from Cornell University, the software company Intuit, and Israel’s Technion developed the worm, named “Morris II.” The name pays homage to one of the earliest self-replicating computer worms, the Morris Worm, which was crafted by Cornell student Robert Morris in 1988.

In the past, the first Morris worm made about 10% of all internet-connected computers crash. Although it wasn’t a huge number of machines at that time, it proved that computer worms can quickly spread between systems without any human help, which is why it’s called a “zero-click worm.

Mitigating worm threats 

Even though AI worms have the potential to cause concern, experts believe that using regular security methods and careful app design can help lessen these dangers.

Adam Swanda, a researcher at AI security company Robust Intelligence, suggests designing apps securely and having humans supervise AI activities.

Making sure AI agents don’t do things without permission can greatly lower the risk of unauthorized actions. Also, keeping an eye out for strange patterns, like repetitive commands in AI systems, can help catch possible problems early.

Even Ben Nassi, a Ph. D. Student at Ben-Gurion University of the Negev (BGU) and a former Google employee along with his team stresses on the importance of awareness among developers and companies creating AI assistants. They emphasize understanding risks and implementing robust security measures to safeguard against exploitation of generative AI systems. Their research calls for prioritizing security in designing and deploying AI ecosystems.

The development of the Morris II worm marks a pivotal moment in cyber threats, revealing vulnerabilities in generative AI systems. As AI becomes more pervasive, comprehensive security strategies are increasingly crucial.

By promoting awareness and proactive security measures, the AI development community can defend against AI worms and ensure the safe use of generative AI technologies.

 


mail logo

Subscribe to receive the day's headlines from NewsX straight in your inbox