Not even fairy tales are safe – researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

  • Security researchers have developed a new technique to jailbreak AI chatbots
  • The technique requires no prior malware coding knowledge
  • It works by building a fictional scenario that convinces the model to craft an attack

Despite having no previous experience in malware coding, Cato CTRL threat intelligence researchers have warned that they were able to jailbreak multiple LLMs, including OpenAI's GPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.

The team developed ‘Immersive World’, a technique that uses “narrative engineering to bypass LLM security controls” by building a “detailed fictional world” that normalizes restricted operations, ultimately producing a “fully effective” Chrome infostealer. Chrome is the most popular browser in the world, with over 3 billion users, underscoring the scale of the risk this attack poses.
