
Claude AI and other systems could be vulnerable to worrying prompt injection attacks

  • Security researchers tricked Anthropic’s Claude Computer Use into downloading and running malware
  • They say other AI tools could be tricked with prompt injection, too
  • GenAI can also be tricked into writing, compiling, and running malware

In mid-October 2024, Anthropic released Claude Computer Use, a capability that allows the Claude Artificial Intelligence (AI) model to control a device – and researchers have already found a way to abuse it.

Cybersecurity researcher Johann Rehberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as communicate with its command-and-control (C2) infrastructure, all through prompts.
