Tech

Claude AI and other systems could be vulnerable to worrying command prompt injection attacks

Share
Share


  • Security researchers tricked Anthropic’s Claude Computer Use to download and run malware
  • They say that other AI tools could be tricked with prompt injection, too
  • GenAI can be tricked to write, compile, and run malware, as well

In mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) model allowing Claude to control a device – and researchers have already found a way to abuse it.

Cybersecurity researcher Johann Rehnberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as get it to communicate with its C2 infrastructure, all through prompts.

Share

Leave a comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Related Articles
Surprisingly enough, it seems some AI agents aren’t quite up to scratch on some basic business tests
Tech

Surprisingly enough, it seems some AI agents aren’t quite up to scratch on some basic business tests

Salesforce research finds single-turn tasks see only 58% success, while multi-turn effectiveness...

Rise in ‘harmful content’ since Meta policy rollbacks: survey
Tech

Rise in ‘harmful content’ since Meta policy rollbacks: survey

Meta ditched third-party fact-checking in the United States in January. Harmful content...

OpenAI wins 0 mn contract with US military
Tech

OpenAI wins $200 mn contract with US military

Credit: Unsplash/CC0 Public Domain The US Department of Defense on Monday awarded...