
Claude AI and other systems could be vulnerable to worrying prompt injection attacks

  • Security researchers tricked Anthropic’s Claude Computer Use into downloading and running malware
  • They warn that other AI tools could be tricked with prompt injection, too
  • GenAI can also be tricked into writing, compiling, and running malware

In mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) feature that allows the Claude model to control a device – and researchers have already found a way to abuse it.

Cybersecurity researcher Johann Rehberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as get it to communicate with the attackers’ command-and-control (C2) infrastructure, all through prompts.
