
Watch out AI fans – cybercriminals are using jailbroken Mistral and Grok tools to build powerful new malware


  • AI tools are more popular than ever – but so are the security risks
  • Top tools are being leveraged by cybercriminals with malicious intent
  • Grok and Mixtral were both found being used by criminals

New research has warned that top AI tools are powering ‘WormGPT’ variants – malicious GenAI tools that generate malicious code, craft social engineering attacks, and even provide hacking tutorials.

With Large Language Models (LLMs) such as Mistral AI’s Mixtral and xAI’s Grok now in widespread use, experts from Cato CTRL found they aren’t always being used in the way they’re intended.

