
People are tricking AI chatbots into helping commit crimes

  • Researchers have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can trick major chatbots into helping commit crimes or other unethical activity
  • Some AI models are now being deliberately designed without ethical constraints, even as calls grow for stronger oversight

I’ve enjoyed testing the boundaries of ChatGPT and other AI chatbots. I once managed to get a recipe for napalm by asking for it in the form of a nursery rhyme, but it’s been a long time since I’ve been able to get any AI chatbot to even come close to crossing a major ethical line.

But I just may not have been trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots that obliterates the ethical (not to mention legal) guardrails shaping if and how an AI chatbot responds to queries. The report from Ben Gurion University describes a way of tricking major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.
