
Experts warn DeepSeek is 11 times more dangerous than other AI chatbots

DeepSeek’s R1 model is 11 times more likely to be exploited by cybercriminals than other AI models, whether by producing harmful content or by being vulnerable to manipulation.

The worrying finding comes from new research by Enkrypt AI, an AI security and compliance platform, and adds to ongoing concerns following last week’s data breach, which exposed over one million records.

