- OpenAI says it has disrupted numerous malicious campaigns using ChatGPT
- These include employment scams and influence campaigns
- Russia, China, and Iran are using ChatGPT to translate and generate content
OpenAI has revealed it has taken down a number of malicious campaigns using its AI offerings, including ChatGPT.
In a report titled “Disrupting malicious uses of AI: June 2025,” OpenAI lays out how it dismantled or disrupted 10 campaigns, including employment scams, influence operations, and spam operations, that used ChatGPT in the first few months of 2025 alone.
Many of the campaigns were conducted by state-sponsored actors with links to China, Russia, and Iran.
AI campaign disruption
Four of the campaigns disrupted by OpenAI appear to have originated in China, focusing on social engineering, covert influence operations, and cyber threats.
One campaign, dubbed “Sneer Review” by OpenAI, saw “Reversed Front,” a Taiwanese board game featuring resistance against the Chinese Communist Party, spammed with highly critical Chinese-language comments.
The network behind the campaign then generated an article, posted to a forum, claiming the game had received widespread backlash, citing its own critical comments in an effort to discredit both the game and Taiwanese independence.
Another campaign, named “Helgoland Bite”, saw Russian actors using ChatGPT to generate text in German that criticized the US and NATO, and generate content about the German 2025 election.
Most notably, the group also used ChatGPT to seek out opposition activists and bloggers, and to generate messages that referenced coordinated social media posts and payments.
OpenAI has also banned numerous ChatGPT accounts linked to a US-targeted influence operation known as “Uncle Spam”.
In many cases, Chinese actors generated highly divisive content aimed at widening the political divide in the US, including social media accounts that posted arguments both for and against tariffs, as well as accounts that mimicked US veteran support pages.
OpenAI’s report is a key reminder that not everything you see online is posted by an actual human being, and that the person you’ve picked an online fight with could be getting exactly what they want: engagement, outrage, and division.