- OpenAI reportedly boosting both its cyber and physical security
- DeepSeek model could be a distillation of an OpenAI model
- The ChatGPT maker is already funding further AI security research
ChatGPT-maker OpenAI has reportedly intensified its security operations to combat corporate espionage, amid rumors foreign companies could be looking to the AI giant for inspiration.
The move follows Chinese startup DeepSeek’s release of a competing AI model, which reportedly uses distillation to copy OpenAI’s technology.
Distillation is a technique in which a third party transfers knowledge from a large, complex ‘teacher’ model to a smaller, more efficient ‘student’ model, allowing the third party to build a compact model with faster inference.
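The core of the technique can be sketched in a few lines: the student is trained to match the teacher's softened output distribution rather than raw labels. The snippet below is a minimal illustration of the standard distillation loss (temperature-scaled softmax plus KL divergence), not a description of OpenAI's or DeepSeek's actual training pipelines.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften logits with a temperature; higher T spreads probability mass
    # across classes, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # the student minimizes this to mimic the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher exactly incurs zero loss:
print(distillation_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2]))  # → 0.0
```

In practice this loss is computed over a large corpus of teacher outputs, which is why API access to a frontier model can be enough to train a capable imitator.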
OpenAI boosts protection against rival AI companies
OpenAI has reportedly introduced new policies to restrict employee access to sensitive projects and discussions, similar to how it handled development of the o1 model. According to a TechCrunch report, only pre-approved staff could discuss the o1 model in shared office areas.
Moreover, proprietary technologies are now being kept on offline systems to prevent the chances of a breach, while offices now use fingerprint scans for access to strengthen physical security. Strict network policies also center around a deny-by-default approach, with external connections requiring additional approval.
The reports also indicate that OpenAI has added more personnel to strengthen its cybersecurity teams and to enhance physical security at important sites such as its data centers.
Being at the forefront of AI innovation comes with added costs for OpenAI. Its Cybersecurity Grant Program has funded 28 research initiatives exploring prompt injection, secure code generation, and autonomous cybersecurity defenses, with the company acknowledging that AI has the power to democratize cyberattackers’ access to more sophisticated technologies.
TechRadar Pro has asked OpenAI for more context surrounding the reports, but the company did not respond to our request.