- Microsoft has released its 2025 Responsible AI Transparency Report
- It outlines its plans to build and maintain responsible AI models
- New regulations are coming in regarding the use of AI, and Microsoft wants to be ready
With AI and Large Language Models (LLMs) increasingly used in many parts of modern life, the trustworthiness and security of these models have become an important consideration for businesses such as Microsoft.
The company has moved to outline its approach to the future of AI in its 2025 Responsible AI Transparency Report, laying out how it sees the future of the technology evolving in years to come.
Just as we have seen AI more broadly adopted by businesses, we have also seen a wave of regulations around the world that aim to establish the safe and responsible use of AI tools and the implementation of AI governance policies that help companies manage the risks associated with AI use.
A hands-on approach
In the report, the second following an initial launch in May 2024, Microsoft lays out how it has made significant investments into responsible AI tools, policies, and practices.
These include expanded risk management and mitigation for "modalities beyond text—like images, audio, and video—and additional support for agentic systems," as well as taking a "proactive, layered approach" to new regulations such as the EU's AI Act, supplying customers with materials and resources to help them prepare for and comply with incoming requirements.
Consistent risk management, oversight, review, and red-teaming of AI and generative AI releases come alongside continued research and development to "inform our understanding of sociotechnical issues related to the latest advancements in AI," with the company's AI Frontiers Lab helping Microsoft "push the frontier of what AI systems can do in terms of capability, efficiency, and safety."
As AI advances, Microsoft says it plans to build more adaptable tools and practices and invest in systems of risk management in order to "provide tools and practices for the most common risks across deployment scenarios."
That’s not all though, as Microsoft also plans to deepen its work regarding incoming regulations by supporting effective governance across the AI supply chain.
It says it is also working internally and externally to "clarify roles and expectations," as well as continuing research into "AI risk measurement and evaluation and the tooling to operationalize it at scale," sharing advancements with its wider ecosystem to support safer norms and standards.
“Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve,” noted Teresa Hutson, CVP of the Trusted Technology Group, and Natasha Crampton, Chief Responsible AI Officer.
“We look forward to hearing your feedback on the progress we have made and opportunities to collaborate on all that is still left to do. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead.”