Microsoft has introduced a set of new security features for its AI offerings, aiming to address growing customer concerns about AI security, privacy, and reliability. As Michael Nuñez reports for VentureBeat, the “Trustworthy AI” initiative includes confidential inference for the Azure OpenAI Service, enhanced GPU security, and better tools for evaluating AI outputs. A key addition is the new “correction” capability in Azure AI Content Safety, designed to combat “hallucinations” — cases where a model generates content unsupported by its source material. Microsoft is also expanding “embedded content safety,” which allows AI safety checks to run directly on endpoint devices. Sarah Bird, Microsoft’s chief product officer of responsible AI, emphasizes the complexity of the task and the need for continued work on responsible AI development.
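To make the “correction” idea concrete, here is a minimal sketch of how a developer might call a groundedness-detection endpoint of the kind Azure AI Content Safety exposes, asking it to flag and rewrite ungrounded claims. The endpoint path, API version string, request fields (including the `correction` flag), and response fields below are assumptions for illustration based on the preview API’s documented shape, not a definitive reference; consult the current Azure documentation before using them.

```python
# Hedged sketch: checking a model's output against grounding sources via
# Azure AI Content Safety's groundedness detection, with correction enabled.
# Endpoint path, api-version, and field names are assumptions for illustration.
import os

import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]


def check_groundedness(text: str, sources: list[str]) -> dict:
    """Ask the service whether `text` is grounded in `sources` and,
    if not, request a corrected rewrite (assumed `correction` flag)."""
    url = (
        f"{ENDPOINT}/contentsafety/text:detectGroundedness"
        "?api-version=2024-09-15-preview"  # assumed preview version
    )
    body = {
        "domain": "Generic",
        "task": "Summarization",
        "text": text,                 # the model output to validate
        "groundingSources": sources,  # documents the output should be grounded in
        "correction": True,           # assumed flag: also return a corrected text
    }
    resp = requests.post(
        url, json=body, headers={"Ocp-Apim-Subscription-Key": API_KEY}
    )
    resp.raise_for_status()
    # Assumed response fields: ungroundedDetected, ungroundedDetails, correctionText
    return resp.json()


result = check_groundedness(
    "The initiative launched in 2010.",
    ["Microsoft announced the Trustworthy AI initiative in September 2024."],
)
print(result)
```

The design point the example illustrates is that correction is a post-generation check: the model’s answer is compared against the grounding documents it was supposed to rely on, and ungrounded spans can be flagged or rewritten before the response reaches the user.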