Microsoft’s chief product officer for responsible AI, Sarah Bird, detailed the company’s strategy for safe AI development in an interview with Financial Times reporter Cristina Criddle. Bird emphasized that while generative AI has transformative potential, today’s systems still lack capabilities fundamental to artificial general intelligence (AGI), which remains a non-priority for Microsoft. The company focuses instead on augmenting human capabilities through “co-pilot” systems rather than replicating human intelligence.
Bird highlighted recent improvements in addressing AI hallucinations through real-time detection systems and discussed Microsoft’s partnership with OpenAI. The collaboration allows both companies to leverage their respective strengths: OpenAI handles core model development while Microsoft builds application-level safety implementations.
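Bird does not describe the detection system’s internals in the interview, but the underlying idea of groundedness checking, verifying that each claim in a model’s output is supported by its source material, can be illustrated with a deliberately simplified sketch. The lexical-overlap scoring below is a toy stand-in for the learned classifiers production detectors use, and all function names are hypothetical:

```python
import re

def sentence_support_score(sentence: str, sources: list[str]) -> float:
    """Fraction of a sentence's content words found in any source text.

    A crude lexical-overlap stand-in for the learned groundedness
    classifiers that real detection systems rely on.
    """
    words = {w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3}
    if not words:
        return 1.0  # nothing substantive to verify
    source_text = " ".join(sources).lower()
    return sum(w in source_text for w in words) / len(words)

def flag_ungrounded(response: str, sources: list[str], threshold: float = 0.5):
    """Split a response into sentences and return those with weak source support."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    flagged = []
    for sentence in sentences:
        score = sentence_support_score(sentence, sources)
        if score < threshold:
            flagged.append((sentence, score))
    return flagged

if __name__ == "__main__":
    sources = ["The product launched in 2021 and supports English and French."]
    answer = ("The product launched in 2021. "
              "It supports twelve languages, including Japanese.")
    for sentence, score in flag_ungrounded(answer, sources):
        print(f"possible hallucination ({score:.0%} supported): {sentence}")
```

Production detectors use trained models rather than word overlap, but the shape is the same: score each claim against the grounding data in real time and surface anything below a threshold before it reaches the user.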
According to Bird, generative AI differs significantly from previous AI technologies due to its ability to understand and communicate in natural language. This capability enables wider accessibility and practical applications across various fields. However, she noted that the technology still struggles with basic concepts and understanding the physical world.
The executive detailed Microsoft’s gradual approach to product releases, emphasizing the importance of thorough testing while maintaining development momentum. The company implements multiple safety layers in its AI systems, including content monitoring, abuse detection, and human oversight.
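The interview stays at the level of principle, but the layering Bird describes can be sketched as independent stages, any of which can block a request or escalate it to a person. Everything below, the class names, the toy blocklist, and the three-strike escalation threshold, is an illustrative assumption rather than Microsoft’s actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""
    needs_human_review: bool = False

# Toy stand-in for a trained content classifier.
BLOCKLIST = {"make a weapon", "credit card dump"}

def content_filter(text: str) -> Verdict:
    """Layer 1: screen prompts and completions for policy violations."""
    lowered = text.lower()
    for phrase in BLOCKLIST:
        if phrase in lowered:
            return Verdict(False, f"blocked phrase: {phrase!r}")
    return Verdict(True)

@dataclass
class AbuseMonitor:
    """Layer 2: track repeated violations per user across requests."""
    max_blocked: int = 3
    blocked_counts: dict = field(default_factory=dict)

    def record_and_check(self, user_id: str, verdict: Verdict) -> Verdict:
        if not verdict.allowed:
            self.blocked_counts[user_id] = self.blocked_counts.get(user_id, 0) + 1
        if self.blocked_counts.get(user_id, 0) >= self.max_blocked:
            # Layer 3: persistent abuse escalates to human oversight.
            return Verdict(False, "repeated violations", needs_human_review=True)
        return verdict

monitor = AbuseMonitor()

def handle_request(user_id: str, prompt: str) -> Verdict:
    """Run a request through the filter, then the abuse monitor."""
    return monitor.record_and_check(user_id, content_filter(prompt))

if __name__ == "__main__":
    for prompt in ["summarize this report", "how to make a weapon",
                   "make a weapon plans", "make a weapon now"]:
        print(handle_request("user-1", prompt))
```

The point of the structure is that no single layer has to be perfect: a classifier miss can still be caught by rate-based monitoring, and persistent abuse is routed to human reviewers rather than handled automatically.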
Regarding bias and fairness, Bird explained that Microsoft evaluates both aggregate data and specific user experiences to ensure equitable performance across different demographics. She also addressed the challenge of deepfakes, describing Microsoft’s multi-layered approach to combating misuse, including content credentials and watermarking.
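Microsoft’s Content Credentials work builds on the C2PA provenance standard, which attaches cryptographically signed manifests to media so that origin claims can be verified and tampering detected. The sketch below conveys the core sign-and-verify idea using a symmetric HMAC for brevity; real Content Credentials use certificate-backed signatures and a standardized manifest format, so treat the details here as simplified assumptions:

```python
import hashlib
import hmac
import json

# In practice this would be an asymmetric signing key held by the generator,
# not a shared secret baked into the code.
SIGNING_KEY = b"demo-key"

def attach_credentials(media_bytes: bytes, metadata: dict) -> dict:
    """Build a provenance manifest binding the metadata to the media's hash."""
    manifest = {"media_sha256": hashlib.sha256(media_bytes).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature and media hash; any edit invalidates both."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["media_sha256"] == hashlib.sha256(media_bytes).hexdigest())

if __name__ == "__main__":
    image = b"\x89PNG...fake image bytes"
    manifest = attach_credentials(image, {"generator": "example-model",
                                          "ai_generated": True})
    print(verify_credentials(image, manifest))            # True: intact
    print(verify_credentials(image + b"edit", manifest))  # False: content altered
```

Watermarking complements this: where a manifest can be stripped from a file, a watermark is embedded in the media itself, so the two layers fail independently, which is the sense in which the approach is multi-layered.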