Microsoft markets its Copilot AI assistant as a productivity tool for consumers and businesses alike. However, the product’s terms of use tell a different story. According to these terms, which were last updated in October 2025, “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”
After the disclaimer attracted attention on social media, a Microsoft spokesperson told PCMag that the company plans to update what it described as “legacy language.” The spokesperson said the wording “is no longer reflective of how Copilot is used today and will be altered with our next update.”
Microsoft is not alone in adding such warnings. OpenAI cautions users not to treat its output as “a sole source of truth or factual information.” xAI, the company behind Grok, warns that its AI may produce hallucinations, offensive content, or information that does not accurately reflect real people or facts.
Companies routinely add disclaimers to limit their legal liability. Critics argue, however, that AI companies may be downplaying real risks in order to attract paying customers and recover large investments in infrastructure and talent.
The disclaimers point to a genuine technical limitation. Large language models generate text based on statistical patterns, not verified facts. This makes errors possible even when output appears confident and accurate. Experts warn that users are prone to automation bias, a tendency to trust machine-generated results without sufficient scrutiny.
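The point about statistical patterns can be illustrated with a minimal toy sketch. This is not a real language model; the probability table below is invented for illustration. It shows why a system that samples continuations by likelihood can produce a wrong answer in exactly the same fluent tone as a right one.

```python
import random

# Toy next-word distribution, standing in for the statistical patterns
# a language model learns from text. Nothing here verifies facts:
# "Berlin" is a possible continuation of "The capital of France is"
# purely because it carries probability mass, not because it is true.
NEXT_WORD = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}

def sample_next(dist, rng):
    """Sample one continuation according to the learned probabilities."""
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {w: 0 for w in NEXT_WORD}
for _ in range(1000):
    counts[sample_next(NEXT_WORD, rng)] += 1

# The most frequent answer is correct, but the wrong continuations are
# emitted with identical confidence: probability is not verification.
print(counts)
```

The takeaway matches the experts' warning: because the output format is the same whether the sampled continuation is right or wrong, readers get no surface signal distinguishing the two, which is what makes automation bias risky.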
Sources: Tom’s Hardware, TechCrunch