Amazon Web Services (AWS) is implementing automated reasoning technology to help prevent AI models from generating false information, according to a Wall Street Journal article by Belle Lin. The technology aims to provide mathematical proof that AI responses are accurate within specific domains. AWS’s new tool, called Automated Reasoning Checks, requires customers to define policies that serve as a definitive ground truth, against which AI outputs can then be verified. The technology builds on symbolic AI, a field rooted in ancient mathematical logic that relies on rule-based logical deduction rather than the statistical pattern recognition underlying today’s large language models.
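The core idea can be illustrated with a toy example. This is not AWS’s actual API or algorithm, just a minimal sketch of the principle: a customer encodes a policy as an explicit rule, and a model’s answer is only surfaced if it agrees with what the rule logically entails. The refund policy, function names, and facts below are all hypothetical.

```python
# Hypothetical sketch of rule-based answer checking (not AWS's real API).
# A policy is encoded as a deterministic rule; the model's answer is
# accepted only if it matches the rule's verdict on the stated facts.

def eligible_for_refund(days_since_purchase: int, item_opened: bool) -> bool:
    """Ground-truth policy rule: refunds within 30 days, unopened items only."""
    return days_since_purchase <= 30 and not item_opened

def check_answer(model_answer: bool, facts: dict) -> str:
    """Compare the model's yes/no answer against the policy's verdict."""
    verdict = eligible_for_refund(facts["days_since_purchase"],
                                  facts["item_opened"])
    return "verified" if model_answer == verdict else "contradicts policy"

# The model claims a 45-day-old purchase qualifies for a refund:
print(check_answer(True, {"days_since_purchase": 45, "item_opened": False}))
# -> contradicts policy
```

Because the rule is deterministic, the check yields a definite verdict rather than a probability, which is what distinguishes this approach from statistical filtering. It also shows the approach’s limits: it only works where the domain’s rules can be written down exhaustively.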
Byron Cook, AWS Vice President and Distinguished Scientist, explains that while the tool cannot completely eliminate hallucinations, it can significantly reduce them in areas with clearly defined rules. Amazon has invested heavily in this approach, hiring hundreds of experts in automated reasoning over the past decade.
PricewaterhouseCoopers is already using the tool to ensure compliance in regulated industries like pharmaceuticals. However, Forrester analyst Rowan Curran emphasizes that automated reasoning should be part of a broader strategy to combat hallucinations, including techniques like retrieval-augmented generation (RAG) and model fine-tuning.
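To see how RAG complements rule-based checking, a minimal sketch helps: instead of proving an answer correct, RAG grounds the prompt in retrieved source documents so the model answers from known facts rather than inventing them. The document store, the word-overlap scoring, and all names below are toy stand-ins, not any particular vendor’s implementation.

```python
# Minimal RAG sketch (toy retrieval, hypothetical documents): retrieve
# the most relevant documents for a query and prepend them to the
# prompt, constraining the model to answer from provided context.

DOCS = [
    "Drug X was approved by the FDA in 2019 for adult patients.",
    "Drug X is contraindicated for patients under 18.",
    "Drug Y remains in phase II clinical trials.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy scorer)."""
    q = set(query.lower().split())
    scored = sorted(DOCS,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from sources."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Is drug X approved for patients under 18?"))
```

In production systems the overlap scorer would be replaced by embedding similarity search, but the structure is the same: retrieval narrows the model’s answer space before generation, which reduces, without eliminating, hallucinations.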
The development comes as businesses remain hesitant to fully trust AI systems due to their tendency to generate incorrect information. While competitors Microsoft and Google also offer hallucination-reduction tools, AWS’s approach with automated reasoning represents a distinct mathematical solution to this persistent challenge in AI deployment.