Gmail’s AI tools criticized as “horseless carriage” approach to AI

Pete Koomen, a Y Combinator partner and co-founder of Optimizely, has published a critique of Gmail’s AI assistant, arguing that it represents an outdated approach to implementing artificial intelligence in applications. In his blog post titled “AI Horseless Carriages,” Koomen argues that AI applications often fall short because developers don’t allow users to customize the system prompts that control how AI models behave.

The article compares current AI implementations to early automobiles that merely replaced horses with engines without fundamentally rethinking vehicle design. According to Koomen, Gmail’s email draft feature is a prime example of this problem. When users ask Gemini to draft an email, it produces formal, generic text that doesn’t match the user’s personal writing style.

“Millions of Gmail users have had this experience and I’m sure many of them have concluded that AI isn’t smart enough to write good emails yet,” Koomen writes. “This could not be further from the truth: Gemini is an astonishingly powerful model that is more than capable of writing good emails. Unfortunately, the Gmail team designed an app that prevents it from doing so.”

Koomen explains that AI applications send the model both a system prompt (which defines its general behavior) and a user prompt (which specifies the particular task). He argues that the system prompt should be customizable by users, especially when the AI is acting on their behalf. For example, he shows how a personalized “Pete System Prompt” lets the AI write emails that match his concise, casual style rather than Google’s formal, business-like default.
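To make the distinction concrete, here is a minimal sketch of the system-prompt / user-prompt split, written against the OpenAI Python SDK rather than Gmail’s actual Gemini integration. The model name, the personal prompt text, and the sample task are illustrative assumptions, not Koomen’s exact “Pete System Prompt.”

```python
# Sketch of the system-prompt / user-prompt split Koomen describes.
# Assumptions: OpenAI Python SDK, illustrative model name and prompt text.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A user-editable system prompt: defines tone and general behavior.
PERSONAL_SYSTEM_PROMPT = """You draft emails on my behalf.
Write the way I do: concise and casual, with no corporate boilerplate.
Skip greetings and sign-offs unless I'm writing to a stranger."""

def draft_email(task: str) -> str:
    """Draft an email for one specific request (the user prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {"role": "system", "content": PERSONAL_SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(draft_email("Tell my boss I'm out sick today and will miss standup."))
```

The point of the sketch is that the system prompt is plain text the user could edit directly; Koomen’s complaint is that Gmail hard-codes its own and exposes only the user prompt.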

The article also suggests that AI is more useful for analyzing and transforming text than for generating it from scratch. Koomen presents a demo of an email-reading assistant that categorizes incoming messages, archives unimportant ones, and drafts replies, potentially saving users significant time compared with Gmail’s current implementation.
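The demo itself isn’t reproduced here, but the pattern is simple enough to sketch. The following triage loop, again assuming the OpenAI Python SDK, labels each message, flags the unimportant ones for archiving, and drafts a short reply for the rest; the prompt wording, labels, and sample inbox are assumptions, not taken from Koomen’s demo.

```python
# Illustrative sketch of an email-reading assistant: label each message,
# archive the unimportant ones, draft replies for the rest.
# Assumptions: OpenAI Python SDK, illustrative prompt and labels.
import json
from openai import OpenAI

client = OpenAI()

TRIAGE_SYSTEM_PROMPT = """You triage my inbox.
For each email, return JSON: {"label": "archive" or "needs_reply",
"draft": "<reply text, or an empty string>"}.
Archive newsletters and notifications; draft short, casual replies
for anything that needs a response from me."""

def triage(subject: str, body: str) -> dict:
    """Ask the model to label one email and optionally draft a reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": TRIAGE_SYSTEM_PROMPT},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

inbox = [
    ("Weekly product newsletter", "Here's what's new this week..."),
    ("Quick question about the Q3 report", "Can you send me the latest draft?"),
]
for subject, body in inbox:
    result = triage(subject, body)
    print(subject, "->", result["label"])
```

As with the drafting example, the triage instructions live in an editable system prompt, so the assistant can be taught what “unimportant” means for a particular user.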

“This is what AI’s ‘killer app’ will look like for many of us: teaching a computer how to do things that we don’t like doing so that we can spend our time on things we do,” Koomen explains.

The piece concludes that truly “AI-native” software should maximize a user’s leverage in specific domains rather than simply adding AI features to existing interfaces. Koomen envisions a future where AI agents handle mundane tasks, allowing people to focus on work they find meaningful and important.

Industry observers note that Koomen’s critique highlights a growing tension between developer-controlled AI implementations and user-customizable systems, with implications for how future AI applications might be designed across various sectors.
