A surprisingly simple technique can dramatically improve how well artificial intelligence models answer questions: Researchers at Google have discovered that repeating a question twice in the same prompt boosts accuracy by as much as 76 percent for certain tasks.
Carl Franzen reports for VentureBeat that the finding challenges years of increasingly complex methods engineers have developed to optimize AI responses.
The technique works because of how most AI models process language. These systems read text from left to right, which means they cannot look ahead at words they have not yet processed. When a model reads the fifth word in a sentence, it has no knowledge of the sixth word because it has not encountered it yet.
This creates a blind spot. The model's reading of the early words in a long question is locked in before it knows what the question is ultimately asking, so important details from the beginning can effectively be lost. Repeating the prompt gives the model a second pass in which every word of the copy can "see" the entire original question, allowing it to retrieve specific information more accurately.
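In practice, the technique amounts to nothing more than concatenating the prompt with itself before sending it to the model. A minimal sketch (the helper name and separator are illustrative choices, not from the paper):

```python
def repeat_prompt(prompt: str, times: int = 2, separator: str = "\n\n") -> str:
    """Return the prompt repeated `times` times, joined by a separator.

    On the second copy, the model can attend back to the full first
    copy, so every word is processed with complete context.
    """
    return separator.join([prompt] * times)

question = "Which of the policies listed above applies to remote employees?"
print(repeat_prompt(question))
```

The doubled string is then sent as a single prompt; no change to the model or the API call is required.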
Google researchers Yaniv Leviathan, Matan Kalman and Yossi Matias tested the approach across seven different AI models, including GPT-4o, Claude and Gemini. They evaluated performance on seven standard benchmarks covering various types of questions and problems.
The results were striking: Prompt repetition won 47 out of 70 head-to-head comparisons against standard single prompts, with zero losses. The technique worked across all major AI systems tested.
The improvement was particularly dramatic for tasks requiring precise retrieval of information from within the prompt. The researchers created a test where models received a list of 50 names and had to identify the 25th one. The Gemini 2.0 Flash Lite model scored just 21 percent accuracy with a single prompt. With repetition, accuracy jumped to 97 percent.
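A test of this shape is easy to reconstruct. The sketch below is a hypothetical reconstruction of the setup described above, with placeholder names; the exact list format the researchers used is not specified in the article:

```python
# Build a 50-name retrieval prompt, then a version stated twice.
names = [f"Name{i:02d}" for i in range(1, 51)]  # placeholder names
listing = "Here is a list of names: " + ", ".join(names)
question = "What is the 25th name in the list?"

single_prompt = f"{listing}\n{question}"
repeated_prompt = f"{single_prompt}\n\n{single_prompt}"  # same prompt, stated twice
```

With the single prompt, the early names are encoded before the model knows it will be asked to count to the 25th; in the repeated version, the second copy of the list is read with the question already in view.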
One significant advantage of this method is that it does not slow down responses. AI processing happens in two stages. First, the model reads and processes the input prompt in parallel, which is fast. Then it generates the answer one word at a time, which is slower. Repeating the prompt only doubles the work in the fast stage, so users barely notice any difference in response time.
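The latency argument can be made concrete with a back-of-envelope calculation. The per-token costs below are illustrative assumptions, not measurements from the research:

```python
# Assumed costs: the prompt-reading (prefill) stage runs in parallel
# and is cheap per token; generation (decode) is sequential and slow.
PREFILL_MS_PER_TOKEN = 0.01   # assumption: fast, parallel stage
DECODE_MS_PER_TOKEN = 30.0    # assumption: slow, one-token-at-a-time stage

def response_time_ms(prompt_tokens: int, output_tokens: int, repeats: int = 1) -> float:
    """Total latency: repeated prefill cost plus fixed decoding cost."""
    prefill = repeats * prompt_tokens * PREFILL_MS_PER_TOKEN
    decode = output_tokens * DECODE_MS_PER_TOKEN
    return prefill + decode

single = response_time_ms(500, 200)              # prompt stated once
doubled = response_time_ms(500, 200, repeats=2)  # prompt stated twice
# Doubling the prompt only doubles the cheap prefill term, so the
# added latency is a tiny fraction of the total under these numbers.
```

Under these assumed numbers, repetition adds about 5 milliseconds to a roughly 6-second response, which is why users barely notice the difference.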
The technique does have limitations: It works best for direct questions that do not require complex reasoning. When researchers tested prompt repetition with chain-of-thought prompting, where models show their step-by-step thinking, the benefits largely disappeared. The authors suggest this happens because reasoning models already restate questions internally as part of their thinking process.
For enterprise applications, the finding offers practical value. Smaller, faster models using prompt repetition can sometimes match the accuracy of larger, more expensive models. This could allow companies to reduce costs while maintaining performance.
The researchers suggest that future AI systems might automatically repeat prompts behind the scenes before processing them. Until then, anyone struggling to get accurate answers from an AI model might benefit from simply asking the same question twice in one prompt.