Google has introduced two upgraded versions of its autonomous research agents, aiming to turn AI-powered analysis into a core tool for enterprise workflows. The new systems, called Deep Research and Deep Research Max, are built on the Gemini 3.1 Pro model and target tasks ranging from fast summaries to complex, multi-step investigations.
The official blog post states that the update marks a shift from simple summarization toward “long-horizon research workflows” that combine web data with proprietary sources. With a single API call, developers can now trigger research processes that gather, analyze and synthesize information into fully cited reports.
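To make the "single API call" idea concrete, here is a minimal sketch of what such a request body might look like. The field names, agent identifiers, and endpoint are illustrative assumptions, not Google's actual schema; the real parameter names would come from the Gemini API documentation.

```python
import json

# Hypothetical endpoint; the actual path is defined by the Gemini API docs.
ENDPOINT = "https://generativelanguage.googleapis.com/v1beta"

def build_research_request(query: str, depth: str = "standard") -> str:
    """Assemble a JSON body for a one-shot research task (illustrative only)."""
    body = {
        # "deep-research" vs. "deep-research-max" is an assumed naming scheme.
        "agent": "deep-research" if depth == "standard" else "deep-research-max",
        "task": {
            "query": query,
            # Per the blog post, results come back as fully cited reports.
            "output": {"format": "report", "citations": True},
        },
    }
    return json.dumps(body, indent=2)

payload = build_research_request("Summarize recent EU AI regulation changes")
print(payload)
```

The point of the sketch is the shape of the workflow: one request carries the query plus output constraints, and the agent handles gathering, analysis, and synthesis server-side.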
Google positions the two agents for different use cases. Deep Research focuses on speed and efficiency. It is designed for interactive applications where users need quick answers with lower latency and cost. Deep Research Max, by contrast, is built for depth and accuracy. It uses extended compute time to iteratively refine its findings, making it suitable for background tasks such as overnight report generation.
According to Google, the Max version shows improved performance on benchmarks that measure retrieval and reasoning. The company says the system consults more sources and better evaluates conflicting information, producing more nuanced results.
A key addition is support for the Model Context Protocol, or MCP. This allows companies to connect the agent to internal databases and external professional data services. Instead of relying only on public web content, the system can access financial records, market data or internal documents in a controlled way.
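A rough sketch of what wiring MCP servers into such an agent could look like follows. The server names, commands, and config shape are invented for illustration; only the general pattern (registering stdio- or HTTP-transport MCP servers as tool sources, optionally excluding the public web) reflects how MCP integrations typically work.

```python
# Hypothetical MCP server registrations: one in-house document server over
# stdio, one hosted market-data server over HTTP. All names are placeholders.
mcp_servers = {
    "internal-docs": {
        "transport": "stdio",
        "command": "python",
        "args": ["-m", "acme_docs_mcp"],  # hypothetical in-house MCP server
    },
    "market-data": {
        "transport": "http",
        "url": "https://mcp.example.com/market",  # placeholder URL
    },
}

def allowed_tools(config: dict, private_only: bool = False) -> list[str]:
    """List the tool sources the agent may consult; optionally drop public web."""
    tools = sorted(config)
    if not private_only:
        tools.append("web-search")  # public web alongside the MCP sources
    return tools

print(allowed_tools(mcp_servers))        # mixed public/private research
print(allowed_tools(mcp_servers, True))  # restricted to internal data
```

The `private_only` switch mirrors the controlled-access mode described above: the same agent can run against public web content plus proprietary feeds, or against internal data alone.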
The agents also introduce native visual outputs. They can generate charts and infographics directly within reports, helping users interpret complex datasets without additional tools. Google says this feature turns raw data into “presentation-ready” material.
Other features focus on transparency and control. Users can review and adjust the research plan before execution. They can also combine multiple tools such as search, file access and code execution, or restrict the system to private data only. Real-time streaming provides a view into intermediate steps, allowing users to follow how conclusions are formed.
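The streamed intermediate steps can be pictured as a sequence of typed events that a client consumes as they arrive. The event kinds and fields below are assumptions made for illustration; a real client would follow the event schema in the API documentation.

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class Event:
    kind: str    # assumed kinds: "plan", "search", "synthesis"
    detail: str

def follow(stream: Iterator[Event]) -> List[str]:
    """Render streamed research steps as a human-readable trace."""
    return [f"{e.kind}: {e.detail}" for e in stream]

# A mocked stream standing in for real-time output from the agent.
demo = [
    Event("plan", "outline three sub-questions"),
    Event("search", "query market filings for Q3"),
    Event("synthesis", "draft cited summary"),
]
trace = follow(demo)
print("\n".join(trace))
```

Surfacing the plan as the first event is what makes the review-and-adjust step possible: a client can pause there, let the user edit the plan, and only then allow execution to continue.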
The system supports multimodal inputs including PDFs, spreadsheets, images and audio. This enables broader context gathering across different formats, which is often required in fields such as finance or life sciences.
Google is working with partners including FactSet, S&P and PitchBook to integrate specialized data into these workflows. The company argues that such integrations can improve productivity by speeding up research tasks that traditionally require manual effort.
Deep Research and Deep Research Max are available in public preview through the Gemini API. Google plans to expand access to enterprise customers through its cloud platform.