Google Reinforces AI With Gemini 2.0 Model, Deep Research Mode, And More

These upgrades make Gemini 2.0 more versatile and dynamic in various use cases, from creative work to complex data analysis.

Google has launched its second-generation AI model, Gemini 2.0, showcasing a significant leap in artificial intelligence capabilities. Following the success of Gemini 1.0, which focused on organizing and understanding information, Gemini 2.0 takes it a step further by making AI more useful in real-world applications. With advancements in multimodality and new features like Deep Research, Gemini 2.0 is set to transform how AI interacts with users, businesses, and developers.

Multimodal Functionality
Gemini 2.0 introduces new multimodal features that allow the AI to take images, audio, and video as input, as well as produce native output such as generated images and multilingual text-to-speech audio. These upgrades make Gemini 2.0 more versatile and dynamic in various use cases, from creative work to complex data analysis.
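
As a concrete illustration of the multimodal input path, here is a minimal sketch that sends an image together with a text prompt to the Gemini API's generateContent REST endpoint. The model identifier gemini-2.0-flash-exp matches the initial developer release, but treat the exact model name, and the example file chart.png, as assumptions to verify against the current documentation.

```python
import base64
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # obtained from Google AI Studio
MODEL = "gemini-2.0-flash-exp"  # initial developer-release identifier
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent?key={API_KEY}"
)

# Encode a local image so it can be sent inline alongside the text prompt.
with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "contents": [{
        "parts": [
            {"inline_data": {"mime_type": "image/png", "data": image_b64}},
            {"text": "Summarize the key trend shown in this chart."},
        ]
    }]
}

req = urllib.request.Request(
    URL,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The generated text lives in the first candidate's content parts.
print(reply["candidates"][0]["content"]["parts"][0]["text"])
```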

Advanced AI Agents
Gemini 2.0 also brings to life AI agents with stronger reasoning and expanded capabilities. Notable prototypes include Project Astra, Project Mariner, and Jules, an AI-powered coding assistant. These agents are designed to handle advanced tasks with ease, enabling smoother human-AI collaboration.

Tools and Integrations
The new model natively integrates tools such as Google Search, code execution, and user-defined functions. These tools allow Gemini 2.0 to respond more effectively to user prompts and carry out tasks autonomously. Developers can experiment with these features using the Gemini API, available through Google AI Studio and Vertex AI.
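
To make the user-defined function path concrete, the following sketch uses the google-generativeai Python SDK to expose a hypothetical weather helper as a tool. The helper, its stubbed return value, and the gemini-2.0-flash-exp model name are illustrative assumptions, not details from Google's announcement.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

# Hypothetical helper exposed to the model as a custom tool. The SDK reads
# the signature and docstring to build the function declaration.
def get_city_temperature(city: str) -> float:
    """Return the current temperature in Celsius for the given city."""
    return 21.5  # stub value for illustration only

model = genai.GenerativeModel(
    model_name="gemini-2.0-flash-exp",  # initial developer-release identifier
    tools=[get_city_temperature],
)

# With automatic function calling enabled, the SDK runs the tool when the
# model requests it and feeds the result back before the final answer.
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("Is it warm in Lisbon right now?")
print(response.text)
```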

Unlocking Research Power
A standout feature of Gemini 2.0 is the new Deep Research mode. This mode acts as a research assistant, capable of diving deep into complex topics, compiling data, and offering in-depth reports. Deep Research works by generating a multi-step research plan that users can review and adjust. After receiving approval, the AI performs iterative searches, refines the data, and assembles a comprehensive report, complete with links to original sources.
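
To illustrate the shape of that workflow, here is a toy sketch of the plan, review, iterative-search, and report loop described above. This is not Google's implementation; every function and data structure in it is a hypothetical, stubbed stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    findings: list = field(default_factory=list)
    sources: list = field(default_factory=list)

def generate_plan(topic):
    # Stand-in for the model drafting a multi-step research plan.
    return [f"background on {topic}", f"recent developments in {topic}"]

def review_plan(plan):
    # Stand-in for the user reviewing and adjusting the plan before approval.
    return plan

def run_search(query):
    # Stand-in for one iterative search; returns a summary and a source URL.
    return f"summary for '{query}'", f"https://example.com/{query.replace(' ', '-')}"

def deep_research(topic, rounds_per_step=2):
    plan = review_plan(generate_plan(topic))
    report = Report()
    for step in plan:
        for i in range(rounds_per_step):  # search, then refine each step
            summary, url = run_search(f"{step} (pass {i + 1})")
            report.findings.append(summary)
            report.sources.append(url)
    return report  # compiled report with links back to original sources

print(deep_research("quantum error correction").findings)
```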

This innovative feature will be available to Gemini Advanced subscribers on the web version of Gemini, with mobile app access slated for early 2025.

Google is gradually rolling out Gemini 2.0 to developers and trusted testers, with plans to integrate it into Google products, starting with Google Search. The Gemini 2.0 Flash model, the first publicly available model in the series, offers low latency and enhanced performance, outperforming the Gemini 1.5 Pro model in key benchmarks. This version is already available to developers through the Gemini API and can be accessed via Google AI Studio and Vertex AI.

For all users, Gemini 2.0 Flash is also available through the web version of Gemini, with app integration expected soon. Additionally, Gemini 2.0 has been integrated into Google Search's AI Overviews feature, which provides AI-generated summaries for search queries. With the new model's more sophisticated reasoning, AI Overviews can now tackle complex topics, advanced math, and coding questions.

Gemini 2.0: The Future of AI Agents

With the introduction of multimodal reasoning, long-context understanding, and native tool support, Gemini 2.0 Flash sets the stage for a new era of AI agents. These agents promise to redefine how we interact with AI, whether for simple tasks or advanced research, making the experience more intuitive, productive, and interactive.


Filed under: Google AI