Google Announces New AI Products and Features, Headlined by Gemini 2.0
Google announces Gemini 2.0 Flash
The first model in the Gemini 2.0 family, “Flash,” is now available as an experimental version for developers. It builds upon the success of its predecessor, Gemini 1.5 Flash, offering improved performance and faster response times, the company said.
Notably, Gemini 2.0 Flash outperforms even the more advanced Gemini 1.5 Pro on key benchmarks while operating at twice the speed, according to the company.
“In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution as well as third-party user-defined functions,” said Demis Hassabis, CEO of Google DeepMind.
Gemini 2.0 Flash abilities
Google has listed the new capabilities arriving with the Gemini 2.0 Flash AI model, including:
- Multimodal Input and Output: It can process and generate various types of data, including text, images, video, and audio.
- Native Tool Use: It can integrate with tools like Google Search, code execution environments, and user-defined functions.
- Enhanced Reasoning and Understanding: It demonstrates improved abilities in multimodal reasoning, long-context understanding, complex instruction following, and planning.
Developers can access Gemini 2.0 Flash through the Gemini API in Google AI Studio and Vertex AI. A new Multimodal Live API is also being released to enable the creation of dynamic and interactive applications with real-time audio and video streaming capabilities.
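As a rough illustration of what a developer call might look like, the sketch below builds a request for the Gemini API's `generateContent` REST endpoint without sending it. The base URL, endpoint shape, and the experimental model id `gemini-2.0-flash-exp` are assumptions based on Google's public API documentation at the time of the announcement; check Google AI Studio for the current values before use.

```python
import json

# Assumed values from Google's public Gemini API docs; verify before use.
API_BASE = "https://generativelanguage.googleapis.com/v1beta"
MODEL = "gemini-2.0-flash-exp"  # assumed experimental model id


def build_generate_request(prompt: str) -> tuple[str, str]:
    """Return the endpoint URL and JSON body for a text-generation call.

    The body follows the documented generateContent schema:
    {"contents": [{"parts": [{"text": ...}]}]}
    """
    url = f"{API_BASE}/models/{MODEL}:generateContent"
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return url, body


# Build (but do not send) a sample request.
url, body = build_generate_request("Summarise Gemini 2.0 Flash in one line.")
```

An actual call would POST this body to the URL with an API key from Google AI Studio; the same endpoint accepts image, video, and audio parts alongside text, per the multimodal-input capability described above.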
When will users get Google Gemini 2.0 Flash?
Beyond developer tools, Gemini 2.0 Flash is being integrated into the Gemini app, Google’s AI assistant. Users can experience a chat-optimised version of the model, with plans to expand its availability to other Google products, such as Pixel smartphones, in the near future.
Google DeepMind is also exploring the frontiers of “agentic AI” with research prototypes like Project Astra, Project Mariner, and Jules, showcasing how Gemini 2.0 can enable AI agents to perform tasks, interact with users, and assist with complex activities like coding.