Google has introduced three new additions to its Gemma 2 family of open AI models, emphasizing safety, transparency, and accessibility. The Gemma 2 2B model is optimized to run on ordinary computers, ShieldGemma filters toxic content, and Gemma Scope offers a detailed look at how the models work. Google's stated aim is to democratize AI and build trust.
Google is pursuing more ethical and transparent artificial intelligence. As part of this effort, it has released a trio of new generative AI tools in its Gemma 2 family, which promise greater safety, lower hardware requirements, and smoother operation.
Unlike the Gemini models, which Google uses for its own products, the Gemma series is intended for more open use by developers and researchers. Similar to Meta's Llama project, Google is also striving to build trust and collaboration in the AI field.
The first addition is Gemma 2 2B, a lightweight model for generating and analyzing text. Its main advantage is that it can run on less powerful hardware, which opens the door to a wide range of users. The model is available through platforms such as Vertex AI, Kaggle, and Google AI Studio.
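To give a sense of how a developer might try the model locally, here is a minimal sketch using the Hugging Face transformers library; the model identifier and hardware settings are assumptions for illustration, not details from Google's announcement.

```python
# Minimal sketch: running a small Gemma 2 model locally with transformers.
# The identifier "google/gemma-2-2b-it" is assumed here; check the model hub
# for the exact name and license terms before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory use modest
    device_map="auto",           # falls back to CPU if no GPU is present
)

prompt = "Explain in two sentences what an open AI model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```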
The new ShieldGemma introduces a set of safety classifiers whose task is to identify and block harmful content, including hate speech, harassment, and sexually explicit material. In short, ShieldGemma acts as a filter that checks both the prompts sent to the model and the responses it generates.
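The filtering pattern can be sketched roughly as follows. The model identifier, the prompt wording, and the Yes/No scoring below are assumptions made for illustration; the official model card defines the actual guideline format.

```python
# Rough sketch of ShieldGemma's role as an input/output filter.
# The identifier "google/shieldgemma-2b" and the policy prompt are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

shield_id = "google/shieldgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(shield_id)
classifier = AutoModelForCausalLM.from_pretrained(shield_id, torch_dtype=torch.bfloat16)

def is_harmful(text: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier judges the text to violate the policy."""
    prompt = (
        "You are a policy expert. Does the following text contain hate speech, "
        f"harassment, or sexually explicit material?\n\nText: {text}\n\nAnswer Yes or No:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = classifier(**inputs).logits[0, -1]        # next-token logits
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")
    probs = torch.softmax(logits[[yes_id, no_id]], dim=0)  # compare "Yes" vs "No"
    return probs[0].item() > threshold

# Check both the user's prompt and the main model's reply before passing them on.
user_prompt = "..."
if not is_harmful(user_prompt):
    reply = "..."  # response generated by Gemma 2
    if is_harmful(reply):
        reply = "The response was withheld by the safety filter."
```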
The final addition is Gemma Scope, a tool enabling detailed analysis of the Gemma 2 model's functioning. Google describes it as a set of specialized neural networks that unpack the complex information processed by the model into a more comprehensible form.
Researchers can use it to better understand how Gemma 2 identifies patterns, processes data, and generates results.
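As a rough illustration of the idea behind Gemma Scope, the snippet below shows how a sparse autoencoder expands one dense activation vector from the model into a much wider, mostly zero feature vector and reconstructs the original from it; all sizes and weights here are placeholders, not the released tool's API.

```python
# Conceptual sketch of a sparse autoencoder, the kind of "specialized neural
# network" Gemma Scope uses to unpack a model's internal activations.
# Shapes and parameters are illustrative stand-ins for trained weights.
import torch

d_model, d_features = 2304, 16384          # hidden size vs. number of learned features

W_enc = torch.randn(d_model, d_features)   # encoder weights
W_dec = torch.randn(d_features, d_model)   # decoder weights
b_enc = torch.zeros(d_features)

activation = torch.randn(d_model)          # one internal activation from Gemma 2

features = torch.relu(activation @ W_enc + b_enc)  # sparse, more interpretable features
reconstruction = features @ W_dec                  # approximate original activation

top = torch.topk(features, k=5)
print("Most active features:", top.indices.tolist())
```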
The release of the Gemma 2 models comes shortly after the U.S. Department of Commerce endorsed open AI models in a recent report, noting that they make generative AI accessible to smaller companies, non-profits, and independent developers. The report also highlighted the need for tools to monitor these models and mitigate their potential risks.