Google introduces a trio of new AI models named Gemma 2, which emphasize security, transparency, and accessibility. The Gemma 2 2B model is optimized for regular computers, ShieldGemma protects against toxic content, and Gemma Scope allows detailed analysis of model functioning. Google aims to democratize AI and build trust.
Google is pursuing more ethical and transparent artificial intelligence. As part of this effort, it has released a trio of new generative AI tools in the Gemma 2 family, which promise stronger security, lower hardware requirements, and smoother operation.
Unlike the Gemini models, which Google uses for its own products, the Gemma series is intended for more open use by developers and researchers. Much like Meta with its Llama project, Google is striving to build trust and collaboration in the AI field.
The first innovation is the Gemma 2 2B model, designed for generating and analyzing text. Its greatest advantage is the ability to run on less powerful devices. This opens the door to a wide range of users. The model is available through platforms like Vertex AI, Kaggle, and Google AI Studio.
The new ShieldGemma introduces a set of safety classifiers whose task is to identify and block harmful content, including hate speech, harassment, and sexually explicit material. In short, ShieldGemma acts as a filter that checks both the input data sent to the model and the outputs it produces.
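The idea of filtering on both sides of the model can be sketched in a few lines. The snippet below is purely illustrative: the function names, the blocklist, and the dummy model are invented for this example and do not reflect the actual ShieldGemma API, which uses learned classifiers rather than keyword matching.

```python
# Hypothetical sketch of input/output safety filtering, in the spirit of
# what the article describes for ShieldGemma. All names are illustrative.

BLOCKLIST = {"hate", "harassment"}  # toy stand-in for a learned safety classifier


def is_safe(text: str) -> bool:
    """Toy safety check: flag text containing blocklisted terms."""
    words = set(text.lower().split())
    return not (words & BLOCKLIST)


def guarded_generate(prompt: str, model) -> str:
    """Run the model only if both the prompt and the output pass the filter."""
    if not is_safe(prompt):
        return "[prompt blocked]"
    output = model(prompt)
    if not is_safe(output):
        return "[response blocked]"
    return output


# Example with a dummy "model" that just echoes its input:
echo_model = lambda p: f"You said: {p}"
print(guarded_generate("hello world", echo_model))  # passes both checks
print(guarded_generate("hate speech", echo_model))  # blocked at the input stage
```

The key design point is that the same check runs twice, so even a harmless-looking prompt cannot smuggle harmful content out through the model's response.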
The final addition is Gemma Scope, a tool enabling detailed analysis of the Gemma 2 model's functioning. Google describes it as a set of specialized neural networks that unpack the complex information processed by the model into a more comprehensible form.
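The "specialized neural networks" behind Gemma Scope are sparse autoencoders trained on the model's internal activations. The toy sketch below (random weights, tiny dimensions, no training) only illustrates the mechanism: a wide encoder with a ReLU produces a sparse feature vector, and a decoder maps it back to the original activation space.

```python
import numpy as np

# Toy sparse-autoencoder sketch illustrating the Gemma Scope idea.
# Dimensions and weights are arbitrary; a real tool trains these
# on activations captured from the model.

rng = np.random.default_rng(0)
d_model, d_features = 8, 32  # activation size vs. (wider) feature dictionary

W_enc = rng.normal(size=(d_model, d_features))
W_dec = rng.normal(size=(d_features, d_model))


def encode(activation: np.ndarray) -> np.ndarray:
    """Map an activation to a sparse, more interpretable feature vector."""
    # ReLU zeroes out negative pre-activations, so only some features fire.
    return np.maximum(activation @ W_enc, 0.0)


def decode(features: np.ndarray) -> np.ndarray:
    """Reconstruct the original activation from the feature vector."""
    return features @ W_dec


act = rng.normal(size=d_model)      # stand-in for one model activation
feats = encode(act)                 # sparse code: which "concepts" fired
recon = decode(feats)               # approximate reconstruction
print("active features:", int((feats > 0).sum()), "of", d_features)
```

Researchers inspect which features activate on which inputs; that is the sense in which the tool "unpacks" the model's internal processing into a more comprehensible form.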
Researchers can better understand how Gemma 2 identifies patterns, processes data, and generates results. The release of the Gemma 2 models comes shortly after the U.S. Department of Commerce supported open AI models in its report.
According to the report, open models make generative AI accessible to smaller companies, non-profits, and independent developers. At the same time, it highlighted the need for tools to monitor and regulate these models to minimize potential risks.