Google releases multimodal model Gemma 4


Golden Finance reports that on April 3, Google released its multimodal model Gemma 4. Gemma 4 processes text and image inputs (the smaller models also accept audio input) and generates text output. The release includes open-weight models in both pretrained and instruction-tuned variants. Gemma 4's context window holds up to 256,000 tokens, and the models support more than 140 languages. Gemma 4 uses both a dense architecture and a mixture-of-experts (MoE) architecture and is suited to tasks such as text generation, coding, and reasoning. The models come in four sizes: E2B, E4B, 26B A4B, and 31B, and can be deployed across a range of environments, from phones to laptops and servers.
