Gemma 3

Native Multimodal. Open Weights. Unmatched Efficiency.

  • 1M+ context window
  • 4 model sizes
  • MoE architecture

The Evolution of Open Intelligence

The Gemma 3 series represents a paradigm shift in the open model landscape. Built on the same research and technology used to create the Gemini models, Gemma 3 introduces native multimodal capabilities directly into open weights. Unlike previous generations, which relied on separate vision encoders, Gemma 3 fuses text, image, and audio understanding into a single efficient architecture, available in sizes ranging from an edge-deployable 1B-parameter model to a cloud-ready 27B Mixture-of-Experts (MoE) model.

The Gemma 3 Family

A diverse lineup designed for every deployment scenario, from IoT devices to enterprise workstations. Each tier trades parameter count against benchmark accuracy and inference efficiency; a back-of-envelope memory estimate for each size follows the list below.

  • 1B (Nano): On-device mobile tasks & IoT.
  • 4B (Micro): Consumer laptop inference.
  • 12B (Standard): General-purpose reasoning.
  • 27B (MoE): Complex coding & research.
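
For a rough sense of which hardware each tier fits, weight memory scales linearly with parameter count. The Python sketch below is a back-of-envelope estimate only, assuming the parameter counts listed above; it ignores activations, KV-cache, and runtime overhead, and for the MoE model it counts total rather than active parameters.

```python
# Back-of-envelope weight-memory estimate: params * bytes_per_param.
# This is a lower bound: activations, KV-cache, and framework overhead
# all add to the real footprint.

SIZES = {"1B (Nano)": 1e9, "4B (Micro)": 4e9,
         "12B (Standard)": 12e9, "27B (MoE)": 27e9}

BYTES_PER_PARAM = {"fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GiB."""
    return params * bytes_per_param / 2**30

for name, params in SIZES.items():
    row = ", ".join(f"{fmt}: {weight_gib(params, b):.1f} GiB"
                    for fmt, b in BYTES_PER_PARAM.items())
    print(f"{name:>14} -> {row}")
```

At fp16, for example, the 4B model needs roughly 7.5 GiB for weights alone, which is why it targets consumer laptops, while int4 quantization brings the 27B model under 13 GiB.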

State-of-the-Art Benchmarks

Gemma 3 outperforms its predecessors and comparable open models across critical domains. The most significant leap is in MMMU (Massive Multi-discipline Multimodal Understanding), reflecting its native ability to process complex visual and textual data simultaneously.

  • MMLU: Massive multitask language understanding.
  • MathVista: Visual mathematical reasoning.
  • HumanEval: Coding capability in Python, conventionally scored with pass@k (see the sketch below).
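
For background on how HumanEval-style scores are conventionally computed, the snippet below implements the standard unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021). This is general benchmark machinery, not Gemma 3's own evaluation harness.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n = samples drawn per problem, c = samples that passed the unit
    tests, k = sampling budget being scored.

    pass@k = 1 - C(n - c, k) / C(n, k)
    """
    if n - c < k:  # every size-k subset must contain a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 passing.
print(round(pass_at_k(200, 37, 1), 3))  # 0.185 (per-sample pass rate)
```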

Under the Hood: Native Multimodal MoE

Gemma 3 moves away from "bolted-on" vision adapters. Instead, it uses a unified transformer backbone in which text, image, and audio tokens are processed in a shared embedding space and routed through specialized expert layers, as illustrated in the pipeline below and sketched in code after it.

  • 📥 Multimodal Input: Text, images, and audio are tokenized into a shared embedding space.
  • 🧠 Unified MoE Backbone: A sparse Mixture of Experts with specialized routes such as Expert A (Logic), Expert B (Vision), Expert C (Code), and Expert D (Audio).
  • 📤 Generative Output: Rich text and structured data.
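
To make the routing step concrete, here is a minimal token-choice top-k router in Python (PyTorch). It is a generic sparse-MoE sketch, not Gemma 3's actual implementation; the model width, expert count, and top-k value are arbitrary assumptions. Once text, image, and audio tokens share an embedding space, a router like this can send each token to whichever experts score it highest.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal token-choice MoE layer: a linear gate scores the experts
    for each token, and each token is processed by only its top-k experts."""

    def __init__(self, d_model=64, d_ff=256, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                     # x: (tokens, d_model)
        scores = self.gate(x)                 # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over the top-k
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e      # tokens whose slot-th pick is e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Toy usage: 10 embedded tokens (of any modality) through the layer.
tokens = torch.randn(10, 64)
print(SparseMoE()(tokens).shape)  # torch.Size([10, 64])
```

Because each token activates only its top-k experts, compute per token stays close to that of a much smaller dense model while total capacity scales with the number of experts.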

Holistic Evaluation

While many models specialize in one area, Gemma 3 aims for balance. The scores below demonstrate its versatility; note the exceptional performance in Reasoning and Coding, areas typically dominated by closed proprietary models.

  • Reasoning (92/100): Advanced logical deduction and chain-of-thought processing.
  • Coding (88/100): Proficiency in Python, JavaScript, C++, and Rust.

Built Responsibly

Gemma 3 incorporates the "ShieldGemma" safety layer, ensuring robust protection against adversarial inputs without compromising utility.

  • 🛡️ ShieldGemma Integration: Real-time content filtering for both input prompts and model outputs, trained on adversarial datasets (a minimal wrapper sketch follows this list).
  • 🔍 Transparency & Eval: Full model cards and evaluation benchmarks provided openly to the community for independent verification.
  • 🤝 Community License: Permissive commercial usage terms allowing developers to build and monetize applications freely.
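
The ShieldGemma integration described above amounts to a classify-generate-classify loop around the core model. The sketch below illustrates that pattern only: the classifier and generator are stubs, and the threshold is an assumed placeholder rather than a documented ShieldGemma parameter.

```python
UNSAFE_THRESHOLD = 0.5  # assumed cutoff; a real deployment would tune this

def safety_score(text: str) -> float:
    """Stub for a ShieldGemma-style classifier returning P(policy violation).
    A real system would call the safety model here."""
    return 0.0  # placeholder: everything passes in this sketch

def generate(prompt: str) -> str:
    """Stub for the core Gemma 3 model."""
    return f"(model reply to: {prompt!r})"

def guarded_generate(prompt: str) -> str:
    """Filter the input, generate, then filter the output."""
    if safety_score(prompt) >= UNSAFE_THRESHOLD:
        return "Request declined by input filter."
    reply = generate(prompt)
    if safety_score(reply) >= UNSAFE_THRESHOLD:
        return "Response withheld by output filter."
    return reply

print(guarded_generate("Summarize this quarterly report."))
```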