Gemma

Google DeepMind's family of open-weight foundation models — derived from the same research as Gemini, available in sizes from 2B to 27B for local and cloud deployment

Free to download and run; available via Google AI Studio, Vertex AI, and Hugging Face

Overview

Gemma is Google's family of lightweight, open-weight language models built on the same research and technology as Gemini. Designed to run efficiently on consumer hardware — including laptops, phones, and edge devices — Gemma models are popular for local AI applications, fine-tuning experiments, and privacy-sensitive deployments.

Key Features

  • Open weights licensed for research and commercial use under the Gemma Terms of Use
  • Model sizes from 2B to 27B for different hardware capability levels
  • Gemma 3: latest iteration with improved instruction following and reasoning
  • ShieldGemma: safety-focused variant for content moderation
  • CodeGemma: fine-tuned for code completion and generation tasks
  • Optimized for efficient inference on CPUs, GPUs, and mobile devices
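As a concrete illustration of local use, here is a minimal sketch of loading an instruction-tuned Gemma checkpoint with Hugging Face `transformers`. The model id `google/gemma-2-2b-it` is one published checkpoint; the prompt formatter mirrors Gemma's documented chat-turn delimiters, which `tokenizer.apply_chat_template()` also produces.

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn delimiters.

    Instruction-tuned Gemma checkpoints expect this turn structure.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate(prompt: str, model_id: str = "google/gemma-2-2b-it") -> str:
    """Run a single text-generation call against a local Gemma model.

    Note: downloads several GB of weights on first run and requires
    accepting the Gemma license on Hugging Face.
    """
    from transformers import pipeline

    pipe = pipeline("text-generation", model=model_id)
    out = pipe(format_gemma_prompt(prompt), max_new_tokens=128)
    return out[0]["generated_text"]


# Example usage (heavy; requires the weights to be downloaded):
#   print(generate("Explain what an open-weight model is."))
```

On constrained hardware, quantized builds (e.g. via llama.cpp or Ollama) trade some quality for a much smaller memory footprint.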

Pricing: Free to download and use; run locally or serve via Google Cloud Vertex AI.

Pros

  • Derived from Gemini research — one of the strongest open model foundations available
  • Multiple specialized variants, including CodeGemma (code), PaliGemma (vision-language), ShieldGemma (safety), and instruction-tuned releases
  • Runs on consumer hardware — deployable locally without cloud infrastructure
  • Commercial use permitted under the Gemma Terms of Use (subject to its prohibited-use policy)

Cons

  • Significant capability gap compared to closed Gemini Pro and Ultra
  • Fine-tuning and local deployment require ML infrastructure knowledge
  • Llama 3 has a larger community and more fine-tuned variants available

Tags

open-source, google, fine-tuning, efficient, gemini-derived, research, open-weight, on-device, code, multimodal
