Gemini CLI
What does a million-token context window make possible that 200k doesn't?
Repo-scale intelligence. Gemini CLI excels at broad analysis — auditing entire codebases, migrating legacy systems, debugging across file boundaries — where context window size is the bottleneck.
When Gemini, When Claude
Choice of tool is a choice of mindset and unit economics.
| Dimension | Gemini CLI | Claude Code |
|---|---|---|
| Context Window | 1M+ tokens (repo-scale) | ~200k tokens (task-scale) |
| Cost | Free tier (1k req/day) | Subscription (Pro/Team) |
| Agentic Drive | Follows the prompt (predictable) | Proactive (has opinions) |
| Native Tools | Google Search, multimodal | Git integration, test runner |
| Best For | Broad audits, migration, UI debug | Feature building, deep reasoning |
Decision: Use both. Gemini for repo-wide analysis and free-tier exploration. Claude for feature implementation and deep reasoning. The config architecture means both read from the same .ai/ source.
Our Config
Gemini CLI reads GEMINI.md at the project root and searches upward from cwd to .git root for additional GEMINI.md files.
GEMINI.md → @-imports .ai/rules/* (same pattern as CLAUDE.md)
~/.gemini/GEMINI.md → global persona and preferences
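A minimal global file might look like this (contents illustrative, not the actual file):

```markdown
<!-- ~/.gemini/GEMINI.md — global persona and preferences -->
You are a terse senior engineer. Prefer concise answers.
Always use TypeScript in examples.
```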
GEMINI.md Structure
Gemini was the first agent configured with @-imports to .ai/. This became the template for all agents.
```markdown
# GEMINI.md
[Orientation table — same 5 questions as CLAUDE.md]

## Global Instructions
@.ai/rules/AI.md
@.ai/rules/content-quality.md
@.ai/rules/content-standards.md
@.ai/rules/decision-transparency.md
@.ai/rules/design-checklist.md
@.ai/rules/design-verification.md
@.ai/rules/fact-and-star-architecture.md
@.ai/rules/git-workflow.md
@.ai/rules/matter-first-pages.md
@.ai/rules/mdx-patterns.md
@.ai/rules/page-flow.md
@.ai/rules/skill-execution.md
@.ai/rules/src-pages-gates.md

[Route table — points to .ai/ directories]
```
Decision: Every .ai/rules/*.md file is @-imported explicitly. Gemini inlines these at prompt assembly time. Same rules, same enforcement frame, different agent.
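The inlining behavior can be approximated with a short script. A minimal sketch, assuming import lines begin with `@` followed by a relative path; the demo files and the `/tmp/gr_demo` directory are hypothetical, not this repo's actual contents:

```shell
#!/bin/sh
# Sketch of Gemini-style @-import inlining at prompt assembly time.
# All paths and file contents below are illustrative.
set -eu
mkdir -p /tmp/gr_demo/.ai/rules
cd /tmp/gr_demo
printf 'Rule: state decisions explicitly.\n' > .ai/rules/AI.md
printf '# GEMINI.md\n@.ai/rules/AI.md\n' > GEMINI.md

# Replace each @-import line with the referenced file's contents.
while IFS= read -r line; do
  case "$line" in
    @*) cat "${line#@}" ;;
    *)  printf '%s\n' "$line" ;;
  esac
done < GEMINI.md > assembled.md

cat assembled.md
```

The assembled prompt contains the rule text inline, which is why editing a single `.ai/rules/*.md` file changes behavior for every agent that imports it.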
What Gemini Lacks
Gemini CLI has no equivalent of Claude Code's hooks system. Rules are passive only — no automated PostToolUse validation, no PreToolUse blocking.
| Claude Has | Gemini Equivalent | Gap |
|---|---|---|
| PostToolUse hooks | None | Content validation is manual |
| PreToolUse build blocker | None | Must add constraint to GEMINI.md text |
| /commands routing | .gemini/commands/*.toml | Different format, not yet configured |
| Auto-memory (MEMORY.md) | /memory add (manual) | Must explicitly save learnings |
| Skills with gates | None | Reference .agents/skills/ by path |
Decision: Accept the gap. Gemini's value is repo-scale analysis, not content editing. The hooks matter most for /docs/ and /meta/ editing — Claude's primary task. When using Gemini for content work, reference .agents/skills/ manually.
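Closing the `/commands` gap would mean adding TOML files under `.gemini/commands/`. A hedged sketch of what one might look like — the command name and prompt text here are hypothetical; Gemini CLI's custom-command format uses `description` and `prompt` fields, with `{{args}}` interpolating the user's arguments:

```toml
# .gemini/commands/audit.toml — would define an /audit slash command (illustrative)
description = "Repo-wide audit against project rules"
prompt = """
Audit the codebase against the rules in .ai/rules/ and the skills in .agents/skills/.
Focus area: {{args}}
"""
```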
Hierarchical Context
Gemini searches for GEMINI.md from cwd upward. Use this for domain-specific context.
| Level | Purpose | Example |
|---|---|---|
| Global (~/.gemini/) | Persona, generic preferences | "Always use TypeScript", "Prefer concise answers" |
| Project (./) | Architecture, stack, rules | The @-imports to .ai/rules/ |
| Domain (./src/sub/) | Specific business logic | Payment rules, API constraints |
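Put together, the three levels might look like this on disk (paths illustrative). When run from the deepest directory, Gemini merges the files it finds, with more specific context layered on top of the general:

```text
~/.gemini/GEMINI.md              # global: persona, generic preferences
repo/GEMINI.md                   # project: @-imports to .ai/rules/*
repo/src/payments/GEMINI.md      # domain: payment rules, API constraints
```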
Memory Loop
Ephemeral context for session-specific state.
| Command | When |
|---|---|
| /memory add <fact> | Capture session-specific state (ports, keys, branch) |
| /memory refresh | After manual file edits or branch switches |
| /memory show | Audit the prompt before committing to long tasks |
Decision: Gemini's memory is manual, not automatic. For persistent knowledge, edit .ai/ files directly. /memory add is for session-local facts that shouldn't persist.
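A typical session loop, sketched inside the Gemini CLI prompt (the facts being saved are illustrative):

```text
> /memory add "dev server runs on port 4321 this session"
> /memory add "working branch: feat/gemini-config"
# ...after editing files outside the CLI:
> /memory refresh
# before committing to a long task, audit what the prompt contains:
> /memory show
```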
Antigravity
Google's agent orchestration layer for Gemini CLI.
```sh
sudo apt update
sudo apt upgrade antigravity
```
Context
- Config Architecture — Agent-agnostic setup, decision log
- Claude Code — The deep-reasoning complement
- AI Products — Higher-level product strategy
- Data Flow — Fuel for prediction models
- Clean Architecture — Structure AI can navigate