
# Backends

yaku supports four translation backends. Each sends your text to a different LLM provider.

| Backend | Default model | Streaming | Notes |
| --- | --- | --- | --- |
| `hosted` | Server-side | No | Zero-config default. Refined per-language prompts. |
| `gemini` | `gemini-2.5-flash` | Yes | Free API key from Google AI Studio. |
| `openai` | `gpt-4o-mini` | Yes | Works with any OpenAI-compatible API (Groq, Together, DeepSeek, Ollama). |
| `anthropic` | `claude-haiku-4-5-20251001` | Yes | Via the Anthropic SDK. |

All backends use a temperature of 0.3 for consistent, low-variance translations. With `--verbose`, yaku reports the actual model version used by the API (which may differ from the alias you specified).
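To check which concrete model version served a request, add `--verbose` to any invocation (the exact diagnostic format depends on the backend):

```shell
# Prints diagnostics, including the resolved model version,
# alongside the translation
yaku --verbose --to en "Bonjour"
```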

The backend is resolved in this order:

  1. The `--backend` flag: always wins if specified.
  2. The `backend` config field: used if no flag is passed.
  3. The default: `hosted`. Having an API key does not change the default; you must explicitly choose a local backend.
```sh
# No config at all → uses hosted (the default)
yaku --to en "Bonjour"

# API key set, but no backend configured → still uses hosted
yaku config set api-key AIza...
yaku --to en "Bonjour"

# Explicitly select a local backend to use your API key
yaku config set backend gemini
yaku --to en "Bonjour"

# Override per-command with the --backend flag
yaku --backend openai --to en "Bonjour"
```

## hosted

The default backend. Uses refined, per-language prompts with higher translation quality than the local backends. See Hosted Service & Plans for a comparison, plan tiers, and quota limits.

```sh
# Use the hosted backend (always the default)
yaku --to en "こんにちは"
```

Customize the hosted endpoint:

```sh
yaku config set hosted-url https://api.staging.yakulang.com
```

## gemini

Uses Google’s Gemini API. Get a free API key from Google AI Studio. yaku disables Gemini’s thinking mode to reduce latency and token cost; translation is instruction following, not a reasoning task.

```sh
# Configure
yaku config set api-key YOUR_GEMINI_API_KEY

# Or use an environment variable
export GOOGLE_API_KEY=YOUR_GEMINI_API_KEY
```

Override the model:

```sh
yaku --model gemini-2.5-pro --to en "Bonjour"
```

## openai

Works with OpenAI’s API and any OpenAI-compatible provider.

```sh
# OpenAI
yaku config set backend openai
yaku config set api-key YOUR_OPENAI_API_KEY

# Or use an environment variable
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY

yaku --backend openai --to en "Bonjour"
```

Use `--api-base` to point to any compatible endpoint:

```sh
# Groq
yaku --backend openai \
  --api-base https://api.groq.com/openai/v1 \
  --model llama-3.3-70b-versatile \
  --to en "Bonjour"

# Together.ai
yaku --backend openai \
  --api-base https://api.together.xyz/v1 \
  --model meta-llama/Llama-3-70b-chat-hf \
  --to en "Bonjour"

# Local Ollama
yaku --backend openai \
  --api-base http://localhost:11434/v1 \
  --model llama3 \
  --to en "Bonjour"

# DeepSeek
yaku --backend openai \
  --api-base https://api.deepseek.com/v1 \
  --model deepseek-chat \
  --to en "Bonjour"
```

Save as defaults so you don’t repeat flags:

```sh
yaku config set backend openai
yaku config set api-base https://api.groq.com/openai/v1
yaku config set model llama-3.3-70b-versatile
yaku config set api-key YOUR_GROQ_API_KEY
```
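With those defaults saved, a bare invocation routes through Groq with no extra flags (assuming the key is valid):

```shell
# backend, api-base, model, and api-key are all read from config
yaku --to en "Bonjour"
```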

## anthropic

Uses Claude models via the Anthropic SDK.

```sh
yaku config set backend anthropic
yaku config set api-key YOUR_ANTHROPIC_API_KEY

# Or use an environment variable
export ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY

yaku --backend anthropic --to en "Bonjour"
```

Override the model:

```sh
yaku --backend anthropic --model claude-sonnet-4-5-20250929 --to en "Bonjour"
```

## Switching backends

You can switch backends per command without changing your config:

```sh
# Config says gemini, but use openai for this one
yaku --backend openai --to en "Bonjour"

# Compare outputs from different backends
echo "Bonjour" | yaku --backend gemini --to en
echo "Bonjour" | yaku --backend openai --to en
echo "Bonjour" | yaku --backend anthropic --to en
echo "Bonjour" | yaku --backend hosted --to en
```
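The four comparison commands above can be compressed into a loop over every backend:

```shell
# Translate the same input with each backend for a side-by-side comparison
for backend in hosted gemini openai anthropic; do
  printf '%s: ' "$backend"
  echo "Bonjour" | yaku --backend "$backend" --to en
done
```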