notewise routes all LLM calls through LiteLLM, which means any model LiteLLM supports can be used with the --model flag or the DEFAULT_MODEL config key.

Model string format

provider/model-name
gemini/gemini-2.5-flash
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama3-70b-8192
Some models can be specified without a provider prefix when the name is unambiguous:
gpt-4o                        # routes to OpenAI
claude-3-5-sonnet-20241022    # routes to Anthropic
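The split on the first / is all that distinguishes the two forms. As a hypothetical illustration (split_model_string is not a real notewise function), the parsing looks like this:

```python
def split_model_string(model: str):
    """Split "provider/model-name" into (provider, name).

    Bare model names return (None, name); those are resolved by
    name-prefix heuristics instead (see the routing table below).
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return None, model

print(split_model_string("gemini/gemini-2.5-flash"))  # -> ('gemini', 'gemini-2.5-flash')
print(split_model_string("gpt-4o"))                   # -> (None, 'gpt-4o')
```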

Supported providers

Google Gemini

Config key: GEMINI_API_KEY
Model: gemini/gemini-2.5-flash
Free tier via AI Studio. This is the default provider; no billing is required to start.

OpenAI

Config key: OPENAI_API_KEY
Model: openai/gpt-4o
Also supports the o1, o3, and o4 series models.

Anthropic

Config key: ANTHROPIC_API_KEY
Model: anthropic/claude-3-5-sonnet-20241022
The full Claude 3 family is supported.

Groq

Config key: GROQ_API_KEY
Model: groq/llama3-70b-8192
Free tier available. Extremely fast inference.

xAI

Config key: XAI_API_KEY
Model: xai/grok-2

Mistral

Config key: MISTRAL_API_KEY
Model: mistral/mistral-large-latest

Cohere

Config key: COHERE_API_KEY
Model: command-r-plus

DeepSeek

Config key: DEEPSEEK_API_KEY
Model: deepseek/deepseek-chat

Provider routing table

When you specify a model string, notewise determines which API key is needed via AppSettings.get_api_key_name_for_model():
  1. If the string contains /, the prefix before the slash is the provider.
  2. Otherwise, heuristics based on model name prefixes are used (gpt → OpenAI, claude → Anthropic).
  3. The resolved API key env var is checked. If the key is missing, the pipeline logs an error and the video fails.
Model prefix / provider slug    Required env var
gemini/, vertex/, vertex_ai/    GEMINI_API_KEY
openai/, gpt, o1, o3, o4        OPENAI_API_KEY
anthropic/, claude              ANTHROPIC_API_KEY
groq/                           GROQ_API_KEY
xai/, grok                      XAI_API_KEY
mistral/                        MISTRAL_API_KEY
cohere/, command                COHERE_API_KEY
deepseek/                       DEEPSEEK_API_KEY
Unsupported gateway prefixes (azure, openrouter, vercel_ai_gateway) return None for the API key; these gateways are not natively supported, and their credentials must be configured via environment variables separately.
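The resolution steps above can be sketched as follows. The mapping mirrors the routing table, but the function body is an illustration under stated assumptions, not the actual source of AppSettings.get_api_key_name_for_model():

```python
# Illustrative sketch of the routing logic; names and structure are assumptions.
PROVIDER_KEYS = {
    "gemini": "GEMINI_API_KEY", "vertex": "GEMINI_API_KEY", "vertex_ai": "GEMINI_API_KEY",
    "openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY", "xai": "XAI_API_KEY", "mistral": "MISTRAL_API_KEY",
    "cohere": "COHERE_API_KEY", "deepseek": "DEEPSEEK_API_KEY",
}
NAME_PREFIXES = [
    ("gpt", "OPENAI_API_KEY"), ("o1", "OPENAI_API_KEY"),
    ("o3", "OPENAI_API_KEY"), ("o4", "OPENAI_API_KEY"),
    ("claude", "ANTHROPIC_API_KEY"), ("grok", "XAI_API_KEY"),
    ("command", "COHERE_API_KEY"),
]
UNSUPPORTED_GATEWAYS = {"azure", "openrouter", "vercel_ai_gateway"}

def api_key_name_for_model(model: str):
    # Step 1: a slash means the prefix names the provider.
    if "/" in model:
        provider = model.split("/", 1)[0]
        if provider in UNSUPPORTED_GATEWAYS:
            return None  # gateway creds are configured separately
        return PROVIDER_KEYS.get(provider)
    # Step 2: bare names fall back to name-prefix heuristics.
    for prefix, key in NAME_PREFIXES:
        if model.startswith(prefix):
            return key
    return None

print(api_key_name_for_model("gpt-4o"))       # -> OPENAI_API_KEY
print(api_key_name_for_model("azure/gpt-4o")) # -> None
```

Step 3 (checking that the resolved env var is actually set, and failing the video with a logged error if not) happens in the pipeline itself.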

Changing the default model

Edit ~/.notewise/config.env:
DEFAULT_MODEL=gpt-4o
OPENAI_API_KEY=sk-...
Or override for a single run:
notewise process "URL" --model gpt-4o

Usage tracking

After each video, notewise records token usage and estimated cost in SQLite.
notewise stats                                       # all-time totals
notewise stats --model gemini/gemini-2.5-flash       # filter by model
notewise stats --since 7d                            # last 7 days
Cost estimates come from LiteLLM’s completion_cost() and are approximate.
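To illustrate the kind of record kept per video, here is a minimal sketch; the actual notewise database schema is not documented here, so the table layout and column names below are assumptions:

```python
import sqlite3

# Illustrative schema only; the real notewise layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE usage (
        ts TEXT,                  -- when the video was processed
        model TEXT,               -- e.g. 'gemini/gemini-2.5-flash'
        prompt_tokens INTEGER,
        completion_tokens INTEGER,
        cost_usd REAL             -- estimate from LiteLLM's completion_cost()
    )"""
)
conn.execute(
    "INSERT INTO usage VALUES (datetime('now'), 'gemini/gemini-2.5-flash', 1200, 350, 0.0004)"
)
# Aggregations like `notewise stats` reduce to simple SUM queries.
tokens, cost = conn.execute(
    "SELECT SUM(prompt_tokens + completion_tokens), SUM(cost_usd) FROM usage"
).fetchone()
print(tokens)  # -> 1550
```

Filters such as --model and --since would then map onto WHERE clauses over the model and ts columns.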