# Configuration

`llm-usage-metrics` is configured through CLI flags and environment variables.

Directory-backed sources:

```shell
llm-usage daily --pi-dir /path/to/pi --codex-dir /path/to/codex --gemini-dir /path/to/.gemini --droid-dir /path/to/.factory/sessions
```

Generic directory override syntax:

```shell
llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini --source-dir droid=/path/to/.factory/sessions
```

OpenCode uses a dedicated DB override flag:

```shell
llm-usage daily --opencode-db /path/to/opencode.db
```

Notes:

- `--source-dir` supports directory-backed sources (`pi`, `codex`, `gemini`, `droid`).
- `--source-dir opencode=...` is rejected; use `--opencode-db` instead.
- OpenCode path precedence is:
  1. `--opencode-db`
  2. deterministic OS-specific default DB candidates
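The precedence above amounts to a small resolver: an explicit flag always wins, otherwise the first existing default candidate is used. A minimal illustrative sketch in Python (not the tool's actual implementation; candidate paths are supplied by the caller here because the real OS-specific defaults are not listed in this document):

```python
from pathlib import Path

def resolve_opencode_db(cli_override=None, default_candidates=()):
    """Illustrative sketch of the documented precedence, not the real code."""
    # 1. An explicit --opencode-db value always wins.
    if cli_override:
        return Path(cli_override)
    # 2. Otherwise, fall back to the first existing default candidate.
    for candidate in default_candidates:
        path = Path(candidate).expanduser()
        if path.exists():
            return path
    return None
```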
Combining `--source` selection with source overrides:

```shell
llm-usage monthly --source pi
llm-usage monthly --source pi,codex
llm-usage monthly --source droid --droid-dir /path/to/.factory/sessions
llm-usage monthly --source opencode --opencode-db /path/to/opencode.db
```

See Caching for details on cache-layer behavior. The tables below explain what each variable does and why you would tune it.

| Variable | Default | What it controls | Why/when to change it |
| --- | --- | --- | --- |
| `LLM_USAGE_SKIP_UPDATE_CHECK` | unset (false) | Skips the startup npm update check when truthy (`1`, `true`, `yes`, `on`) | Set in CI/non-interactive pipelines to avoid extra startup work |
| `LLM_USAGE_UPDATE_CACHE_SCOPE` | `global` | Update-check cache scope (`global` or `session`) | Use `session` to isolate the cache by terminal/session |
| `LLM_USAGE_UPDATE_CACHE_SESSION_KEY` | unset | Session cache key used when the scope is `session` | Set an explicit stable key for terminal tabs/workspaces |
| `LLM_USAGE_UPDATE_CACHE_TTL_MS` | `3600000` (1h) | How long the update-check cache is considered fresh | Lower for more frequent checks, higher for less network traffic |
| `LLM_USAGE_UPDATE_FETCH_TIMEOUT_MS` | `1000` | HTTP timeout for the update-check request | Increase on slow networks to reduce timeout-based misses |
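The truthy values accepted by `LLM_USAGE_SKIP_UPDATE_CHECK` (`1`, `true`, `yes`, `on`) can be modeled with a short sketch; this is illustrative Python, not the tool's source, and the case-insensitive comparison is an assumption:

```python
TRUTHY = {"1", "true", "yes", "on"}

def skip_update_check(env):
    """Sketch of the documented truthiness rule (case handling assumed)."""
    # Unset or any non-truthy value means the update check still runs.
    value = env.get("LLM_USAGE_SKIP_UPDATE_CHECK", "")
    return value.strip().lower() in TRUTHY
```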
| Variable | Default | What it controls | Why/when to change it |
| --- | --- | --- | --- |
| `LLM_USAGE_PRICING_CACHE_TTL_MS` | `86400000` (24h) | Freshness window for the LiteLLM pricing cache | Lower for fresher pricing, higher for stable/offline-heavy setups |
| `LLM_USAGE_PRICING_FETCH_TIMEOUT_MS` | `4000` | HTTP timeout for pricing fetches | Increase if pricing fetches time out in your environment |
| Variable | Default | What it controls | Why/when to change it |
| --- | --- | --- | --- |
| `LLM_USAGE_PARSE_MAX_PARALLEL` | `8` | Max parallel file parses (1-64) | Increase on fast CPUs/disks, lower on constrained systems |
| `LLM_USAGE_PARSE_CACHE_ENABLED` | `1` (true) | Enables/disables the parse-file cache | Disable for strict cold-run benchmarking/debugging |
| `LLM_USAGE_PARSE_CACHE_TTL_MS` | `604800000` (7d) | Parse-cache TTL for unchanged files | Lower if you want more frequent full reparsing |
| `LLM_USAGE_PARSE_CACHE_MAX_ENTRIES` | `2000` | Upper bound on parse-cache entry count | Raise for very large history sets; lower to cap disk usage |
| `LLM_USAGE_PARSE_CACHE_MAX_BYTES` | `33554432` (32 MiB) | Upper bound on parse-cache file size | Raise when cache churn is high; lower for tighter disk budgets |
| `LLM_USAGE_PROFILE_RUNTIME` | unset (false) | Emits source-pruning, parse-cache, parse-count, and stage-timing diagnostics on stderr | Enable when profiling discovery/parsing/runtime behavior locally |
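The `*_TTL_MS` variables above all express the same rule: a cached artifact is reused while its age, in milliseconds, stays under the TTL. A minimal illustrative sketch (not the tool's actual cache code):

```python
import time

def is_fresh(cached_at_s, ttl_ms, now_s=None):
    """Sketch: an entry is fresh while its age (ms) is within the TTL window."""
    now_s = time.time() if now_s is None else now_s
    return (now_s - cached_at_s) * 1000.0 < ttl_ms
```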

Example:

```shell
LLM_USAGE_PARSE_MAX_PARALLEL=16 LLM_USAGE_PRICING_FETCH_TIMEOUT_MS=8000 llm-usage monthly
```

Benchmarking-focused example (force near-cold parse behavior):

```shell
LLM_USAGE_SKIP_UPDATE_CHECK=1 \
LLM_USAGE_PARSE_CACHE_ENABLED=0 \
llm-usage monthly --provider openai --json
```