
Optimize

optimize replays your observed token mix against one or more candidate model prices.

llm-usage optimize <daily|weekly|monthly> --candidate-model <model> [options]
# Compare two OpenAI candidates for monthly usage
llm-usage optimize monthly \
  --provider openai \
  --candidate-model gpt-4.1 \
  --candidate-model gpt-5-codex

# Keep only the cheapest candidate in output
llm-usage optimize weekly \
  --provider openai \
  --candidate-model gpt-4.1,gpt-5-codex \
  --top 1 \
  --json

# Generate a monthly share artifact
llm-usage optimize monthly \
  --provider openai \
  --candidate-model gpt-4.1 \
  --candidate-model gpt-5-codex \
  --share
  • Uses the same source discovery and filtering pipeline as usage reports.
  • Requires at least one --candidate-model.
  • Enforces a single provider context after filtering.
  • If multiple providers remain, narrow with --provider.
  • --provider filters by billing provider; matching is substring-based, consistent with existing usage commands.
  • Provider aliases are normalized to billing roots (for example, openai-codex is treated as openai).
  • Candidate ranking is deterministic, based on total hypothetical cost over the full period.
  • If a candidate is missing pricing for any period with non-zero tokens, its cost is marked incomplete.
  • If baseline cost is incomplete, savings fields are omitted.
  • If all billable token buckets are zero but baseline has cost, savings are omitted and a warning is emitted to stderr.
  • --json emits optimize rows only.
  • --markdown emits a Markdown table.
  • Default terminal output renders a titled table.
  • --share is monthly-only and writes optimize-monthly-share.svg to the current working directory.
  • Diagnostics and warnings always go to stderr.
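The ranking and incomplete-pricing rules above can be sketched in Python. This is an illustrative model only: the token buckets, price tables, and tie-break rule here are assumptions, not the tool's actual schema or prices.

```python
# Hypothetical per-bucket token counts observed over the period (assumption).
usage = {"input": 1_200_000, "output": 300_000}

# Hypothetical candidate price tables in USD per token (assumption).
# None marks missing pricing for a bucket.
candidates = {
    "gpt-4.1": {"input": 2e-6, "output": 8e-6},
    "gpt-5-codex": {"input": 1.25e-6, "output": None},  # output price unknown
}

def hypothetical_cost(prices):
    """Return (cost, complete): full-period hypothetical cost, and whether
    every non-zero token bucket had a price (the 'incomplete' rule)."""
    cost, complete = 0.0, True
    for bucket, tokens in usage.items():
        price = prices.get(bucket)
        if price is None:
            if tokens > 0:
                complete = False  # missing pricing on a non-zero bucket
            continue
        cost += tokens * price
    return cost, complete

# Deterministic ranking: sort by full-period cost, then by name to break ties
# (the real tie-break rule is an assumption here).
rows = sorted(
    ((name, *hypothetical_cost(prices)) for name, prices in candidates.items()),
    key=lambda r: (r[1], r[0]),
)
for name, cost, complete in rows:
    print(name, round(cost, 2), "complete" if complete else "incomplete")
```

With these made-up prices, the incomplete candidate can still rank first on the cost it does have, which is why the incomplete flag must be surfaced alongside the number rather than silently excluding the candidate.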