# Optimize

`optimize` replays your observed token mix against one or more candidate model prices.
```shell
llm-usage optimize <daily|weekly|monthly> --candidate-model <model> [options]
```

## Examples

```shell
# Compare two OpenAI candidates for monthly usage
llm-usage optimize monthly \
  --provider openai \
  --candidate-model gpt-4.1 \
  --candidate-model gpt-5-codex

# Keep only the cheapest candidate in output
llm-usage optimize weekly \
  --provider openai \
  --candidate-model gpt-4.1,gpt-5-codex \
  --top 1 \
  --json

# Generate a monthly share artifact
llm-usage optimize monthly \
  --provider openai \
  --candidate-model gpt-4.1 \
  --candidate-model gpt-5-codex \
  --share
```

## Command behavior
- Uses the same source discovery and filtering pipeline as usage reports.
- Requires at least one `--candidate-model`.
- Enforces a single provider context after filtering. If multiple providers remain, narrow with `--provider`.
- `--provider` is a billing-provider filter and matching stays substring-based, consistent with existing usage commands.
- Provider aliases are normalized to billing roots (for example, `openai-codex` is treated as `openai`).
- Candidate ranking is deterministic and based on the ALL-period hypothetical cost.
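The ranking rule above can be sketched as follows. This is an illustrative model only, not the tool's internal code: the token-bucket names, per-million pricing shape, and tie-break on candidate name are assumptions.

```python
# Hypothetical sketch of deterministic candidate ranking: replay the
# observed token mix against each candidate's prices and sort by the
# resulting hypothetical cost. Field names are illustrative.
def hypothetical_cost(tokens_by_bucket: dict, price_per_million: dict) -> float:
    """Cost of the observed token mix at a candidate's prices (USD per 1M tokens)."""
    return sum(
        tokens_by_bucket.get(bucket, 0) * rate / 1_000_000
        for bucket, rate in price_per_million.items()
    )

def rank_candidates(tokens_by_bucket: dict, candidates: dict) -> list:
    """Cheapest first; ties broken on model name so ordering is deterministic."""
    scored = [
        (hypothetical_cost(tokens_by_bucket, prices), name)
        for name, prices in candidates.items()
    ]
    return [name for _, name in sorted(scored)]
```

A `--top 1` style result would then just be the first element of the ranked list.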
## Missing pricing and incomplete baselines

- Missing candidate pricing on non-zero token periods marks the candidate's cost as incomplete.
- If the baseline cost is incomplete, savings fields are omitted.
- If all billable token buckets are zero but the baseline has cost, savings are omitted and a warning is emitted to stderr.
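The incompleteness rules can be sketched like this. The function and field names are hypothetical, chosen only to make the rules concrete; they are not the tool's actual API.

```python
# Hypothetical sketch: a missing price on a bucket with non-zero tokens
# marks the candidate cost incomplete; savings are omitted whenever
# either side of the comparison is incomplete.
def candidate_cost(tokens_by_bucket: dict, price_per_million: dict):
    """Return (cost, incomplete) for one candidate."""
    cost, incomplete = 0.0, False
    for bucket, tokens in tokens_by_bucket.items():
        rate = price_per_million.get(bucket)
        if rate is None:
            if tokens > 0:          # missing pricing only matters on non-zero buckets
                incomplete = True
            continue
        cost += tokens * rate / 1_000_000
    return cost, incomplete

def savings(baseline_cost, baseline_incomplete, cand_cost, cand_incomplete):
    """Savings are omitted (None) when either cost is incomplete."""
    if baseline_incomplete or cand_incomplete:
        return None
    return baseline_cost - cand_cost
```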
## Output formats

- `--json` emits optimize rows only.
- `--markdown` emits a Markdown table.
- Default terminal output renders a titled table.
- `--share` is monthly-only and writes `optimize-monthly-share.svg` to the current working directory.
- Diagnostics and warnings always go to stderr.
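The stdout/stderr separation described above follows the usual pattern of keeping machine-readable rows on stdout and diagnostics on stderr, so `--json` output stays pipeable. A minimal sketch of that pattern (the `emit` helper is hypothetical, not part of the tool):

```python
import json
import sys

def emit(rows, warnings):
    # Hypothetical sketch: JSON rows go to stdout, warnings to stderr,
    # so `llm-usage optimize ... --json | jq` never sees diagnostics.
    for w in warnings:
        print(f"warning: {w}", file=sys.stderr)
    print(json.dumps(rows))
```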