Overview
Know what your AI coding costs.
Local-first usage metrics for every AI coding agent. Token counts, real pricing, Git‑correlated ROI — sub‑second reports.
$ npm i -g llm-usage-metrics
$ llm-usage daily
Period Source Models Input Output Total Cost
────────── ─────── ────────────────── ─────── ─────── ─────── ──────
2026-03-02 pi • claude-sonnet-4 142,319 38,104 180,423 $1.57
2026-03-02 codex • claude-sonnet-4 98,712 21,401 120,113 $1.02
2026-03-02 gemini • gemini-2.5-pro 67,241 15,832 83,073 $0.44
2026-03-02 codex • o3 31,049 8,198 39,247 $0.41
────────── ─────── ────────────────── ─────── ─────── ─────── ──────
ALL TOTAL          4 models           339,321  83,535 422,856  $3.44

Blazing fast.
Benchmarked against ccusage on real production data.
4.6× faster cold start (3.6s vs 16.8s)
22× faster with cache (0.7s vs 17.0s)
Sub-second cached reports (<1s)
Four commands. Full visibility.
Usage rollups, Git-correlated efficiency, pricing optimization, and daily trend visibility.
Usage reports
Aggregate token counts and costs across all agents. Daily, weekly, or monthly.
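As a rough sketch of the per-row cost math behind these reports, each model's tokens are priced per million, with separate input and output rates. The rates below are illustrative assumptions for this example, not the tool's bundled pricing table:

```python
# Assumed per-million-token prices (illustrative only, not real pricing data).
PRICES_PER_MTOK = {
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
}

def line_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost for one usage row: separate rates for input and output tokens."""
    p = PRICES_PER_MTOK[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

print(round(line_cost("claude-sonnet-4", 100_000, 20_000), 2))  # 0.6
```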
llm-usage monthly \
  --provider openai

Efficiency
Correlate LLM spend with Git outcomes. $/commit, $/1k lines, tokens per commit.
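The metrics named above are simple ratios of spend to Git activity. A minimal sketch, assuming the commit and line counts come from Git (e.g. `git rev-list --count HEAD` and a diff-stat over the same period); the numbers here are hard-coded for illustration:

```python
def efficiency(cost_usd: float, commits: int, lines_changed: int, tokens: int) -> dict:
    """Ratio metrics: $/commit, $/1k lines changed, tokens per commit."""
    return {
        "usd_per_commit": cost_usd / commits,
        "usd_per_1k_lines": cost_usd / (lines_changed / 1000),
        "tokens_per_commit": tokens / commits,
    }

# Sample period: $3.44 of spend, 8 commits, 2,150 lines changed (made-up figures).
m = efficiency(cost_usd=3.44, commits=8, lines_changed=2150, tokens=422_856)
print(round(m["usd_per_commit"], 2))  # 0.43
```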
llm-usage efficiency monthly \
  --repo-dir .

Trends
Track daily cost or token movement over time. Combined view or source-by-source.
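Conceptually, a trends rollup is just a sum of cost (or tokens) keyed by day, or by day and source for the per-source view. A sketch with made-up sample rows, not real session data:

```python
from collections import defaultdict

# Made-up sample records standing in for parsed session data.
records = [
    {"date": "2026-03-01", "source": "codex", "cost": 0.80},
    {"date": "2026-03-02", "source": "pi",    "cost": 1.57},
    {"date": "2026-03-02", "source": "codex", "cost": 1.02},
]

daily = defaultdict(float)
for r in records:
    daily[r["date"]] += r["cost"]  # combined view
# For a source-by-source view, key on (r["date"], r["source"]) instead.

print(f"{daily['2026-03-02']:.2f}")  # 2.59
```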
llm-usage trends \
  --metric tokens

Optimize
Replay your token mix against candidate models. Find cheaper alternatives instantly.
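The replay idea can be sketched as re-pricing an observed token mix under each candidate model's rates and picking the cheapest. All prices below are assumed per-million-token rates for illustration, not real pricing data:

```python
# Assumed candidate rates (illustrative only).
CANDIDATES = {
    "claude-sonnet-4": {"input": 3.00, "output": 15.00},
    "gpt-4.1":         {"input": 2.00, "output": 8.00},
}

def replay(input_tokens: int, output_tokens: int) -> dict:
    """Re-price the same token mix under every candidate model."""
    return {
        model: (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]
        for model, p in CANDIDATES.items()
    }

costs = replay(339_321, 83_535)  # token totals from the sample daily report
cheapest = min(costs, key=costs.get)
print(cheapest)  # gpt-4.1
```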
llm-usage optimize monthly \
  --candidate-model gpt-4.1

Every agent. Zero config.
Auto-discovers session data from five AI coding tools.