llm-usage-metrics
v0.3.0 on npm
Quantify your local inference pipeline.
Deterministic parsing for your coding-agent sessions. Aggregate token usage, apply
LiteLLM pricing models, and generate actionable reports directly from .pi,
.codex, and OpenCode workflows.
npm install -g llm-usage-metrics
Features
Zero-Config Discovery Engine
Scans your local file system to locate .pi workspaces, .codex log directories, and OpenCode SQLite databases without any manual configuration.
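The discovery step can be pictured as a scan over well-known locations. This is a minimal sketch, not the tool's actual implementation, and the candidate directory names below are illustrative assumptions:

```python
# Hedged sketch of zero-config discovery; directory names are assumptions.
from pathlib import Path

CANDIDATES = {
    "pi": ".pi",                          # .pi workspaces
    "codex": ".codex",                    # .codex log directories
    "opencode": ".local/share/opencode",  # OpenCode data (assumed location)
}

def discover(home: Path) -> dict:
    """Return the sources whose expected directory exists under `home`."""
    return {
        name: home / rel
        for name, rel in CANDIDATES.items()
        if (home / rel).is_dir()
    }
```

Discovery like this is why the common case needs no flags; the --source-dir override exists for non-standard layouts.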
Dynamic Pricing Matrix
Synchronizes with LiteLLM's pricing repository, and supports offline mode via a deterministic local cache for isolated environments.
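The cost arithmetic behind the reports can be sketched as follows. The per-token field names match the shape of LiteLLM's pricing JSON, but the model name and rates below are invented samples, not real prices:

```python
# Minimal sketch of applying LiteLLM-style pricing to aggregated token counts.
# Field names follow LiteLLM's pricing JSON; the rates are invented samples.
pricing = {
    "example-model": {
        "input_cost_per_token": 2.5e-06,
        "output_cost_per_token": 1.0e-05,
    },
}

def session_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one session's token totals under the cached pricing."""
    entry = pricing[model]
    return (
        input_tokens * entry["input_cost_per_token"]
        + output_tokens * entry["output_cost_per_token"]
    )

print(f"${session_cost('example-model', 10_000, 2_000):.4f}")
```

Caching this table locally is what makes the offline mode deterministic: the same cached rates yield the same report every run.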
Structured Outputs
Emit formatted JSON for programmatic pipeline ingestion, raw Markdown for documentation, or deterministic ANSI columns for the terminal.
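For pipeline ingestion, the JSON output can be consumed with standard tooling. The payload shape below is an illustrative assumption, not the tool's documented schema; inspect the real `--json` output before relying on field names:

```python
# Hedged sketch: consuming `llm-usage daily --json` in a downstream script.
# The payload shape is assumed for illustration only.
import json

sample = '{"period": "daily", "totals": {"input_tokens": 1200, "output_tokens": 300}}'
report = json.loads(sample)  # in practice: report = json.load(sys.stdin)
totals = report["totals"]
total_tokens = totals["input_tokens"] + totals["output_tokens"]
print(f'{report["period"]}: {total_tokens} tokens')
```

A script like this would typically sit at the end of a pipe: `llm-usage daily --json | python3 ingest.py`.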
CLI Reference
Deterministic flags for time windows, source filters, and output formats.
# Evaluate standard rolling daily window
$ llm-usage daily

# Evaluate trailing week, enforcing localization bounds
$ llm-usage weekly --timezone Europe/Paris

# Strict arbitrary bounded period
$ llm-usage monthly --since 2026-01-01 --until 2026-01-31
# Isolate telemetry strictly to selected engines
$ llm-usage monthly --source pi,codex

# Filter downstream pipelines by model family string
$ llm-usage monthly --model claude

# Filter by upstream provider origin
$ llm-usage monthly --provider openai
# Target stdout with unformatted JSON payload
$ llm-usage daily --json

# Target stdout with Markdown tables
$ llm-usage daily --markdown

# Default terminal columns with ANSI formatting
$ llm-usage daily
# Override source directories for pi and codex
$ llm-usage daily --source-dir pi=/var/lib/pi --source-dir codex=/var/log/codex

# Use explicit OpenCode SQLite database path
$ llm-usage daily --source opencode --opencode-db /mnt/data/opencode.db

# Use cached pricing only (offline mode)
$ llm-usage monthly --pricing-offline