
llm-usage-metrics

v0.3.0 on npm

Quantify your local inference pipeline.

Deterministic parsing for your coding-agent sessions. Aggregate token usage, apply LiteLLM pricing, and generate reports from .pi, .codex, and OpenCode session data.

npm install -g llm-usage-metrics
[Screenshot: terminal output from llm-usage-metrics showing token usage and cost breakdown across pi, codex, and opencode sources]

Features

Zero-Config Discovery Engine

Scans your local file system to locate .pi workspaces, .codex log directories, and the OpenCode database, with no manual configuration required.
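
A minimal sketch of what that discovery step could look like. The candidate locations (~/.pi, ~/.codex/sessions, and an OpenCode SQLite file under ~/.local/share) are illustrative assumptions, not the tool's documented paths:

import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

interface DiscoveredSource {
  kind: "pi" | "codex" | "opencode";
  path: string;
}

function discoverSources(): DiscoveredSource[] {
  const home = homedir();
  // Hypothetical candidate locations, checked in order.
  const candidates: DiscoveredSource[] = [
    { kind: "pi", path: join(home, ".pi") },
    { kind: "codex", path: join(home, ".codex", "sessions") },
    { kind: "opencode", path: join(home, ".local", "share", "opencode", "opencode.db") },
  ];
  // Keep only the sources that actually exist on this machine.
  return candidates.filter((c) => existsSync(c.path));
}

console.log(discoverSources());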

Dynamic Pricing Matrix

Synchronizes with LiteLLM's published model pricing data. A local cache keeps pricing available offline in isolated environments.
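
A sketch of the fetch-then-cache pattern, assuming the pricing file LiteLLM publishes in its repository; the cache path is a hypothetical placeholder:

import { readFile, writeFile } from "node:fs/promises";

// LiteLLM publishes per-model pricing as a JSON file in its repository.
const PRICING_URL =
  "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json";
const CACHE_PATH = "./pricing-cache.json"; // hypothetical cache location

async function loadPricing(): Promise<Record<string, unknown>> {
  try {
    // Online: fetch the latest pricing table and refresh the cache.
    const res = await fetch(PRICING_URL);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const pricing = await res.json();
    await writeFile(CACHE_PATH, JSON.stringify(pricing));
    return pricing;
  } catch {
    // Offline: fall back to the last cached copy.
    return JSON.parse(await readFile(CACHE_PATH, "utf8"));
  }
}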

Structured Output Formats

Emit JSON for programmatic pipeline ingestion, Markdown for documentation, or aligned ANSI columns for the terminal.
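
As a sketch of how one report row might be rendered in each format; the row shape here is illustrative, not the tool's actual schema:

interface ReportRow {
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

const row: ReportRow = { model: "gpt-4o", inputTokens: 120_000, outputTokens: 8_400, costUsd: 0.384 };

// JSON: one object per row for pipelines.
console.log(JSON.stringify(row));

// Markdown: a table row for documentation.
console.log(`| ${row.model} | ${row.inputTokens} | ${row.outputTokens} | $${row.costUsd.toFixed(4)} |`);

// ANSI columns: fixed-width fields for terminal alignment.
console.log(
  row.model.padEnd(16) +
    String(row.inputTokens).padStart(10) +
    String(row.outputTokens).padStart(10) +
    ("$" + row.costUsd.toFixed(4)).padStart(10),
);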

Reporting Commands

Subcommands and flags for precise control over the reporting window.

# Usage for the current day
$ llm-usage daily

# Trailing week, reported in a specific timezone
$ llm-usage weekly --timezone Europe/Paris

# Arbitrary bounded period
$ llm-usage monthly --since 2026-01-01 --until 2026-01-31
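
Under the hood, per-model cost reduces to tokens multiplied by per-token rates. A hedged sketch, assuming the input_cost_per_token and output_cost_per_token fields from LiteLLM's pricing file; the rates below are illustrative, not real prices:

// Per-token rates, as expressed in LiteLLM's pricing schema.
interface ModelPrice {
  input_cost_per_token: number;
  output_cost_per_token: number;
}

function costUsd(inputTokens: number, outputTokens: number, price: ModelPrice): number {
  return inputTokens * price.input_cost_per_token + outputTokens * price.output_cost_per_token;
}

// Example: 120,000 input and 8,400 output tokens at illustrative rates.
const price: ModelPrice = { input_cost_per_token: 2.5e-6, output_cost_per_token: 1e-5 };
console.log(costUsd(120_000, 8_400, price)); // 0.384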