model-usage
steipete/clawdis · updated Apr 8, 2026
Per-model cost summaries from CodexBar CLI logs for Codex or Claude providers.
- Supports two summary modes: "current" (most recent daily model with highest cost) and "all" (full model breakdown across all logged days)
- Accepts input via live CodexBar CLI invocation, JSON file, or stdin; outputs plain text or formatted JSON
- Requires the CodexBar CLI installed locally (macOS only via Homebrew; Linux support pending)
- Falls back to the last entry in modelsUsed when model breakdowns are unavailable
Model usage
Overview
Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.
TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.
Quick start
- Fetch cost JSON via CodexBar CLI or pass a JSON file.
- Use the bundled script to summarize by model.
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
Current model logic
- Uses the most recent daily row with modelBreakdowns.
- Picks the model with the highest cost in that row.
- Falls back to the last entry in modelsUsed when breakdowns are missing.
- Override with --model <name> when you need a specific model.
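The selection rules above can be sketched in Python. This is a minimal illustration, not the bundled script: the field names modelBreakdowns and modelsUsed come from this document, but the shape of each breakdown entry ({"model": ..., "cost": ...}) and the oldest-first ordering of daily rows are assumptions about the CodexBar cost JSON.

```python
def pick_current_model(days, override=None):
    """Pick the 'current' model from CodexBar daily cost rows.

    `days` is assumed to be a list of daily entries, oldest first.
    Each entry may carry "modelBreakdowns" (assumed shape:
    [{"model": ..., "cost": ...}]) and "modelsUsed" (a list of names);
    the per-entry key names are assumptions, not confirmed schema.
    """
    if override:
        # --model <name> short-circuits the heuristic.
        return override
    # Most recent daily row that carries per-model breakdowns;
    # the highest-cost model in that row wins.
    for day in reversed(days):
        breakdowns = day.get("modelBreakdowns") or []
        if breakdowns:
            return max(breakdowns, key=lambda b: b.get("cost", 0.0))["model"]
    # Fallback: last entry in modelsUsed when breakdowns are missing.
    for day in reversed(days):
        used = day.get("modelsUsed") or []
        if used:
            return used[-1]
    return None


days = [
    {"date": "2026-04-07", "modelsUsed": ["gpt-x"]},
    {"date": "2026-04-08",
     "modelBreakdowns": [{"model": "a", "cost": 1.2},
                         {"model": "b", "cost": 3.4}]},
]
print(pick_current_model(days))  # -> b
```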
Inputs
- Default: runs codexbar cost --format json --provider <codex|claude>.
- File or stdin:
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
Output
- Text (default) or JSON (--format json --pretty).
- Values are cost-only per model; tokens are not split by model in CodexBar output.
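The "all" mode described above amounts to summing per-model cost across every logged day. A minimal sketch, under the same assumed entry shape ({"model": ..., "cost": ...} inside modelBreakdowns, which is not confirmed CodexBar schema); as noted, only cost is aggregated, since tokens are not split by model:

```python
from collections import defaultdict


def summarize_all(days):
    """Sum per-model cost across all logged days, highest first.

    Assumes each daily entry may carry "modelBreakdowns" shaped like
    [{"model": ..., "cost": ...}]; days without breakdowns contribute
    nothing to the totals.
    """
    totals = defaultdict(float)
    for day in days:
        for b in day.get("modelBreakdowns") or []:
            totals[b["model"]] += b.get("cost", 0.0)
    # Sort by descending cost for display.
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))
```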
References
- Read references/codexbar-cli.md for CLI flags and cost JSON fields.