Local LLM Ops (Ollama)
Overview
Your localLLM repo provides a full local LLM toolchain on Apple Silicon: setup scripts, a rich CLI chat launcher, benchmarks, and diagnostics. The operational path is: install Ollama, ensure the service is running, initialize the venv, pull models, then launch chat or benchmarks.
Quick Start
./setup_chatbot.sh
./chatllm
If no models are present:
ollama pull mistral
Setup Checklist
- Install Ollama: brew install ollama
- Start the service: brew services start ollama
- Run setup: ./setup_chatbot.sh
- Verify service: curl http://localhost:11434/api/version
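If you prefer scripting the verification step, here is a minimal Python sketch (standard library only) that assumes the default Ollama endpoint and its documented /api/version and /api/tags routes:

```python
import json
import urllib.request

BASE = "http://localhost:11434"  # default Ollama endpoint from the checklist

def get_json(path):
    """GET a JSON payload from the local Ollama API."""
    with urllib.request.urlopen(BASE + path, timeout=5) as resp:
        return json.load(resp)

# Confirm the service is up, then list any pulled models.
print("Ollama version:", get_json("/api/version")["version"])
models = [m["name"] for m in get_json("/api/tags").get("models", [])]
print("Local models:", models or "none — run `ollama pull mistral`")
```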
Chat Launchers
- ./chatllm (primary launcher)
- ./chat or ./chat.py (alternate launchers)
- Aliases: run ./install_aliases.sh, then use llm, llm-code, llm-fast
Task modes:
./chat -t coding -m codellama:70b
./chat -t creative -m llama3.1:70b
./chat -t analytical
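The launchers ultimately talk to the Ollama API. If you want the same behavior without the CLI wrapper, a hedged sketch against the public /api/chat endpoint (the model name and prompt are placeholders; use any model you have pulled) could look like:

```python
import json
import urllib.request

def chat(model, prompt, base="http://localhost:11434"):
    """Send one non-streaming chat turn to the local Ollama API."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return a single JSON object instead of chunks
    }).encode()
    req = urllib.request.Request(
        base + "/api/chat", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["message"]["content"]

print(chat("mistral", "Explain mmap in one paragraph."))
```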
Benchmark Workflow
Benchmarks are scripted in scripts/run_benchmarks.sh:
./scripts/run_benchmarks.sh
This runs bench_ollama.py with:
- benchmarks/prompts.yaml
- benchmarks/models.yaml
- Multiple runs and max token limits
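bench_ollama.py is the source of truth for how the repo measures performance; as a rough illustration of the kind of timing involved, here is a minimal sketch against /api/generate. The eval_count and eval_duration fields are standard Ollama response metadata; the model, prompt, and token cap are placeholder assumptions:

```python
import json
import urllib.request

def bench(model, prompt, runs=3, base="http://localhost:11434"):
    """Time generation and report tokens/sec from Ollama's response metadata."""
    for i in range(runs):
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {"num_predict": 128},  # cap output tokens per run
        }).encode()
        req = urllib.request.Request(base + "/api/generate", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=300) as resp:
            r = json.load(resp)
        # eval_duration is in nanoseconds; eval_count is generated tokens.
        tps = r["eval_count"] / (r["eval_duration"] / 1e9)
        print(f"run {i + 1}: {tps:.1f} tokens/sec")

bench("mistral", "Summarize the benefits of local LLM inference.")
```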
Diagnostics
Run the built-in diagnostic script when setup fails:
./diagnose.sh
Common fixes:
- Re-run ./setup_chatbot.sh
- Ensure ollama is in PATH
- Pull at least one model: ollama pull mistral
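diagnose.sh is the canonical check. A minimal Python equivalent of the same three fixes (binary in PATH, service reachable, at least one model pulled) might look like:

```python
import json
import shutil
import urllib.request

BASE = "http://localhost:11434"

# 1. Is the ollama binary in PATH?
print("ollama binary:", shutil.which("ollama") or "NOT FOUND — brew install ollama")

# 2. Is the service answering?
try:
    with urllib.request.urlopen(BASE + "/api/version", timeout=5) as resp:
        print("service:", json.load(resp)["version"])
except OSError:
    print("service: DOWN — brew services start ollama")

# 3. Is at least one model pulled?
try:
    with urllib.request.urlopen(BASE + "/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
    print("models:", [m["name"] for m in models] or "none — ollama pull mistral")
except OSError:
    print("models: unknown (service unreachable)")
```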
Operational Notes
- Virtualenv lives in .venv
- Chat configs and sessions live under ~/.localllm/
- Ollama API runs at http://localhost:11434
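A quick way to sanity-check this layout (paths as documented above; the session file layout inside ~/.localllm/ is repo-specific, so this sketch just lists whatever is present):

```python
from pathlib import Path

# Paths documented in the notes above.
venv = Path(".venv")
config_dir = Path.home() / ".localllm"

print("venv present:", venv.is_dir())
if config_dir.is_dir():
    for p in sorted(config_dir.iterdir()):
        print("config/session:", p.name)
else:
    print(f"{config_dir} missing — run ./setup_chatbot.sh first")
```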
Related Skills
toolchains/universal/infrastructure/docker