# LLM Models via OpenRouter

inference-sh/skills · updated Apr 8, 2026

Access 100+ language models via the inference.sh CLI.

## Quick Start

Requires the inference.sh CLI (`infsh`); see the install instructions.

```shell
infsh login

# Call Claude Sonnet
infsh app run openrouter/claude-sonnet-45 --input '{"prompt": "Explain quantum computing"}'
```
## Available Models

| Model | App ID | Best For |
|---|---|---|
| Claude Opus 4.5 | openrouter/claude-opus-45 | Complex reasoning, coding |
| Claude Sonnet 4.5 | openrouter/claude-sonnet-45 | Balanced performance |
| Claude Haiku 4.5 | openrouter/claude-haiku-45 | Fast, economical |
| Gemini 3 Pro | openrouter/gemini-3-pro-preview | Google's latest |
| Kimi K2 Thinking | openrouter/kimi-k2-thinking | Multi-step reasoning |
| GLM-4.6 | openrouter/glm-46 | Open-source, coding |
| Intellect 3 | openrouter/intellect-3 | General purpose |
| Any Model | openrouter/any-model | Auto-selects the best option |
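For scripts that choose a model by task, the table above can be encoded as a small lookup. The task names and the task-to-model mapping below are an illustrative assumption based on the "Best For" column, not official routing logic:

```python
# Illustrative task-to-model mapping derived from the "Best For" column above.
MODEL_FOR_TASK = {
    "complex-reasoning": "openrouter/claude-opus-45",
    "balanced": "openrouter/claude-sonnet-45",
    "fast": "openrouter/claude-haiku-45",
    "multi-step": "openrouter/kimi-k2-thinking",
    "open-source-coding": "openrouter/glm-46",
}

def pick_model(task: str) -> str:
    """Return an app ID for a task, falling back to platform auto-selection."""
    return MODEL_FOR_TASK.get(task, "openrouter/any-model")
```

Falling back to `openrouter/any-model` mirrors the table's auto-select row: unknown tasks are delegated to the platform's own choice.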
## Search LLM Apps

```shell
infsh app list --search "openrouter"
infsh app list --search "claude"
```
## Examples

### Claude Opus (Best Quality)

```shell
infsh app run openrouter/claude-opus-45 --input '{
  "prompt": "Write a Python function to detect palindromes with comprehensive tests"
}'
```

### Claude Sonnet (Balanced)

```shell
infsh app run openrouter/claude-sonnet-45 --input '{
  "prompt": "Summarize the key concepts of machine learning"
}'
```

### Claude Haiku (Fast & Cheap)

```shell
infsh app run openrouter/claude-haiku-45 --input '{
  "prompt": "Translate this to French: Hello, how are you?"
}'
```

### Kimi K2 (Thinking Agent)

```shell
infsh app run openrouter/kimi-k2-thinking --input '{
  "prompt": "Plan a step-by-step approach to build a web scraper"
}'
```

### Any Model (Auto-Select)

```shell
# Automatically picks the most cost-effective model
infsh app run openrouter/any-model --input '{
  "prompt": "What is the capital of France?"
}'
```
### With System Prompt

```shell
infsh app sample openrouter/claude-sonnet-45 --save input.json
# Edit input.json:
# {
#   "system": "You are a helpful coding assistant",
#   "prompt": "How do I read a file in Python?"
# }
infsh app run openrouter/claude-sonnet-45 --input input.json
```
## Use Cases
- Coding: Generate, review, debug code
- Writing: Content, summaries, translations
- Analysis: Data interpretation, research
- Agents: Build AI-powered workflows
- Chat: Conversational interfaces
## Related Skills

```shell
# Full platform skill (all 150+ apps)
npx skills add inference-sh/skills@infsh-cli

# Web search (combine with LLMs for RAG)
npx skills add inference-sh/skills@web-search

# Image generation
npx skills add inference-sh/skills@ai-image-generation

# Video generation
npx skills add inference-sh/skills@ai-video-generation
```

Browse all apps: `infsh app list`
## Documentation
- Agents Overview - Building AI agents
- Agent SDK - Programmatic agent control
- Building a Research Agent - LLM + search integration guide