AI/ML · Developer Tools

Consult LLM

by raine

Consult LLM escalates complex reasoning tasks to advanced models with code context, git diffs, and detailed cost tracking.

Escalates complex reasoning tasks to more powerful language models (OpenAI o3, Google Gemini 2.5 Pro, DeepSeek Reasoner) by forwarding markdown prompts with code context and git diffs, returning responses with detailed cost tracking.

github stars

67

  • Access to GPT-5.2 and other frontier models
  • Free quota via Gemini CLI backend
  • Real-time cost estimation

best for

  • Developers stuck on complex debugging issues
  • Code reviews requiring advanced reasoning
  • Escalating tasks when Claude hits limits

capabilities

  • Query advanced AI models with file context
  • Include git diffs for code review
  • Track consultation costs in real-time
  • Resume multi-turn conversations across requests
  • Monitor active consultations via TUI dashboard

what it does

Lets Claude consult more powerful AI models (GPT-5, Gemini Pro, DeepSeek) on complex reasoning tasks, especially when debugging code issues. Includes detailed cost tracking and supports multi-turn conversations.

about

Consult LLM is a community-built MCP server published by raine that provides AI assistants with tools and capabilities via the Model Context Protocol. It escalates complex reasoning tasks to advanced models with code context, git diffs, and detailed cost tracking. It is categorized under AI/ML and Developer Tools.

how to install

You can install Consult LLM in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

Consult LLM is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

consult-llm-mcp

An MCP server that lets Claude Code consult stronger AI models (GPT-5.2, Gemini 3.1 Pro, DeepSeek Reasoner) when Sonnet has you running in circles and you need to bring in the heavy artillery. Supports multi-turn conversations.

> Still getting this error after your fix. Ask gemini
  E5108: Error executing lua: attempt to call method 'child' (a nil value)

⏺ Let me consult Gemini to understand the Neovim treesitter API changes:

⏺ consult-llm:consult_llm (MCP)(prompt: "Neovim plugin broken due to treesitter
                                API changes...", files: ["lua/testonly.lua"],
                                model: "gemini-2.5-pro")
  ⎿ The issue is that iter_matches() can return nil nodes in newer Neovim
    versions. Switch to iter_captures() which never returns nil nodes, and use
    named_child(0) instead of child(0) to get only meaningful syntax nodes.
    … +93 lines (ctrl+r to expand)

⏺ The LLM identified the issue - iter_matches() API changed. Switching to
  iter_captures() approach now.
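
In Lua, the suggested change amounts to roughly the following. This is a hedged sketch, not the plugin's actual patch; `parsed_query`, `root`, `bufnr`, `start_row`, and `end_row` are assumed to be the same variables as in the plugin's original `iter_matches` loop:

```lua
-- Sketch of the suggested fix: iterate captures instead of matches.
-- iter_captures() never yields nil nodes, so no extra nil checks are needed,
-- and named_child(0) skips anonymous punctuation nodes that child(0) can hit.
for id, node in parsed_query:iter_captures(root, bufnr, start_row, end_row) do
  local name = parsed_query.captures[id]
  if name == "call" then
    local child = node:named_child(0)
    if child then
      local text = vim.treesitter.get_node_text(child, bufnr)
      -- ... rest of logic
    end
  end
end
```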

Quick start · Configuration · Monitor TUI · Changelog

Features

  • Query powerful AI models (GPT-5.2, Gemini 3.1 Pro, DeepSeek Reasoner) with relevant files as context
  • Direct queries with optional file context
  • Include git changes for code review and analysis
  • Comprehensive logging with cost estimation
  • Monitor TUI: Real-time dashboard for watching active consultations
  • Gemini CLI backend: Use the gemini CLI to take advantage of free quota
  • Codex CLI backend: Use the codex CLI for OpenAI models
  • Cursor CLI backend: Use the cursor-agent CLI to route GPT and Gemini models through a single tool
  • Multi-turn conversations: Resume CLI sessions across requests with thread_id
  • Web mode: Copy formatted prompts to clipboard for browser-based LLM services
  • Less is more: a single MCP tool, so it doesn't clutter the context
<img src="meta/monitor-screenshot.webp" alt="consult-llm-monitor screenshot" width="600">

Quick start

  1. Add to Claude Code (choose one):

    With npx (no install required):

    claude mcp add consult-llm \
      -e OPENAI_API_KEY=your_key \
      -e GEMINI_API_KEY=your_key \
      -- npx -y consult-llm-mcp
    

    With binary (no Node.js required):

    curl -fsSL https://raw.githubusercontent.com/raine/consult-llm-mcp/main/scripts/install.sh | bash
    claude mcp add consult-llm \
      -e OPENAI_API_KEY=your_key \
      -e GEMINI_API_KEY=your_key \
      -- consult-llm-mcp
    

    For global availability across projects, add --scope user.
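
    For example, the npx variant scoped to the user (a sketch; substitute your real keys):

    ```shell
    claude mcp add consult-llm --scope user \
      -e OPENAI_API_KEY=your_key \
      -e GEMINI_API_KEY=your_key \
      -- npx -y consult-llm-mcp
    ```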

    <details> <summary>Using multiple API keys or CLI backends</summary>
    claude mcp add consult-llm \
      -e OPENAI_API_KEY=your_openai_key \
      -e GEMINI_API_KEY=your_gemini_key \
      -e DEEPSEEK_API_KEY=your_deepseek_key \
      -e CONSULT_LLM_GEMINI_BACKEND=gemini-cli \
      -- npx -y consult-llm-mcp
    
    </details>
  2. Verify connection with /mcp:

    ❯ 1. consult-llm            ✔ connected
    
  3. Ask a question:

    "Consult Gemini about how to fix the race condition in server.ts"

Example workflows

Some real-world examples. Click to expand.

<details> <summary>Explain the problem, and tell CC to consult a smarter LLM</summary>
  <Some long explanation of the issue here>

  Consult LLM for assistance, use gemini

⏺ I see the issue! The problem is that the frontend is trying to use the hint
  for "isn't" but the server generated a hint for "isn't it" (the full phrase).
  This is a mismatch between how the frontend segments text and how the server
  generates hints.

  Let me consult the LLM to get assistance on this segmentation mismatch issue.

⏺ Write(hint-segmentation-issue.md)
  ⎿  Wrote 53 lines to hint-segmentation-issue.md
     # Hint Segmentation Mismatch Issue
     ## Problem Description
     There's a mismatch between how the frontend segments text for clickable
     words and how the backend generates hints. This causes hint unlocking to
     fail when the segments don't match exactly.
     ## Example Case
     **Original Text**: "Hello! Nice weather, isn't it?"
     … +43 lines (ctrl+r to expand)

⏺ consult-llm:consult_llm (MCP)(model: "gemini-2.5-pro", files: ["/project/hint
                                -segmentation-issue.md","/project/web/src/compo
                                nents/HintsText.tsx","/project/web/src/utils/hi
                                nts.ts","/project/server/src/routes/public/hint
                                s.ts","/project/server/src/llm/hints.ts"])
  ⎿ An excellent analysis of the problem. The root cause has been correctly
    identified: a race condition within the `handleConfirmGenerateHints`
    function in `HintsText.tsx`. The component state updates in the background
    after `await generateHints()`, but the function continues executing with a
    stale `activeSegment` variable, causing it to request a hint for a smaller
    text segment ( e.g., "isn't") instead of the full, newly-available hint
    phrase (e.g., "isn't it").
    … +607 lines (ctrl+r to expand)

⏺ Perfect! The LLM identified the core issue - there's a race condition in
  handleConfirmGenerateHints where the function uses stale state after async
  operations. The solution is to re-target the hint key after generation to
  find the correct matching hint.

  Let me implement this solution:

⏺ Update Todos
  ⎿  ☐ Fix hint segmentation mismatch by implementing smarter hint key
       targeting in handleConfirmGenerateHints
</details> <details> <summary>Using web mode to copy prompts to clipboard for browser-based LLMs</summary>
> I need help implementing a -b/--background flag for my Rust CLI. Use consult LLM with web mode

⏺ I'll use web mode to copy the formatted prompt to your clipboard so you can
  paste it into any browser-based LLM service.

⏺ consult-llm:consult_llm (MCP)(prompt: "What is the best approach to implement a
                                 -b/--background flag for the "workmux add" command that
                                 creates a tmux window in the background without switching to
                                 it?

The current implementation automatically switches to
                                 the newly created window via `tmux::select_window()` in the
                                 `setup_environment()` function. I need to make this behavior
                                 conditional based on a flag passed from the
                                 CLI.

Consider:
1. Where should the background flag be
                                 threaded through the call chain?
2. Should I modify the
                                 existing SetupOptions struct or add a parameter?
3. What's
                                 the most maintainable approach given the codebase
                                 structure?
4. Are there any edge cases to consider (e.g.,
                                 should select_pane also be conditional)?", files:
                                 ["src/cli.rs","src/workflow.rs","src/tmux.rs"], web_mode:
                                 true)
  ⎿  ✓ Prompt copied to clipboard!

     Please paste it into your browser-based LLM service and share the response
     here before I proceed with any implementation.

     Note: File paths were included:
       - src/cli.rs
       - src/workflow.rs
       - src/tmux.rs

> [User pastes the response from the external LLM]

⏺ Thank you! Based on the recommendations, I'll now implement the solution...

This is useful when:

  • You want to use a free browser-based LLM service instead of API credits
  • You prefer a specific LLM's web interface
  • You want to review the full prompt before submitting it
</details> <details> <summary>Debugging Neovim plugin treesitter API breakage</summary>
⏺ Fixed! Added nil checks for both node and child and updated the iter_matches call to handle the newer API properly.

> Still not working:

  E5108: Error executing lua /Users/raine/code/testonly.nvim/lua/testonly.lua:92: attempt to call method 'child' (a nil value)
  stack traceback:
          /Users/raine/code/testonly.nvim/lua/testonly.lua:92: in function 'reset_all_exclusive'
          /Users/raine/code/testonly.nvim/lua/testonly.lua:110: in function 'reset_all_exclusive'
          [string ":lua"]:1: in main chunk

  Ask gemini

⏺ Let me consult with a more powerful AI to understand the current Neovim treesitter API changes:

⏺ consult-llm:consult_llm (MCP)(prompt: "I'm debugging a Neovim plugin that's broken due to treesitter API changes. The error is "attempt to call method 'child' (a nil value)" on line 92. The
                               code is trying to access child nodes from treesitter matches but the API seems to have changed in newer Neovim versions.

Here's the problematic
                               code:
```lua
for _, match, _ in parsed_query:iter_matches(root, bufnr, start_row, end_row) do
    for id, node in pairs(match) do
        local name =
                               parsed_query.captures[id]
        if name == "call" and node then
            local child = node:child(0)  -- ERROR HERE
            if child then

                                    local text = vim.treesitter.get_node_text(child, bufnr)
                -- ... rest of logic
            end
        end
    end
end
```

What's the
                               correct way to iterate over treesitter query matches and access child nodes in current Neovim versions? What API changes should I be aware of?", files:
                               ["/Users/raine/code/testonly.nvim/lua/testonly.lua"])
  ⎿  Problem
     -------
     …
</details>

---

FAQ

What is the Consult LLM MCP server?
Consult LLM is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Consult LLM?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    Consult LLM is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated Consult LLM against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: Consult LLM is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    Consult LLM reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend Consult LLM for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: Consult LLM surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    Consult LLM has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, Consult LLM benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired Consult LLM into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    Consult LLM is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.