ai-ml · developer-tools

Arize Phoenix

by arize-ai

Arize Phoenix — unified interface for managing prompts, exploring datasets, and running LLM experiments across providers


GitHub stars

8.8K


Multi-provider LLM support · Unified experiment management

best for

  • ML engineers comparing model performance
  • AI developers managing prompt workflows
  • Data scientists evaluating LLM outputs
  • Teams running A/B tests on AI models

capabilities

  • Manage prompts and prompt templates
  • Explore machine learning datasets
  • Run experiments across multiple LLM providers
  • Track model performance and metrics
  • Access Phoenix's evaluation tools
  • Query experimental results

what it does

Connects to Arize Phoenix for managing AI prompts, exploring ML datasets, and running LLM experiments. Provides a unified interface to work with different language model providers.

about

Arize Phoenix is an official MCP server published by arize-ai that provides AI assistants with tools and capabilities via the Model Context Protocol. It offers a unified interface for managing prompts, exploring datasets, and running LLM experiments across providers. It is categorized under ai-ml and developer-tools.

how to install

You can install Arize Phoenix in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.
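If your client lacks one-click setup, you can register the server by hand. A minimal sketch of a Claude Desktop entry in `claude_desktop_config.json` — the npm package name, flags, and environment variable names here are assumptions to illustrate the shape of a stdio server entry; verify them against the official Phoenix docs:

```json
{
  "mcpServers": {
    "phoenix": {
      "command": "npx",
      "args": ["-y", "@arizeai/phoenix-mcp@latest"],
      "env": {
        "PHOENIX_BASE_URL": "http://localhost:6006",
        "PHOENIX_API_KEY": "your-phoenix-api-key"
      }
    }
  }
}
```

The client launches the command and exchanges MCP messages with it over stdin/stdout; no network port is opened by the server itself.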

license

NOASSERTION

No recognized license is declared for this listing (SPDX reports NOASSERTION); review the source repository's terms before adopting it.

readme


Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting. It provides:

- [**_Tracing_**](https://arize.com/docs/phoenix/tracing/llm-traces) - Trace your LLM application's runtime using OpenTelemetry-based instrumentation.
- [**_Evaluation_**](https://arize.com/docs/phoenix/evaluation/llm-evals) - Leverage LLMs to benchmark your application's performance using response and retrieval evals.
- [**_Datasets_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets) - Create versioned datasets of examples for experimentation, evaluation, and fine-tuning.
- [**_Experiments_**](https://arize.com/docs/phoenix/datasets-and-experiments/overview-datasets#experiments) - Track and evaluate changes to prompts, LLMs, and retrieval.
- [**_Playground_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts) - Optimize prompts, compare models, adjust parameters, and replay traced LLM calls.
- [**_Prompt Management_**](https://arize.com/docs/phoenix/prompt-engineering/overview-prompts/prompt-management) - Manage and test prompt changes systematically using version control, tagging, and experimentation.
Phoenix is vendor and language agnostic with out-of-the-box support for popular frameworks ([OpenAI Agents SDK](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai-agents-sdk), [Claude Agent SDK](https://arize.com/docs/phoenix/integrations/python/claude-agent-sdk), [LangGraph](https://arize.com/docs/phoenix/tracing/integrations-tracing/langchain), [Vercel AI SDK](https://arize.com/docs/phoenix/tracing/integrations-tracing/vercel-ai-sdk), [Mastra](https://arize.com/docs/phoenix/integrations/typescript/mastra), [CrewAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/crewai), [LlamaIndex](https://arize.com/docs/phoenix/tracing/integrations-tracing/llamaindex), [DSPy](https://arize.com/docs/phoenix/tracing/integrations-tracing/dspy)) and LLM providers ([OpenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/openai), [Anthropic](https://arize.com/docs/phoenix/tracing/integrations-tracing/anthropic), [Google GenAI](https://arize.com/docs/phoenix/tracing/integrations-tracing/google-genai), [Google ADK](https://arize.com/docs/phoenix/integrations/llm-providers/google-gen-ai/google-adk-tracing), [AWS Bedrock](https://arize.com/docs/phoenix/tracing/integrations-tracing/bedrock), [OpenRouter](https://arize.com/docs/phoenix/integrations/python/openrouter), [LiteLLM](https://arize.com/docs/phoenix/tracing/integrations-tracing/litellm), and more). For details on auto-instrumentation, check out the [OpenInference](https://github.com/Arize-ai/openinference) project.

Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.

## Installation

Install Phoenix via `pip` or `conda`:

```shell
pip install arize-phoenix
```

Phoenix container images are available via [Docker Hub](https://hub.docker.com/r/arizephoenix/phoenix) and can be deployed using Docker or Kubernetes. Arize AI also provides cloud instances at [app.phoenix.arize.com](https://app.phoenix.arize.com/).
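As a sketch of the containerized route, using the Docker Hub image named above — the port mappings reflect Phoenix's documented defaults (UI/API and OTLP gRPC ingestion), but verify them against the deployment docs for your version:

```shell
# Run the Phoenix container image from Docker Hub in the background.
# 6006 serves the Phoenix UI and HTTP API; 4317 accepts OTLP gRPC traces.
docker run -d \
  -p 6006:6006 \
  -p 4317:4317 \
  arizephoenix/phoenix:latest
```

Once the container is up, point your instrumented application's OTLP exporter at port 4317 and browse the UI at `http://localhost:6006`.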
## Packages

The `arize-phoenix` package includes the entire Phoenix platform. However, if you have deployed the Phoenix platform, there are lightweight Python sub-packages and TypeScript packages that can be used in conjunction with the platform.

### Python Subpackages

| Package | Version & Docs | Description |
| --- | --- | --- |
| [arize-phoenix-otel](https://github.com/Arize-ai/phoenix/tree/main/packages/phoenix-otel) | [![PyPI Version](https://img.shields.io/pypi/v/arize-phoenix-otel)](https://pypi.org/project/arize-phoenix-otel/) [![Docs](https://img.shields.io/badge/docs-blue?logo=readthedocs&logoColor=white)](https://arize-phoenix.readthedocs.io/projects/otel/en/latest/index.html) | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aw |

FAQ

What is the Arize Phoenix MCP server?
Arize Phoenix is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
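Concretely, that "standard interface" is JSON-RPC 2.0: a host invokes a server tool by writing a `tools/call` request to the server's stdin. A minimal sketch of such a message — the tool name and arguments here are hypothetical, not Phoenix's actual tool schema:

```python
import json

# An MCP host serializes a JSON-RPC 2.0 request to call a server tool.
# "list-prompts" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list-prompts",
        "arguments": {"limit": 10},
    },
}

# Over the stdio transport, each message is sent as one line of JSON.
line = json.dumps(request)
print(line)
```

The server replies with a matching-`id` JSON-RPC response containing the tool's result, which the host feeds back to the model.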
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Arize Phoenix?
This profile displays 47 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.6 out of 5—verify behavior in your own environment before production use.

Discussion

Product Hunt–style comments (not star reviews)
  • No comments yet — start the thread.
MCP server reviews

Ratings

4.6 average · 47 reviews
  • Omar Tandon· Dec 24, 2024

    I recommend Arize Phoenix for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Anaya Ndlovu· Dec 16, 2024

    Arize Phoenix is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Dhruvi Jain· Dec 8, 2024

    Strong directory entry: Arize Phoenix surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Daniel Singh· Dec 8, 2024

    Arize Phoenix is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.

  • Mateo Shah· Nov 27, 2024

    Useful MCP listing: Arize Phoenix is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Diego Rahman· Nov 15, 2024

    We evaluated Arize Phoenix against two servers with overlapping tools; this profile had the clearer scope statement.
