otherai-ml

OpenAI TTS

by nakamurau1

Transform text to speech with OpenAI TTS, a free AI voice generator for seamless, customizable, high-quality speech.

Enables high-quality voice generation from text using OpenAI's TTS API with customizable voices, formats, and speech parameters for seamless audio playback during conversations.

GitHub stars: 1

Works with Claude Desktop · No installation required with npx · Multiple voice characters available

best for

  • Adding voice narration to Claude Desktop conversations
  • Developers building voice-enabled applications
  • Content creators needing high-quality text-to-speech
  • Command-line users wanting quick audio generation

capabilities

  • Generate speech from text using OpenAI's TTS models
  • Choose from multiple voice characters (alloy, nova, echo, etc.)
  • Output audio in various formats (MP3, WAV, OPUS, AAC)
  • Customize speech speed and voice parameters
  • Run as a CLI tool for direct text-to-speech conversion
  • Integrate with Claude Desktop via the MCP protocol

what it does

Converts text to speech using OpenAI's TTS API with multiple voice options and audio formats. Can be used as an MCP server with Claude Desktop or as a standalone CLI tool.

about

OpenAI TTS is a community-built MCP server published by nakamurau1 that provides AI assistants with tools and capabilities via the Model Context Protocol. It transforms text to speech with OpenAI's TTS API, offering seamless, customizable, high-quality voice generation. It is categorized under Other and AI/ML.

how to install

You can install OpenAI TTS in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

MIT

OpenAI TTS is released under the MIT license. This is a permissive open-source license, meaning you can freely use, modify, and distribute the software.

readme

tts-mcp

A Model Context Protocol (MCP) server and command-line tool for high-quality text-to-speech generation using the OpenAI TTS API.

Main Features

  • MCP Server: Integrate text-to-speech capabilities with Claude Desktop and other MCP-compatible clients
  • Voice Options: Support for multiple voice characters (alloy, nova, echo, etc.)
  • High-Quality Audio: Support for various output formats (MP3, WAV, OPUS, AAC)
  • Customizable: Configure speech speed, voice character, and additional instructions
  • CLI Tool: Also available as a command-line utility for direct text-to-speech conversion

Installation

Method 1: Install from Repository

# Clone the repository
git clone https://github.com/nakamurau1/tts-mcp.git
cd tts-mcp

# Install dependencies
npm install

# Optional: Install globally
npm install -g .

Method 2: Run Directly with npx (No Installation Required)

# Start the MCP server directly
npx tts-mcp tts-mcp-server --voice nova --model tts-1-hd

# Use the CLI tool directly
npx tts-mcp -t "Hello, world" -o hello.mp3

MCP Server Usage

The MCP server allows you to integrate text-to-speech functionality with Model Context Protocol (MCP) compatible clients like Claude Desktop.

Starting the MCP Server

# Start with default settings
npm run server

# Start with custom settings
npm run server -- --voice nova --model tts-1-hd

# Or directly with API key
node bin/tts-mcp-server.js --voice echo --api-key your-openai-api-key

MCP Server Options

Options:
  -V, --version       Display version information
  -m, --model <model> TTS model to use (default: "gpt-4o-mini-tts")
  -v, --voice <voice> Voice character (default: "alloy")
  -f, --format <format> Audio format (default: "mp3")
  --api-key <key>     OpenAI API key (can also be set via environment variable)
  -h, --help          Display help information

Integrating with MCP Clients

The MCP server can be used with Claude Desktop and other MCP-compatible clients. For Claude Desktop integration:

  1. Open the Claude Desktop configuration file (typically at ~/Library/Application Support/Claude/claude_desktop_config.json)
  2. Add the following configuration, including your OpenAI API key:
{
  "mcpServers": {
    "tts-mcp": {
      "command": "node",
      "args": ["full/path/to/bin/tts-mcp-server.js", "--voice", "nova", "--api-key", "your-openai-api-key"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}

Alternatively, you can use npx for easier setup:

{
  "mcpServers": {
    "tts-mcp": {
      "command": "npx",
      "args": ["-p", "tts-mcp", "tts-mcp-server", "--voice", "nova", "--model", "gpt-4o-mini-tts"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}

You can provide the API key in two ways:

  1. Direct method (recommended for testing): Include it in the args array using the --api-key parameter
  2. Environment variable method (more secure): Set it in the env object as shown above

Security Note: Make sure to secure your configuration file when including API keys.

  3. Restart Claude Desktop
  4. When you ask Claude to "read this text aloud" or make a similar request, the text will be converted to speech

Available MCP Tools

  • text-to-speech: Tool for converting text to speech and playing it
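
Under the hood, an MCP client invokes this tool with a standard JSON-RPC `tools/call` request. The sketch below is illustrative only; the exact argument name (`text`) is an assumption not confirmed by this README, so consult the server's published tool schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "text-to-speech",
    "arguments": { "text": "Hello, world" }
  }
}
```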

CLI Tool Usage

You can also use tts-mcp as a standalone command-line tool:

# Convert text directly
tts-mcp -t "Hello, world" -o hello.mp3

# Convert from a text file
tts-mcp -f speech.txt -o speech.mp3

# Specify custom voice
tts-mcp -t "Welcome to the future" -o welcome.mp3 -v nova

CLI Tool Options

Options:
  -V, --version           Display version information
  -t, --text <text>       Text to convert
  -f, --file <path>       Path to input text file
  -o, --output <path>     Path to output audio file (required)
  -m, --model <model>     Model to use (default: "gpt-4o-mini-tts")
  -v, --voice <voice>     Voice character (default: "alloy")
  -s, --speed <number>    Speech speed (0.25-4.0) (default: 1)
  --format <format>       Output format (default: "mp3")
  -i, --instructions <text> Additional instructions for speech generation
  --api-key <key>         OpenAI API key (can also be set via environment variable)
  -h, --help              Display help information

Supported Voices

The following voice characters are supported:

  • alloy (default)
  • ash
  • coral
  • echo
  • fable
  • onyx
  • nova
  • sage
  • shimmer

Supported Models

  • tts-1
  • tts-1-hd
  • gpt-4o-mini-tts (default)

Output Formats

The following output formats are supported:

  • mp3 (default)
  • opus
  • aac
  • flac
  • wav
  • pcm
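
Taken together, the supported voices, models, formats, and the 0.25-4.0 speed range can be checked client-side before a request ever reaches the API. The sketch below is illustrative and not part of tts-mcp itself; it simply encodes the lists and ranges stated in this README:

```javascript
// Pre-flight validation of tts-mcp request options (illustrative sketch,
// not part of the package). Values mirror the lists in this README.
const VOICES = ["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"];
const MODELS = ["tts-1", "tts-1-hd", "gpt-4o-mini-tts"];
const FORMATS = ["mp3", "opus", "aac", "flac", "wav", "pcm"];

function validateTtsOptions({ voice = "alloy", model = "gpt-4o-mini-tts", format = "mp3", speed = 1 } = {}) {
  if (!VOICES.includes(voice)) throw new Error(`unsupported voice: ${voice}`);
  if (!MODELS.includes(model)) throw new Error(`unsupported model: ${model}`);
  if (!FORMATS.includes(format)) throw new Error(`unsupported format: ${format}`);
  if (typeof speed !== "number" || speed < 0.25 || speed > 4.0) {
    throw new Error(`speed out of range (0.25-4.0): ${speed}`);
  }
  return { voice, model, format, speed };
}

// Example: defaults filled in, custom voice and speed accepted
console.log(validateTtsOptions({ voice: "nova", speed: 1.25 }));
```

Rejecting bad parameters locally avoids spending an API call on a request the TTS endpoint would refuse anyway.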

Environment Variables

You can also configure the tool using system environment variables:

OPENAI_API_KEY=your-api-key-here

License

MIT

FAQ

What is the OpenAI TTS MCP server?
OpenAI TTS is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for OpenAI TTS?
This profile displays 27 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 27 reviews
  • Dev Rahman · Dec 24, 2024

    OpenAI TTS is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.

  • Ganesh Mohane · Dec 8, 2024

    Useful MCP listing: OpenAI TTS is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Mia Sethi · Nov 15, 2024

    We wired OpenAI TTS into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Kofi Garcia · Nov 3, 2024

    According to our notes, OpenAI TTS benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Rahul Santra · Sep 1, 2024

    OpenAI TTS reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Soo Martin · Sep 1, 2024

    Strong directory entry: OpenAI TTS surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Pratham Ware · Aug 20, 2024

    I recommend OpenAI TTS for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Sophia Jackson · Aug 20, 2024

    OpenAI TTS has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.
