
Key-Value Extractor

by kunihiros

Key-Value Extractor quickly pulls structured key-value pairs from messy text like emails or receipts using a multi-step detection pipeline.

Extracts structured key-value pairs from unstructured text using a multi-step pipeline that performs language detection, entity recognition, and type validation for applications requiring information extraction from messy sources like emails or receipts.

github stars

1

  • Discovers keys automatically without pre-definition
  • Multi-language NER preprocessing
  • Multi-step validation pipeline

best for

  • Processing receipts and invoices for accounting
  • Extracting data from customer emails or support tickets
  • Parsing documents with unknown or variable structure
  • Multi-language document processing workflows

capabilities

  • Extract key-value pairs from unstructured text
  • Detect entities in Japanese, English, and Chinese
  • Validate and normalize extracted data types
  • Export results in JSON, YAML, or TOML formats
  • Process noisy or arbitrary text inputs
  • Perform automatic language detection

what it does

Extracts key-value pairs from messy, unstructured text like emails or receipts using AI. Automatically discovers relevant data without needing to specify what to look for in advance.

about

Key-Value Extractor is a community-built MCP server published by kunihiros that provides AI assistants with tools and capabilities via the Model Context Protocol. It quickly pulls structured key-value pairs from messy text like emails or receipts using a multi-step detection pipeline. It is categorized under AI/ML and Analytics/Data.

how to install

You can install Key-Value Extractor in your AI client of choice. Use the install panel on this page to get one-click setup for Cursor, Claude Desktop, VS Code, and other MCP-compatible clients. This server runs locally on your machine via the stdio transport.

license

GPL-3.0

Key-Value Extractor is released under the GPL-3.0 license.

readme

Flexible Key-Value Extracting MCP Server


Version: 0.3.2

This MCP server extracts key-value pairs from arbitrary, noisy, or unstructured text using an LLM (GPT-4.1-mini) and pydantic-ai. It ensures type safety and supports multiple output formats (JSON, YAML, and TOML). The server is robust to any input and always attempts to structure the data as much as possible; however, perfect extraction is not guaranteed.


🤔💡 Why Use This MCP Server?

While many Large Language Model (LLM) services offer structured output capabilities, this MCP server provides distinct advantages for key-value extraction, especially from challenging real-world text:

  • 🔑🔍 Automatic Key Discovery: A core strength is its ability to autonomously identify and extract relevant key-value pairs from unstructured text without requiring pre-defined keys. While typical LLM structured outputs need you to specify the keys you're looking for, this server discovers them, making it highly effective for diverse and unpredictable data where the structure is not known beforehand.
  • 💪🧱 Superior Robustness for Complex Inputs: It excels with arbitrary, noisy, or unstructured text where standard LLM structured outputs might falter. The multi-step pipeline is specifically designed to sift through and make sense of imperfect data.
  • 🌐🗣️ Advanced Multi-Lingual Preprocessing: Before LLM processing, it leverages spaCy for Named Entity Recognition (NER) in Japanese, English, and Chinese (Simplified/Traditional), significantly enhancing extraction accuracy for these languages by providing context-rich candidate phrases.
  • 🔄✍️ Iterative Refinement and Typing: Unlike a single-pass extraction, this server employs a sophisticated pipeline including LLM-based type annotation, LLM-based type evaluation, and rule-based/LLM-fallback normalization. This ensures more accurate and contextually appropriate data types.
  • ✅🛡️ Guaranteed Type Safety and Schema Adherence: Final structuring with Pydantic ensures that the output is not only structured but also type-safe and validated against a defined schema, providing reliable data for downstream applications.
  • 📊⚙️ Consistent and Predictable Output: The server is designed to always return a well-formed response, even if extraction is partial or encounters issues, which is critical for building robust automated systems.

Release Notes

v0.3.2

  • Fix: Resolved an error caused by FastMCP.

v0.3.1

  • Update: Improve type evaluation prompt for robust correction.
  • Update: Documented this MCP server's strengths in README.md.

v0.2.0

  • Fix: Lang code for zh-cn / zh-tw.

v0.1.0

  • Initial release

Tools

  • /extract_json : Extracts type-safe key-value pairs in JSON format from input text.
  • /extract_yaml : Extracts type-safe key-value pairs in YAML format from input text.
  • /extract_toml : Extracts type-safe key-value pairs in TOML format from input text.
    • Note: Due to TOML specifications, arrays of objects (dicts) or deeply nested structures cannot be directly represented. See "Note on TOML Output Limitations" below for details.

Note:

  • Supported languages: Japanese, English, and Chinese (Simplified: zh-cn / Traditional: zh-tw).
  • Extraction relies on pydantic-ai and LLMs. Perfect extraction is not guaranteed.
  • Longer input sentences will take more time to process. Please be patient.
  • On first launch, the server will download spaCy models, so the process will take longer initially.

Estimated Processing Time Sample

Input Tokens | Input Characters (approx.) | Measured Processing Time (sec) | Model Configuration
200          | ~400                       | ~15                            | gpt-4.1-mini

Actual processing time may vary significantly depending on API response, network conditions, and model load. Even short texts may take 15 seconds or more.

Features

  • Flexible extraction: Handles any input, including noisy or broken data.
  • JP / EN / ZH-CN / ZH-TW full support: Preprocessing with spaCy NER by automatic language detection (Japanese, English, Chinese [Simplified: zh-cn / Traditional: zh-tw] supported; others are rejected with error).
  • Type-safe output: Uses Pydantic for output validation.
  • Multiple formats: Returns results as JSON, YAML, or TOML.
  • Robust error handling: Always returns a well-formed response, even on failure.
  • High accuracy: Uses GPT-4.1-mini for both extraction/annotation and type evaluation, with Pydantic for final structuring.

Tested Scenarios

The server has been tested with various inputs, including:

  • Simple key-value pairs
  • Noisy or unstructured text with important information buried within
  • Different data formats (JSON, YAML, TOML) for output

Processing Flow

Below is a flowchart representing the processing flow of the key-value extraction pipeline as implemented in server.py:

flowchart TD
    A[Input Text] --> B[Step 0: Preprocessing with spaCy Lang Detect then NER]
    B --> C[Step 1: Key-Value Extraction - LLM]
    C --> D[Step 2: Type Annotation - LLM]
    D --> E[Step 3: Type Evaluation - LLM]
    E --> F[Step 4: Type Normalization - Static Rules + LLM]
    F --> G[Step 5: Final Structuring with Pydantic]
    G --> H[Output in JSON/YAML/TOML]

Preprocessing with spaCy (Multilingual NER)

This server uses spaCy with automatic language detection to extract named entities from the input text before passing it to the LLM. Supported languages are Japanese (ja_core_news_md), English (en_core_web_sm), and Chinese (Simplified/Traditional, zh_core_web_sm).

  • The language of the input text is automatically detected using langdetect.

  • If the detected language is not Japanese, English, or Chinese, the server returns an error: Unsupported lang detected.

  • The appropriate spaCy model is automatically downloaded and loaded as needed. No manual installation is required.

  • The extracted phrase list is included in the LLM prompt as follows:

    [Preprocessing Candidate Phrases (spaCy NER)] The following is a list of phrases automatically extracted from the input text using spaCy's detected language model. These phrases represent detected entities such as names, dates, organizations, locations, numbers, etc. This list is for reference only and may contain irrelevant or incorrect items. The LLM uses its own judgment and considers the entire input text to flexibly infer the most appropriate key-value pairs.
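The language-routing logic described above can be sketched in plain Python. The model names come from this README; the `pick_model` helper and its error handling are illustrative, not the server's actual code:

```python
# Sketch of Step 0's language routing, assuming langdetect-style codes.
# SPACY_MODELS mirrors the models named in the README; pick_model is a
# hypothetical helper, not part of the real server.
SPACY_MODELS = {
    "ja": "ja_core_news_md",   # Japanese
    "en": "en_core_web_sm",    # English
    "zh-cn": "zh_core_web_sm", # Chinese (Simplified)
    "zh-tw": "zh_core_web_sm", # Chinese (Traditional)
}

def pick_model(lang_code: str) -> str:
    """Map a detected language code to a spaCy model, or fail loudly."""
    try:
        return SPACY_MODELS[lang_code.lower()]
    except KeyError:
        raise ValueError("Unsupported lang detected") from None

# In the real pipeline the code would come from langdetect.detect(text),
# and the chosen model would be loaded with spacy.load(), downloading
# it on demand on first launch.
```

Unsupported languages fail fast with the same error message the server reports, rather than silently falling back to a wrong model.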

Step Details

This project's key-value extraction pipeline consists of multiple steps. Each step's details are as follows:

Step 0: Preprocessing with spaCy (Language Detection → Named Entity Recognition)

  • Purpose: Automatically detect the language of the input text and use the appropriate spaCy model (e.g., ja_core_news_md, en_core_web_sm, zh_core_web_sm) to extract named entities.
  • Output: The extracted phrase list, which is included in the LLM prompt as a hint to improve key-value pair extraction accuracy.

Step 1: Key-Value Extraction (LLM)

  • Purpose: Use GPT-4.1-mini to extract key-value pairs from the input text and the extracted phrase list.
  • Details:
    • The prompt includes instructions to return list-formatted values when the same key appears multiple times.
    • Few-shot examples are designed to include list-formatted outputs.
  • Output: Example: key: person, value: ["Tanaka", "Sato"]
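The repeated-key behavior can be illustrated with a small sketch. In the server this merging is performed by the LLM per the prompt instructions; the `merge_pairs` helper below is purely hypothetical:

```python
def merge_pairs(pairs):
    """Merge repeated keys into list values, mirroring the prompt's rule:
    return list-formatted values when the same key appears multiple times."""
    merged = {}
    for key, value in pairs:
        if key not in merged:
            merged[key] = value          # first occurrence: keep scalar
        elif isinstance(merged[key], list):
            merged[key].append(value)    # third occurrence onward
        else:
            merged[key] = [merged[key], value]  # second occurrence: promote to list
    return merged

# Two 'person' entries collapse into one list-valued key:
# merge_pairs([("person", "Tanaka"), ("person", "Sato"), ("date", "2024-01-01")])
# -> {"person": ["Tanaka", "Sato"], "date": "2024-01-01"}
```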

Step 2: Type Annotation (LLM)

  • Purpose: Use GPT-4.1-mini to infer the data type (int, str, bool, list, etc.) of each key-value pair extracted in Step 1.
  • Details:
    • The type annotation prompt includes instructions for list and multiple value support.
  • Output: Example: key: person, value: ["Tanaka", "Sato"] -> list[str]

Step 3: Type Evaluation (LLM)

  • Purpose: Use GPT-4.1-mini to evaluate and correct the type annotations from Step 2.
  • Details:
    • For each key-value pair, GPT-4.1-mini re-evaluates the type annotation's validity and context.
    • If type errors or ambiguities are detected, GPT-4.1-mini automatically corrects or supplements the type.
    • Example: Correcting a value extracted as a number but should be a string, or determining whether a value is a list or a single value.
  • Output: The type-evaluated key-value pair list.

Step 4: Type Normalization (Static Rules + LLM Fallback)

  • Purpose: Convert the type-evaluated data into Python's standard types (int, float, bool, str, list, None, etc.).
  • Details:
    • Apply static normalization rules (regular expressions or type conversion functions) to convert values into Python's standard types.
    • Example: Converting comma-separated values to lists, "true"/"false" to bool, or date expressions to standard formats.
    • If static rules cannot convert a value, use LLM-based type conversion fallback.
    • Unconvertible values are safely handled as None or str.
  • Output: The Python-type-normalized key-value pair list.
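A minimal sketch of the kind of static rules Step 4 applies (booleans, integers, floats, comma-separated lists, with `str` as the safe fallback). The `normalize_value` function is an assumption for illustration; the server's actual rules, and its LLM fallback, are not shown here:

```python
import re

def normalize_value(raw: str):
    """Apply static normalization rules in the spirit of Step 4:
    booleans, ints, floats, comma-separated lists; anything else
    falls back to str (the real server additionally has an
    LLM-based conversion fallback)."""
    s = raw.strip()
    if s.lower() in ("true", "false"):
        return s.lower() == "true"
    if re.fullmatch(r"-?\d+", s):
        return int(s)
    if re.fullmatch(r"-?\d+\.\d+", s):
        return float(s)
    if "," in s:
        return [normalize_value(part) for part in s.split(",")]
    return s
```

Ordering matters: boolean and numeric rules run before the list rule so that `"1, 2, 3"` becomes `[1, 2, 3]` with numeric elements rather than a list of strings.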

Step 5: Final Structuring with Pydantic

  • Purpose: Validate and structure the type-normalized data using Pydantic models (KVOut/KVPayload).
  • Details:
    • Map each key-value pair to Pydantic models, ensuring type safety and data integrity.
    • Validate single values, lists, null, and composite types according to the schema.
    • If validation fails, attach error information while preserving as much data as possible.
    • The final output is returned in the specified format (JSON, YAML, or TOML).
  • Output: The type-safe and validated dict or specified format (JSON/YAML/TOML) output.
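The error-preserving behavior of Step 5 can be sketched in plain Python. This stands in for the actual Pydantic `KVOut`/`KVPayload` models (whose fields are not documented in this README); the `structure` helper is illustrative only:

```python
# Types Step 4 can emit, per the README: int, float, bool, str, list, None.
ALLOWED_TYPES = (int, float, bool, str, list, type(None))

def structure(pairs: dict) -> dict:
    """Validate normalized pairs, keeping good entries and attaching
    error info for bad ones instead of discarding data."""
    data, errors = {}, {}
    for key, value in pairs.items():
        if isinstance(value, ALLOWED_TYPES):
            data[key] = value
        else:
            errors[key] = f"unsupported type: {type(value).__name__}"
            data[key] = str(value)  # preserve as much data as possible
    return {"data": data, "errors": errors}
```

In the real server, Pydantic performs this validation against a schema and the result is then serialized to the requested format (JSON, YAML, or TOML).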

This pipeline is designed to accommodate future list format support and Pydantic schema extensions.

Note on TOML Output Limitations

Due to TOML specifications, arrays of objects (dicts) and deeply nested structures cannot be directly represented in TOML output. If your data contains such structures, prefer the JSON or YAML output tools.


FAQ

What is the Key-Value Extractor MCP server?
Key-Value Extractor is a Model Context Protocol (MCP) server profile on explainx.ai. MCP lets AI hosts (e.g. Claude Desktop, Cursor) call tools and resources through a standard interface; this page summarizes categories, install hints, and community ratings.
How do MCP servers relate to agent skills?
Skills are reusable instruction packages (often SKILL.md); MCP servers expose live capabilities. Teams frequently combine both—skills for workflows, MCP for APIs and data. See explainx.ai/skills and explainx.ai/mcp-servers for parallel directories.
How are reviews shown for Key-Value Extractor?
This profile displays 10 aggregated ratings (sample rows for discoverability plus signed-in user reviews). Average score is about 4.5 out of 5—verify behavior in your own environment before production use.
MCP server reviews

Ratings

4.5 · 10 reviews
  • Shikha Mishra· Oct 10, 2024

    Key-Value Extractor is among the better-indexed MCP projects we tried; the explainx.ai summary tracks the official description.

  • Piyush G· Sep 9, 2024

    We evaluated Key-Value Extractor against two servers with overlapping tools; this profile had the clearer scope statement.

  • Chaitanya Patil· Aug 8, 2024

    Useful MCP listing: Key-Value Extractor is the kind of server we cite when onboarding engineers to host + tool permissions.

  • Sakshi Patil· Jul 7, 2024

    Key-Value Extractor reduced integration guesswork — categories and install configs on the listing matched the upstream repo.

  • Ganesh Mohane· Jun 6, 2024

    I recommend Key-Value Extractor for teams standardizing on MCP; the explainx.ai page compares cleanly with sibling servers.

  • Oshnikdeep· May 5, 2024

    Strong directory entry: Key-Value Extractor surfaces stars and publisher context so we could sanity-check maintenance before adopting.

  • Dhruvi Jain· Apr 4, 2024

    Key-Value Extractor has been reliable for tool-calling workflows; the MCP profile page is a good permalink for internal docs.

  • Rahul Santra· Mar 3, 2024

    According to our notes, Key-Value Extractor benefits from clear Model Context Protocol framing — fewer ambiguous “AI plugin” claims.

  • Pratham Ware· Feb 2, 2024

    We wired Key-Value Extractor into a staging workspace; the listing’s GitHub and npm pointers saved time versus hunting across READMEs.

  • Yash Thakker· Jan 1, 2024

    Key-Value Extractor is a well-scoped MCP server in the explainx.ai directory — install snippets and categories matched our Claude Code setup.