Search: "text" · 18 indexed skills · max 10 per page
regex-vs-llm-structured-text
affaan-m/everything-claude-code · AI/ML
Hybrid regex-and-LLM framework for parsing structured text, optimizing cost by handling 95–98% of items with regex and reserving LLM calls for edge cases.

- Combines regex extraction with confidence scoring to flag low-confidence items, then validates only those items with an LLM, reducing LLM calls by ~95% versus all-LLM approaches
- Includes production-ready Python patterns for regex parsing, confidence scoring, and hybrid pipeline orchestration, with real metrics from a 410-item quiz parsing example
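The pattern this skill describes can be sketched in a few lines. This is an illustrative Python sketch, not the skill's actual code: the regex, scoring heuristics, and threshold are assumptions chosen for the example — only items falling below the threshold would cost an LLM call.

```python
import re
from dataclasses import dataclass

# Example pattern for numbered quiz questions like "12. What is X?"
QUESTION_RE = re.compile(r"^(?P<num>\d+)\.\s+(?P<body>.+?)\?$")

@dataclass
class Parsed:
    text: str
    confidence: float  # 0.0-1.0; low scores get escalated to the LLM

def parse_item(line: str) -> Parsed:
    m = QUESTION_RE.match(line.strip())
    if m is None:
        return Parsed(line, 0.0)       # regex failed outright
    body = m.group("body")
    score = 1.0
    if len(body) < 5:                  # suspiciously short match
        score -= 0.5
    if "?" in body:                    # nested question marks -> ambiguous
        score -= 0.3
    return Parsed(body, score)

def split_by_confidence(lines, threshold=0.7):
    """Return (accepted, needs_llm); only the second list costs LLM calls."""
    accepted, needs_llm = [], []
    for line in lines:
        p = parse_item(line)
        (accepted if p.confidence >= threshold else needs_llm).append(p)
    return accepted, needs_llm
```

In a 410-item run like the one cited above, most items would land in `accepted` and only the handful of low-confidence parses would be re-validated by the model.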
text-to-speech
elevenlabs/skills · Productivity
Natural speech synthesis from text across 70+ languages with multiple quality and latency models.

- Six models available, ranging from highest-quality eleven_v3 to ultra-low-latency eleven_flash_v2_5 (~75 ms), with language and speed tradeoffs documented
- Supports 13+ output formats, including MP3, PCM, WAV, Opus, and telephony codecs (μ-law, A-law), for web, streaming, and real-time applications
- Fine-tune voice characteristics via stability, similarity boost, style, speaker boost, and speed
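A request to this kind of endpoint can be assembled as below. The endpoint shape and field names follow the public ElevenLabs REST API (`POST /v1/text-to-speech/{voice_id}`) as I understand it; verify them against the current API reference before relying on this sketch, and note the voice ID here is a placeholder.

```python
def build_tts_request(text, voice_id, model_id="eleven_flash_v2_5",
                      output_format="mp3_44100_128",
                      stability=0.5, similarity_boost=0.75):
    """Assemble URL, query params, and JSON body for a synthesis call.

    model_id and the voice_settings keys mirror the parameters listed
    above (model choice, stability, similarity boost); defaults are
    illustrative assumptions, not documented defaults.
    """
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    params = {"output_format": output_format}
    body = {
        "text": text,
        "model_id": model_id,
        "voice_settings": {
            "stability": stability,
            "similarity_boost": similarity_boost,
        },
    }
    return url, params, body
```

The returned pieces would then be sent with any HTTP client (e.g. `requests.post(url, params=params, json=body, headers={"xi-api-key": ...})`), with the response body containing the encoded audio.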
alicloud-ai-text-document-mind-test
cinience/alicloud-skills · Cloud
Category: test
text-to-speech
inference-sh/skills · Productivity
Multiple text-to-speech models via the inference.sh CLI for voiceovers, podcasts, and accessibility.

- Six models available: ElevenLabs (premium, 22+ voices, 32 languages), DIA TTS (conversational), Kokoro TTS (fast), Chatterbox, Higgs Audio (emotional control), and VibeVoice (long-form podcasts)
- Core capabilities include basic speech synthesis, expressive speech with emotion control, and conversational dialogue generation
- Easily combines with video tools like OmniHuman to create talking-head videos
text-to-speech
inference-sh/skills · Productivity
Convert text to natural speech via inference.sh CLI.
speech-to-text
elevenlabs/skills · Productivity
Transcribe audio and video to text with speaker identification, word-level timestamps, and 90+ language support.

- Two models available: scribe_v2 for batch transcription with high accuracy, and scribe_v2_realtime for live transcription with ~150 ms latency
- Speaker diarization labels each word with a speaker ID; keyterm prompting helps recognize domain-specific vocabulary and proper nouns
- Word-level timestamps include type classification (word, spacing, audio event) for precise timing
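A diarized, word-level result like the one described can be folded into speaker turns as sketched below. The field names (`words`, `type`, `speaker_id`, `text`) mirror the description above but are assumptions, not the exact response schema — check the transcription API reference for the real shape.

```python
def speaker_turns(transcription: dict):
    """Group word-level entries into (speaker, text) turns.

    Skips non-word entries (spacing, audio events) using the type
    classification described above, and merges consecutive words
    from the same speaker into one turn.
    """
    turns = []
    for w in transcription.get("words", []):
        if w.get("type") != "word":
            continue
        if turns and turns[-1][0] == w["speaker_id"]:
            turns[-1] = (w["speaker_id"], turns[-1][1] + " " + w["text"])
        else:
            turns.append((w["speaker_id"], w["text"]))
    return turns
```

This is the typical post-processing step for meeting transcripts: the word stream becomes "speaker_1: …" / "speaker_2: …" dialogue lines.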
paddleocr-text-recognition
aidenwu0209/paddleocr-skills · Productivity
Extract text from images, PDFs, and documents via the PaddleOCR API with structured JSON output.

- Supports URLs and local file paths for images and PDFs; returns the complete recognized text in JSON format
- Mandatory API-only approach: executes python scripts/ocr_caller.py with --file-url or --file-path parameters
- Requires initial configuration with PADDLEOCR_OCR_API_URL and PADDLEOCR_ACCESS_TOKEN; displays the full extracted text without truncation or summarization
- Handles authentication and rate limiting
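The mandated invocation pattern can be wrapped as below. Only the entry point (`scripts/ocr_caller.py`) and the `--file-url`/`--file-path` flags come from the skill description; the helper function itself is a hypothetical convenience, and the environment variables must be exported before the command is actually run.

```python
def build_ocr_command(file_path=None, file_url=None):
    """Construct the argv list for the skill's mandated entry point.

    Exactly one of file_path / file_url must be given, matching the
    --file-path / --file-url parameters described above.
    """
    if (file_path is None) == (file_url is None):
        raise ValueError("pass exactly one of file_path or file_url")
    cmd = ["python", "scripts/ocr_caller.py"]
    if file_path is not None:
        cmd += ["--file-path", file_path]
    else:
        cmd += ["--file-url", file_url]
    return cmd
```

The list form can be passed straight to `subprocess.run(cmd, check=True, env={...})` with `PADDLEOCR_OCR_API_URL` and `PADDLEOCR_ACCESS_TOKEN` set in the environment.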
speech-to-text
inference-sh/skills · Productivity
Transcribe audio to text using ElevenLabs Scribe or Whisper models via the inference.sh CLI.

- Three model options: ElevenLabs Scribe v2 (98%+ accuracy with diarization), Fast Whisper V3, and Whisper V3 Large, for varying speed/accuracy tradeoffs
- Supports 99+ languages, optional timestamps, speaker diarization, and translation to English
- Common workflows include meeting transcription, podcast transcripts, video subtitles, and voice note conversion
- Requires the inference.sh CLI (infsh) installed