avatar-video
heygen-com/skills · updated Apr 8, 2026
Avatar Video
Create AI avatar videos with full control over avatars, voices, scripts, and backgrounds using POST /v3/videos. Two creation modes via discriminated union on type:
"type": "avatar"+avatar_id— use a HeyGen avatar from the library"type": "image"+image(AssetInput) — animate any photo via Avatar IV
Authentication
All requests require the X-Api-Key header. Set the HEYGEN_API_KEY environment variable.
```bash
curl -X GET "https://api.heygen.com/v3/avatars" \
  -H "X-Api-Key: $HEYGEN_API_KEY"
```
Tool Selection
If HeyGen MCP tools are available (mcp__heygen__*), prefer them over direct HTTP API calls — they handle authentication and request formatting automatically.
| Task | MCP Tool | Fallback (Direct API) |
|---|---|---|
| Check video status / get URL | mcp__heygen__get_video | GET /v3/videos/{video_id} |
| List account videos | mcp__heygen__list_videos | GET /v3/videos |
| Delete a video | mcp__heygen__delete_video | DELETE /v3/videos/{video_id} |
Video generation (POST /v3/videos) and avatar/voice listing are done via direct API calls — see reference files below.
Default Workflow
- List avatar looks — `GET /v3/avatars/looks` → pick a look, note its `id` (this is the `avatar_id`) and `default_voice_id`. See avatars.md
- List voices (if needed) — `GET /v3/voices` → pick a voice matching the avatar's gender/language. See voices.md
- Write the script — Structure scenes with one concept each. See scripts.md
- Generate the video — `POST /v3/videos` with `avatar_id`, `voice_id`, `script`, and optional `background` per scene. See video-generation.md
- Poll for completion — `GET /v3/videos/{video_id}` until status is `completed`. See video-status.md
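The workflow above can be sketched end to end with only the standard library. Payload and response field names (`video_id`, `status`) follow this page's description but are assumptions; confirm the exact schema in references/video-generation.md and references/video-status.md.

```python
"""Sketch of the default workflow: build the request, create the video,
then poll with a generous timeout. Field names are assumptions."""
import json
import os
import time
import urllib.request

API = "https://api.heygen.com/v3"


def build_video_request(avatar_id, voice_id, script, background=None):
    """Pure helper: assemble the (assumed) POST /v3/videos body."""
    body = {"type": "avatar", "avatar_id": avatar_id,
            "voice_id": voice_id, "script": script}
    if background is not None:
        body["background"] = background
    return body


def api_call(method, path, body=None):
    """Minimal authenticated call using the X-Api-Key header."""
    headers = {"X-Api-Key": os.environ["HEYGEN_API_KEY"],
               "Content-Type": "application/json"}
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(API + path, data=data,
                                 headers=headers, method=method)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def generate_and_wait(avatar_id, voice_id, script, timeout_s=1800):
    """Create the video, then poll until status is "completed"."""
    created = api_call("POST", "/videos",
                       build_video_request(avatar_id, voice_id, script))
    video_id = created["video_id"]  # response field name assumed
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = api_call("GET", f"/videos/{video_id}")
        if status.get("status") == "completed":
            return status
        time.sleep(15)  # generation often takes 5-15 minutes
    raise TimeoutError(f"video {video_id} still processing after {timeout_s}s")
```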
Routing: This Skill vs Create Video
This skill = precise control (specific avatar, exact script, custom background). create-video = prompt-based ("make me a video about X", AI handles the rest).
Reference Files
Read these as needed — they contain endpoint details, request/response schemas, and code examples (curl, TypeScript, Python).
Core workflow:
- references/video-generation.md — `POST /v3/videos` request fields, avatar input modes, voice settings, backgrounds
- references/avatars.md — `GET /v3/avatars` (groups) and `GET /v3/avatars/looks` (looks → `avatar_id`)
- references/voices.md — `GET /v3/voices` with filtering by language, gender, engine
- references/video-status.md — `GET /v3/videos/{id}` polling patterns and download
Customization:
- references/scripts.md — Script writing, SSML break tags, pacing
- references/backgrounds.md — Solid color and image backgrounds
- references/captions.md — Auto-generated captions/subtitles
- references/text-overlays.md — Text overlays with fonts and positioning
Advanced:
- references/photo-avatars.md — Animate photos via `type: "image"` (Avatar IV), AI-generated avatars
- references/templates.md — Template listing and variable replacement
- references/remotion-integration.md — Using HeyGen avatars in Remotion compositions
- references/webhooks.md — Webhook endpoints and events
- references/assets.md — Uploading images, videos, audio
- references/dimensions.md — Resolution and aspect ratios
- references/quota.md — Credit system and usage limits
Best Practices
- Preview avatars before generating — Use `GET /v3/avatars/looks` and download `preview_image_url` so the user can see the avatar before committing
- Use avatar's default voice — Most avatars have a `default_voice_id` pre-matched for natural results
- Fallback: match gender manually — If no default voice, ensure avatar and voice genders match
- Use test mode for development — Set `test: true` to avoid consuming credits (output will be watermarked)
- Set generous timeouts — Video generation often takes 5-15 minutes, sometimes longer
- Validate inputs — Check avatar and voice IDs exist before generating
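Two of these practices (test mode and input validation) can be sketched as a small pre-flight step. The `test` flag is described on this page; the shape of the known-ID sets is left to the caller, since the listing responses are documented in the reference files.

```python
# Pre-flight sketch: reject unknown IDs before generating, and force test
# mode during development. The payload shape is an assumption (see above).
def preflight(payload, known_avatar_ids, known_voice_ids):
    """Validate avatar/voice IDs and return a test-mode copy of the payload."""
    if payload["avatar_id"] not in known_avatar_ids:
        raise ValueError(f"unknown avatar_id: {payload['avatar_id']}")
    if payload["voice_id"] not in known_voice_ids:
        raise ValueError(f"unknown voice_id: {payload['voice_id']}")
    # test mode: watermarked output, no credits consumed
    return {**payload, "test": True}
```

Populate `known_avatar_ids` and `known_voice_ids` from `GET /v3/avatars/looks` and `GET /v3/voices` before calling this.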