MCP (Model Context Protocol) is an open protocol introduced by Anthropic that allows AI models (such as Claude, GPT, etc.) to call external tools through standardized interfaces. With the Seedance MCP Server provided by AceData Cloud, you can use ByteDance's Seedance to generate AI videos directly within AI clients such as Claude Desktop, VS Code, and Cursor.

## Documentation Index
Fetch the complete documentation index at: https://docs.acedata.cloud/llms.txt
Use this file to discover all available pages before exploring further.
## Feature Overview

Seedance MCP Server offers the following core features:

- Text-to-Video Generation — Generate high-quality videos from text prompts
- Image-to-Video Generation — Generate videos using images as references (first frame, last frame, reference image modes)
- Multi-Model Support — Supports various models including Seedance 1.5 Pro, 1.0 Pro, 1.0 Lite, etc.
- Multiple Resolutions — Supports 480p, 720p, 1080p resolutions
- Various Aspect Ratios — Supports 16:9, 9:16, 1:1, 4:3, 3:4, 21:9, and other ratios
- Flexible Duration — Supports video lengths from 2 to 12 seconds
- Audio Generation — Some models support simultaneous audio generation
- Task Querying — Monitor generation progress and retrieve results
## Prerequisites

Before use, you need to obtain an AceData Cloud API Token:

- Register or log in at the AceData Cloud Platform
- Navigate to the Seedance Videos API page
- Click “Acquire” to get your API Token (first-time applicants receive free quota)
## Installation and Configuration
### Method 1: pip Installation (Recommended)
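Since the package is published on PyPI as `mcp-seedance` (see More Information below), installation is a single command:

```shell
# Install the Seedance MCP server from PyPI
pip install mcp-seedance
```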
### Method 2: Source Installation
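A sketch of installing from source, assuming the GitHub repository listed under More Information (`AceDataCloud/MCPSeedance`):

```shell
# Clone the repository and install it in editable mode
git clone https://github.com/AceDataCloud/MCPSeedance.git
cd MCPSeedance
pip install -e .
```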
Both methods install the `mcp-seedance` command.
## Using in Claude Desktop

Edit the Claude Desktop configuration file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
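A minimal configuration sketch for a pip-installed server, following Claude Desktop's standard `mcpServers` layout (the `ACEDATA_API_TOKEN` variable name is an assumption; check the server's README for the exact name):

```json
{
  "mcpServers": {
    "seedance": {
      "command": "mcp-seedance",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```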
Alternatively, you can run the server with `uvx` (no need to pre-install packages):
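A sketch of a `uvx`-based Claude Desktop configuration (the `ACEDATA_API_TOKEN` variable name is an assumption):

```json
{
  "mcpServers": {
    "seedance": {
      "command": "uvx",
      "args": ["mcp-seedance"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```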
## Using in VS Code / Cursor

Create `.vscode/mcp.json` in the project root directory:
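A minimal `.vscode/mcp.json` sketch for a pip-installed server, following VS Code's `servers` layout for stdio MCP servers (the `ACEDATA_API_TOKEN` variable name is an assumption):

```json
{
  "servers": {
    "seedance": {
      "type": "stdio",
      "command": "mcp-seedance",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```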
Or with `uvx`:
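The same file using `uvx` instead of a pre-installed command (the environment variable name is assumed):

```json
{
  "servers": {
    "seedance": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-seedance"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```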
## Available Tools List

| Tool Name | Description |
|---|---|
| `seedance_generate_video` | Generate video from text prompt |
| `seedance_generate_video_from_image` | Generate video using image as reference |
| `seedance_get_task` | Query single task status |
| `seedance_get_tasks_batch` | Batch query task statuses |
| `seedance_list_models` | List all available models and their capabilities |
| `seedance_list_resolutions` | List available resolutions and aspect ratios |
| `seedance_list_actions` | List all available tools and workflow examples |
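Under the hood, an MCP client invokes these tools via JSON-RPC `tools/call` requests. A sketch of such a request for `seedance_generate_video` follows; the argument names and values here are illustrative assumptions, not the server's confirmed schema (use `seedance_list_actions` to discover the real one):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "seedance_generate_video",
    "arguments": {
      "prompt": "A time-lapse of a busy city street at dusk",
      "resolution": "1080p",
      "ratio": "16:9",
      "duration": 8
    }
  }
}
```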
## Usage Examples

After configuration, you can directly invoke these features in AI clients using natural language, for example:

- “Help me generate a city street time-lapse video using Seedance”
- “Use this photo as the first frame to generate an 8-second video”
- “Generate a 1080p vertical 9:16 short video”
- “Generate a video with audio using the Seedance 1.5 Pro model”
## More Information
- GitHub Repository: AceDataCloud/MCPSeedance
- PyPI Package: mcp-seedance
- API Documentation: Seedance Video Generation API

