MCP (Model Context Protocol) is an open protocol launched by Anthropic that allows AI models (such as Claude, GPT, etc.) to call external tools through standardized interfaces. With the Sora MCP Server provided by AceData Cloud, you can use OpenAI Sora to generate AI videos directly in AI clients such as Claude Desktop, VS Code, and Cursor.
## Feature Overview
The Sora MCP Server provides the following core functionalities:

- Text to Video Generation — Generate high-quality videos from text prompts
- Image to Video Generation — Generate videos based on images
- Character Consistency Video — Maintain character consistency using reference images
- Asynchronous Generation — Support for asynchronous task submission and result querying
- Multiple Screen Orientations — Support for landscape and portrait modes
- Task Querying — Monitor generation progress and obtain results
## Prerequisites
Before using the server, you need to obtain an AceData Cloud API Token:

- Register or log in to the AceData Cloud platform
- Go to the Sora Videos API page
- Click “Acquire” to get the API Token (first-time applicants receive free credits)
## Installation and Configuration
### Method 1: pip Installation (Recommended)
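The package name on PyPI is `mcp-sora` (see the More Information section), so a standard install is:

```bash
pip install mcp-sora
```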
### Method 2: Source Installation
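A sketch of a source install, assuming the repository (AceDataCloud/MCPSora, from the More Information section) is hosted on GitHub and ships standard Python packaging metadata:

```bash
# Clone the repository and install it in editable mode
git clone https://github.com/AceDataCloud/MCPSora.git
cd MCPSora
pip install -e .
```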
Either method installs the `mcp-sora` command, which starts the service.
## Using in Claude Desktop
Edit the Claude Desktop configuration file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
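A sketch of the configuration entry. The server key `"sora"` and the `ACEDATA_API_TOKEN` environment variable name are illustrative assumptions; check the project README for the exact variable the server expects:

```json
{
  "mcpServers": {
    "sora": {
      "command": "mcp-sora",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```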
Alternatively, you can launch the server with `uvx`, with no need to install the package in advance.
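A `uvx`-based configuration might look like the following sketch (again, the `"sora"` key and token variable name are assumptions):

```json
{
  "mcpServers": {
    "sora": {
      "command": "uvx",
      "args": ["mcp-sora"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```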
## Using in VS Code / Cursor
Create a `.vscode/mcp.json` file in the project root directory.
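A minimal sketch of `.vscode/mcp.json`, using the `servers` key and stdio transport that VS Code's MCP configuration format defines; the `"sora"` name and `ACEDATA_API_TOKEN` variable are illustrative:

```json
{
  "servers": {
    "sora": {
      "type": "stdio",
      "command": "mcp-sora",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```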
Or run it via `uvx` instead of a local install.
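The `uvx` variant of the same sketch only swaps the command and adds the package as an argument:

```json
{
  "servers": {
    "sora": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-sora"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```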
## Available Tools
| Tool Name | Description |
|---|---|
| `sora_generate_video` | Generate video from text prompts |
| `sora_generate_video_from_image` | Generate video based on images |
| `sora_generate_video_with_character` | Generate consistent video using reference character images |
| `sora_generate_video_async` | Asynchronously submit video generation tasks |
| `sora_get_task` | Query the status of a single task |
| `sora_get_tasks_batch` | Batch query task statuses |
## Usage Examples
After configuration, you can call these functions directly from the AI client using natural language, for example:

- “Help me generate a video of a cat running on the grass using Sora”
- “Generate a video from this character photo, maintaining character consistency”
- “Generate a portrait video with the content of a city sunrise”
- “Asynchronously generate a video and check the results later”
## More Information
- GitHub Repository: AceDataCloud/MCPSora
- PyPI Package: mcp-sora
- API Documentation: Sora Video Generation API

