MCP (Model Context Protocol) is an open protocol introduced by Anthropic that lets AI models (such as Claude, GPT, etc.) call external tools through standardized interfaces. With the Luma MCP Server provided by AceData Cloud, you can generate AI videos directly in AI clients such as Claude Desktop, VS Code, and Cursor.
Documentation Index
Fetch the complete documentation index at: https://docs.acedata.cloud/llms.txt
Use this file to discover all available pages before exploring further.
Feature Overview
Luma MCP Server provides the following core functionalities:
- Text to Video Generation — Generate high-quality videos from text prompts
- Image to Video Generation — Generate videos starting or ending with images
- Video Continuation — Continue generating from the last frame of an existing video
- Multiple Aspect Ratios — Supports various aspect ratios such as 16:9, 9:16, and 1:1
- Visual Enhancement — Optional visual quality enhancement feature
- Task Querying — Monitor generation progress and obtain results
Prerequisites
Before use, you need to obtain an AceData Cloud API Token:
- Register or log in to the AceData Cloud platform
- Go to the Luma Videos API page
- Click “Acquire” to get the API Token (first-time applicants receive free credits)
Installation Configuration
Method 1: pip Installation (Recommended)
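A typical installation from PyPI, assuming the package name matches the PyPI listing referenced at the end of this page:

```shell
# Install the Luma MCP Server from PyPI
pip install mcp-luma
```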
Method 2: Source Installation
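A sketch of a source installation, assuming the repository URL from the GitHub reference below and a standard Python project layout that supports an editable install:

```shell
# Clone the repository and install it in editable mode
git clone https://github.com/AceDataCloud/MCPLuma.git
cd MCPLuma
pip install -e .
```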
After installation, run the `mcp-luma` command to start the service.
Using in Claude Desktop
Edit the Claude Desktop configuration file:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
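A minimal configuration sketch using the installed `mcp-luma` command. The `ACEDATA_API_TOKEN` environment variable name is an assumption; check the package documentation for the exact variable the server expects:

```json
{
  "mcpServers": {
    "luma": {
      "command": "mcp-luma",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```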
Alternatively, you can launch the server with uvx (no need to install the package in advance):
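A sketch of the uvx variant, with the same caveat that the `ACEDATA_API_TOKEN` variable name is an assumption:

```json
{
  "mcpServers": {
    "luma": {
      "command": "uvx",
      "args": ["mcp-luma"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```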
Using in VS Code / Cursor
Create .vscode/mcp.json in the project root directory:
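A minimal sketch of the VS Code / Cursor configuration, assuming a stdio server and the hypothetical `ACEDATA_API_TOKEN` environment variable from the examples above:

```json
{
  "servers": {
    "luma": {
      "type": "stdio",
      "command": "mcp-luma",
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```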
Or, to launch via uvx instead of a local installation:
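The uvx variant of the same sketch (the token variable name remains an assumption):

```json
{
  "servers": {
    "luma": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-luma"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```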
Available Tools List
| Tool Name | Description |
|---|---|
| `luma_generate_video` | Generate a video from a text prompt |
| `luma_generate_video_from_image` | Generate a video from an image |
| `luma_extend_video` | Continue an existing video |
| `luma_extend_video_from_url` | Continue from a video specified by URL |
| `luma_get_task` | Query the status of a single task |
| `luma_get_tasks_batch` | Query multiple task statuses in a batch |
Usage Examples
After configuration, you can call these functions directly in the AI client using natural language, for example:
- “Help me generate a video of a sunset by the sea”
- “Use this photo as the first frame to generate a 5-second video”
- “Continue this video and extend it further”
- “Generate a vertical video with a 9:16 aspect ratio”
More Information
- GitHub Repository: AceDataCloud/MCPLuma
- PyPI Package: mcp-luma
- API Documentation: Luma Video Generation API

