MCP (Model Context Protocol) is an open protocol introduced by Anthropic that lets AI models (such as Claude, GPT, etc.) call external tools through standardized interfaces. With the Midjourney MCP Server provided by AceData Cloud, you can generate and edit AI images directly in AI clients such as Claude Desktop, VS Code, and Cursor.

## Documentation Index
Fetch the complete documentation index at: https://docs.acedata.cloud/llms.txt
Use this file to discover all available pages before exploring further.
## Feature Overview
The Midjourney MCP Server provides the following core functionalities:

- Image Generation (Imagine) — Generate high-quality images from text prompts
- Image Editing — Make local modifications to generated images
- Image Transformation — Zoom in, zoom out, and pan existing images
- Image Blending (Blend) — Merge multiple images into a new image
- Reference Image Generation — Guide generation using reference images
- Image Description (Describe) — Generate text descriptions based on images
- Prompt Translation — Translate Chinese prompts into English
- Seed Retrieval — Obtain the seed value of an image for reproduction
- Video Generation — Generate dynamic videos based on images
- Task Query — Monitor generation progress and obtain results
## Prerequisites
Before use, you need to obtain an AceData Cloud API Token:

- Register or log in to the AceData Cloud platform
- Go to the Midjourney Imagine API page
- Click “Acquire” to get the API Token (first-time applicants receive free credits)
## Installation and Configuration
### Method 1: pip Installation (Recommended)
### Method 2: Source Installation
After installation, run the `mcp-midjourney` command to start the service.
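As a sketch of both installation paths, assuming the package name matches the PyPI package (`mcp-midjourney`) and the repository listed under More Information:

```shell
# Method 1: install the published package from PyPI
pip install mcp-midjourney

# Method 2: install from source (repository URL assumed from the
# GitHub project name AceDataCloud/MCPMidjourney)
git clone https://github.com/AceDataCloud/MCPMidjourney.git
cd MCPMidjourney
pip install -e .

# Start the MCP server; it needs your AceData API token, typically
# supplied via an environment variable (exact name may differ --
# check the project README)
mcp-midjourney
```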
## Using in Claude Desktop
Edit the Claude Desktop configuration file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Alternatively, run the server with `uvx` (no need to install the package in advance):
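A minimal sketch of the configuration, using the `uvx` variant. The environment variable name `ACEDATA_API_TOKEN` is an assumption; check the project README for the actual name the server expects:

```json
{
  "mcpServers": {
    "midjourney": {
      "command": "uvx",
      "args": ["mcp-midjourney"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```

If you installed the package with pip instead, replace `"command": "uvx"` and the `args` line with `"command": "mcp-midjourney"`.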
## Using in VS Code / Cursor
Create a `.vscode/mcp.json` file in the project root directory:
Or with `uvx`:
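A sketch of the `.vscode/mcp.json` file, assuming the `servers` key used by VS Code's MCP configuration format; as above, the `ACEDATA_API_TOKEN` variable name is an assumption:

```json
{
  "servers": {
    "midjourney": {
      "command": "uvx",
      "args": ["mcp-midjourney"],
      "env": {
        "ACEDATA_API_TOKEN": "your-api-token"
      }
    }
  }
}
```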
## Available Tools
| Tool Name | Description |
|---|---|
| `midjourney_imagine` | Generate images from text prompts |
| `midjourney_edit` | Edit local areas of existing images |
| `midjourney_transform` | Zoom in, zoom out, and pan existing images |
| `midjourney_blend` | Merge multiple images into one |
| `midjourney_with_reference` | Generate images using reference images |
| `midjourney_describe` | Generate text descriptions based on images |
| `midjourney_translate` | Translate prompts into English |
| `midjourney_get_seed` | Retrieve the seed value of an image |
| `midjourney_generate_video` | Generate videos based on images |
| `midjourney_extend_video` | Extend existing videos |
| `midjourney_get_task` | Query the status of a single task |
| `midjourney_get_tasks_batch` | Batch query task statuses |
## Usage Examples
After configuration, you can call these functions in the AI client using natural language, for example:

- “Help me generate a cyberpunk-style city night scene”
- “Change the background of this image to the seaside”
- “Blend these four images into one”
- “Describe the content of this image”
- “Make a video from this image”
- “Zoom in on the second variant of this image”
## More Information
- GitHub Repository: AceDataCloud/MCPMidjourney
- PyPI Package: mcp-midjourney
- API Documentation: Midjourney Generation API

