| text | heading1 | source_page_url | source_page_title |
|---|---|---|---|
```python
# NOTE: the beginning of this snippet was truncated in the source; the imports
# and model setup below are reconstructed from the Parler-TTS streaming example
# and should be treated as an approximation.
import torch
import spaces
from threading import Thread
from parler_tts import ParlerTTSForConditionalGeneration, ParlerTTSStreamer
from transformers import AutoTokenizer, AutoFeatureExtractor, set_seed

device = "cuda:0" if torch.cuda.is_available() else "cpu"
repo_id = "parler-tts/parler_tts_mini_v0.1"
model = ParlerTTSForConditionalGeneration.from_pretrained(
    repo_id, low_cpu_mem_usage=True
).to(device)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)

sampling_rate = model.audio_encoder.config.sampling_rate
frame_rate = model.audio_encoder.config.frame_rate

@spaces.GPU
def read_response(answer):
    play_steps_in_s = 2.0
    play_steps = int(frame_rate * play_steps_in_s)

    description = "Jenny speaks at an average pace with a calm delivery in a very confined sounding environment with clear audio quality."
    description_tokens = tokenizer(description, return_tensors="pt").to(device)

    streamer = ParlerTTSStreamer(model, device=device, play_steps=play_steps)
    prompt = tokenizer(answer, return_tensors="pt").to(device)

    generation_kwargs = dict(
        input_ids=description_tokens.input_ids,
        prompt_input_ids=prompt.input_ids,
        streamer=streamer,
        do_sample=True,
        temperature=1.0,
        min_new_tokens=10,
    )

    set_seed(42)
    thread = Thread(target=model.generate, kwargs=generation_kwargs)
    thread.start()

    for new_audio in streamer:
        print(f"Sample of length: {round(new_audio.shape[0] / sampling_rate, 2)} seconds")
        # numpy_to_mp3 is a helper defined earlier in the guide
        yield answer, numpy_to_mp3(new_audio, sampling_rate=sampling_rate)
```
|
The Logic
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
You can see our final application [here](https://huggingface.co/spaces/gradio/magic-8-ball)!
|
Conclusion
|
https://gradio.app/guides/streaming-ai-generated-audio
|
Streaming - Streaming Ai Generated Audio Guide
|
```python
import gradio as gr
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite:///your_database.db')

with gr.Blocks() as demo:
    gr.LinePlot(pd.read_sql_query("SELECT time, price from flight_info;", engine), x="time", y="price")
```
Let's see a more interactive plot involving filters that modify your SQL query:
```python
import gradio as gr
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite:///your_database.db')

with gr.Blocks() as demo:
    origin = gr.Dropdown(["DFW", "DAL", "HOU"], value="DFW", label="Origin")
    # Quote the value so the SQL string literal is valid (and beware of
    # injection with untrusted input; parameterized queries are safer).
    gr.LinePlot(
        lambda origin: pd.read_sql_query(f"SELECT time, price from flight_info WHERE origin = '{origin}';", engine),
        inputs=origin,
        x="time",
        y="price",
    )
```
|
SQLite
|
https://gradio.app/guides/connecting-to-a-database
|
Data Science And Plots - Connecting To A Database Guide
|
If you're using a different database format, all you have to do is swap out the engine, e.g.
```python
engine = create_engine('postgresql://username:password@host:port/database_name')
```
```python
engine = create_engine('mysql://username:password@host:port/database_name')
```
```python
engine = create_engine('oracle://username:password@host:port/database_name')
```
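As a quick sketch of how the swap works in context, here's the earlier line plot pointed at a Postgres engine instead (the connection string and table name are placeholders, and you'll also need the matching driver installed, e.g. `psycopg2` for Postgres):
```python
import gradio as gr
import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials; only the connection string changes between backends.
engine = create_engine('postgresql://username:password@localhost:5432/flights')

with gr.Blocks() as demo:
    gr.LinePlot(pd.read_sql_query("SELECT time, price from flight_info;", engine), x="time", y="price")

demo.launch()
```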
|
Postgres, mySQL, and other databases
|
https://gradio.app/guides/connecting-to-a-database
|
Data Science And Plots - Connecting To A Database Guide
|
Use any of the standard Gradio form components to filter your data. You can do this via event listeners or function-as-value syntax. Let's look at the event listener approach first:
$code_plot_guide_filters_events
$demo_plot_guide_filters_events
And this would be the function-as-value approach for the same demo.
$code_plot_guide_filters
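If the rendered demo isn't visible here, a minimal sketch of the function-as-value pattern looks like this, with a made-up flight DataFrame standing in for a real data source:
```python
import gradio as gr
import pandas as pd

flights = pd.DataFrame({
    "time": [1, 2, 3, 1, 2, 3],
    "price": [100, 120, 115, 90, 95, 105],
    "origin": ["DFW", "DFW", "DFW", "DAL", "DAL", "DAL"],
})

with gr.Blocks() as demo:
    origin = gr.Dropdown(["DFW", "DAL"], value="DFW", label="Origin")
    # Function-as-value: the plot re-runs the filter whenever `origin` changes.
    gr.LinePlot(
        lambda origin: flights[flights["origin"] == origin],
        inputs=origin,
        x="time",
        y="price",
    )

demo.launch()
```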
|
Filters
|
https://gradio.app/guides/filters-tables-and-stats
|
Data Science And Plots - Filters Tables And Stats Guide
|
Add `gr.DataFrame` and `gr.Label` to your dashboard for some hard numbers.
$code_plot_guide_tables_stats
$demo_plot_guide_tables_stats
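For instance, a minimal sketch (with a toy DataFrame standing in for your data) might pair a table with one headline stat:
```python
import gradio as gr
import pandas as pd

flights = pd.DataFrame({"origin": ["DFW", "DAL", "HOU"], "price": [100, 120, 115]})

with gr.Blocks() as demo:
    gr.DataFrame(flights)  # the raw rows
    gr.Label(f"${flights['price'].mean():.2f}", label="Average Price")  # one hard number

demo.launch()
```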
|
Tables and Stats
|
https://gradio.app/guides/filters-tables-and-stats
|
Data Science And Plots - Filters Tables And Stats Guide
|
Plots accept a pandas DataFrame as their value. They also take `x` and `y` arguments: the names of the columns to plot on the x- and y-axes respectively. Here's a simple example:
$code_plot_guide_line
$demo_plot_guide_line
All plots have the same API, so you could swap this out with a `gr.ScatterPlot`:
$code_plot_guide_scatter
$demo_plot_guide_scatter
The y-axis column in the dataframe should have a numeric type, but the x-axis column can be strings, numbers, categories, or datetimes.
$code_plot_guide_scatter_nominal
$demo_plot_guide_scatter_nominal
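In case the rendered demos aren't visible here, the pattern reduces to a DataFrame plus column names (toy data below):
```python
import gradio as gr
import pandas as pd

df = pd.DataFrame({"time": [1, 2, 3, 4], "price": [100, 120, 115, 130]})

with gr.Blocks() as demo:
    gr.LinePlot(df, x="time", y="price")

demo.launch()
```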
|
Creating a Plot with a pd.Dataframe
|
https://gradio.app/guides/creating-plots
|
Data Science And Plots - Creating Plots Guide
|
You can break out your plot into series using the `color` argument.
$code_plot_guide_series_nominal
$demo_plot_guide_series_nominal
If you wish to assign series-specific colors, use the `color_map` arg, e.g. `gr.ScatterPlot(..., color_map={'white': '#FF9988', 'asian': '#88EEAA', 'black': '#333388'})`.
The color column can be numeric as well.
$code_plot_guide_series_quantitative
$demo_plot_guide_series_quantitative
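A minimal sketch of the `color` argument, again with toy flight data:
```python
import gradio as gr
import pandas as pd

df = pd.DataFrame({
    "time": [1, 2, 3, 1, 2, 3],
    "price": [100, 110, 105, 95, 115, 120],
    "origin": ["DFW", "DFW", "DFW", "DAL", "DAL", "DAL"],
})

with gr.Blocks() as demo:
    gr.LinePlot(df, x="time", y="price", color="origin")  # one line per origin

demo.launch()
```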
|
Breaking out Series by Color
|
https://gradio.app/guides/creating-plots
|
Data Science And Plots - Creating Plots Guide
|
You can aggregate values into groups using the `x_bin` and `y_aggregate` arguments. If your x-axis is numeric, providing an `x_bin` will create a histogram-style binning:
$code_plot_guide_aggregate_quantitative
$demo_plot_guide_aggregate_quantitative
If your x-axis is a string type instead, the values will act as category bins automatically:
$code_plot_guide_aggregate_nominal
$demo_plot_guide_aggregate_nominal
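As a sketch of the string-axis case (assuming `y_aggregate` behaves as described above, with toy data):
```python
import gradio as gr
import pandas as pd

df = pd.DataFrame({
    "origin": ["DFW", "DAL", "DFW", "HOU", "DAL"],
    "price": [100, 120, 115, 130, 90],
})

with gr.Blocks() as demo:
    # String x-axis: each origin becomes its own bin; prices are summed per bin.
    gr.BarPlot(df, x="origin", y="price", y_aggregate="sum")

demo.launch()
```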
|
Aggregating Values
|
https://gradio.app/guides/creating-plots
|
Data Science And Plots - Creating Plots Guide
|
You can use the `.select` listener to select regions of a plot. Click and drag on the plot below to select part of the plot.
$code_plot_guide_selection
$demo_plot_guide_selection
You can combine this with the `.double_click` listener to create zoom in/out effects by changing `x_lim`, which sets the bounds of the x-axis:
$code_plot_guide_zoom
$demo_plot_guide_zoom
If you had multiple plots with the same x column, your event listeners could target the x limits of all other plots so that the x-axes stay in sync.
$code_plot_guide_zoom_sync
$demo_plot_guide_zoom_sync
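A rough sketch of the zoom pattern described above, assuming `evt.index` carries the selected x-range as in the official zoom demo:
```python
import gradio as gr
import pandas as pd

df = pd.DataFrame({"time": list(range(100)), "price": [t % 17 for t in range(100)]})

with gr.Blocks() as demo:
    plot = gr.LinePlot(df, x="time", y="price")

    def zoom_in(evt: gr.SelectData):
        # evt.index holds the (min, max) of the dragged x-range
        return gr.LinePlot(x_lim=evt.index)

    def zoom_out():
        return gr.LinePlot(x_lim=None)

    plot.select(zoom_in, None, plot)
    plot.double_click(zoom_out, None, plot)

demo.launch()
```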
|
Selecting Regions
|
https://gradio.app/guides/creating-plots
|
Data Science And Plots - Creating Plots Guide
|
Take a look at how you can build an interactive dashboard where the plots are functions of other Components.
$code_plot_guide_interactive
$demo_plot_guide_interactive
It's that simple to filter and control the data presented in your visualization!
|
Making an Interactive Dashboard
|
https://gradio.app/guides/creating-plots
|
Data Science And Plots - Creating Plots Guide
|
Time plots need a datetime column on the x-axis. Here's a simple example with some flight data:
$code_plot_guide_temporal
$demo_plot_guide_temporal
|
Creating a Plot with a pd.Dataframe
|
https://gradio.app/guides/time-plots
|
Data Science And Plots - Time Plots Guide
|
You may wish to bin data by time buckets. Use `x_bin` to do so, using a string suffix with "s", "m", "h" or "d", such as "15m" or "1d".
$code_plot_guide_aggregate_temporal
$demo_plot_guide_aggregate_temporal
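A sketch of the time-bucket pattern with synthetic 15-minute readings rolled up into hourly averages:
```python
import gradio as gr
import pandas as pd

df = pd.DataFrame({
    "time": pd.date_range("2024-01-01", periods=96, freq="15min"),
    "price": range(96),
})

with gr.Blocks() as demo:
    # Bucket the datetime axis into 1-hour bins and average the prices in each.
    gr.LinePlot(df, x="time", y="price", x_bin="1h", y_aggregate="mean")

demo.launch()
```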
|
Aggregating by Time
|
https://gradio.app/guides/time-plots
|
Data Science And Plots - Time Plots Guide
|
You can use `gr.DateTime` to accept input datetime data. This works well with plots for defining the x-axis range for the data.
$code_plot_guide_datetime
$demo_plot_guide_datetime
Note how `gr.DateTime` can accept a full datetime string, or a shorthand using `now - [0-9]+[smhd]` format to refer to a past time.
You will often have many time plots, in which case you'd like to keep the x-axes in sync. The `DateTimeRange` custom component keeps a set of datetime plots in sync, and also uses the plots' `.select` listener to let you zoom in while keeping all the plots aligned.
Because it is a custom component, you first need to `pip install gradio_datetimerange`. Then run the following:
$code_plot_guide_datetimerange
$demo_plot_guide_datetimerange
Try zooming around in the plots and see how DateTimeRange updates. All the plots update their `x_lim` in sync. You also have a "Back" link in the component to quickly zoom back out.
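As a rough sketch of the wiring (the `DateTimeRange` constructor arguments and its `bind` method are recalled from the component's demo and may differ in the installed version):
```python
import gradio as gr
import pandas as pd
from gradio_datetimerange import DateTimeRange  # pip install gradio_datetimerange

df = pd.DataFrame({
    "time": pd.date_range(end=pd.Timestamp.now(), periods=100, freq="15min"),
    "price": range(100),
})

with gr.Blocks() as demo:
    daterange = DateTimeRange(["now - 24h", "now"])
    plot1 = gr.LinePlot(df, x="time", y="price")
    plot2 = gr.LinePlot(df, x="time", y="price")
    daterange.bind(plot1)  # keeps this plot's x_lim in sync with the range
    daterange.bind(plot2)

demo.launch()
```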
|
DateTime Components
|
https://gradio.app/guides/time-plots
|
Data Science And Plots - Time Plots Guide
|
In many cases, you're working with live, real-time data, not a static dataframe. In that case, you'd update the plot regularly with a `gr.Timer()`. Assuming there's a `get_data` method that gets the latest dataframe:
```python
with gr.Blocks() as demo:
    timer = gr.Timer(5)
    plot1 = gr.BarPlot(x="time", y="price")
    plot2 = gr.BarPlot(x="time", y="price", color="origin")

    timer.tick(lambda: [get_data(), get_data()], outputs=[plot1, plot2])
```
You can also use the `every` shorthand to attach a `Timer` to a component that has a function value:
```python
with gr.Blocks() as demo:
    timer = gr.Timer(5)
    plot1 = gr.BarPlot(get_data, x="time", y="price", every=timer)
    plot2 = gr.BarPlot(get_data, x="time", y="price", color="origin", every=timer)
```
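Both snippets assume a `get_data` helper; a toy stand-in might look like this (in practice it would query your database or live feed):
```python
import random
import pandas as pd

def get_data():
    """Return a fresh DataFrame each call; a real version would hit your data source."""
    return pd.DataFrame({
        "time": pd.date_range(end=pd.Timestamp.now(), periods=20, freq="s"),
        "price": [random.uniform(90, 130) for _ in range(20)],
        "origin": [random.choice(["DFW", "DAL", "HOU"]) for _ in range(20)],
    })
```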
|
RealTime Data
|
https://gradio.app/guides/time-plots
|
Data Science And Plots - Time Plots Guide
|
An MCP (Model Context Protocol) server is a standardized way to expose tools so that they can be used by LLMs. A tool can provide an LLM functionality that it does not have natively, such as the ability to generate images or calculate the prime factors of a number.
|
What is an MCP Server?
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
LLMs are famously not great at counting the number of letters in a word (e.g. the number of "r"-s in "strawberry"). But what if we equip them with a tool to help? Let's start by writing a simple Gradio app that counts the number of letters in a word or phrase:
$code_letter_counter
Notice that we have: (1) included a detailed docstring for our function, and (2) set `mcp_server=True` in `.launch()`. This is all that's needed for your Gradio app to serve as an MCP server! Now, when you run this app, it will:
1. Start the regular Gradio web interface
2. Start the MCP server
3. Print the MCP server URL in the console
The MCP server will be accessible at:
```
http://your-server:port/gradio_api/mcp/
```
Gradio automatically converts the `letter_counter` function into an MCP tool that can be used by LLMs. The docstring of the function and the type hints of arguments will be used to generate the description of the tool and its parameters. The name of the function will be used as the name of your tool. Any initial values you provide to your input components (e.g. "strawberry" and "r" in the `gr.Textbox` components above) will be used as the default values if your LLM doesn't specify a value for that particular input parameter.
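If you can't see the rendered `$code_letter_counter` demo above, a minimal version of the same idea might look like this (a sketch, not the exact demo code):
```python
import gradio as gr

def letter_counter(word: str, letter: str) -> int:
    """Count the number of occurrences of a letter in a word or phrase.

    Args:
        word: The word or phrase to search.
        letter: The letter to count.
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    letter_counter,
    [gr.Textbox(value="strawberry"), gr.Textbox(value="r")],
    gr.Number(),
)

demo.launch(mcp_server=True)
```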
Now, all you need to do is add this URL endpoint to your MCP Client (e.g. Claude Desktop, Cursor, or Cline), which typically means pasting this config in the settings:
```
{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/"
    }
  }
}
```
(By the way, you can find the exact config to copy-paste by going to the "View API" link in the footer of your Gradio app, and then clicking on "MCP").

|
Example: Counting Letters in a Word
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
1. **Tool Conversion**: Each API endpoint in your Gradio app is automatically converted into an MCP tool with a corresponding name, description, and input schema. To view the tools and schemas, visit http://your-server:port/gradio_api/mcp/schema or go to the "View API" link in the footer of your Gradio app, and then click on "MCP".
2. **Environment variable support**. There are two ways to enable the MCP server functionality:
* Using the `mcp_server` parameter, as shown above:
```python
demo.launch(mcp_server=True)
```
* Using environment variables:
```bash
export GRADIO_MCP_SERVER=True
```
3. **File Handling**: The Gradio MCP server automatically handles file data conversions, including:
- Processing image files and returning them in the correct format
- Managing temporary file storage
By default, the Gradio MCP server accepts input images and files as full URLs ("http://..." or "https://..."). For convenience, an additional STDIO-based MCP server is also generated, which can be used to upload files to any remote Gradio app and which returns a URL that can be used for subsequent tool calls.
4. **Hosted MCP Servers on 🤗 Spaces**: You can publish your Gradio application for free on Hugging Face Spaces, which will allow you to have a free hosted MCP server. Here's an example of such a Space: https://huggingface.co/spaces/abidlabs/mcp-tools. Notice that you can add this config to your MCP Client to start using the tools from this Space immediately:
```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/"
    }
  }
}
```
<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp_guide1.mp4" style="width:100%" controls preload> </video>
|
Key features of the Gradio <> MCP Integration
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
If there's an existing Space that you'd like to use as an MCP server, you'll need to do three things:
1. First, [duplicate the Space](https://huggingface.co/docs/hub/en/spaces-more-ways-to-createduplicating-a-space) if it is not your own Space. This will allow you to make changes to the app. If the Space requires a GPU, set the hardware of the duplicated Space to be the same as the original Space. You can make it either a public Space or a private Space, since it is possible to use either as an MCP server, as described below.
2. Then, add docstrings to the functions that you'd like the LLM to be able to call as a tool. The docstring should be in the same format as the example code above.
3. Finally, add `mcp_server=True` in `.launch()`.
That's it!
|
Converting an Existing Space
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
You can use either a public Space or a private Space as an MCP server. If you'd like to use a private Space as an MCP server (or a ZeroGPU Space with your own quota), then you will need to provide your [Hugging Face token](https://huggingface.co/settings/token) when you make your request. To do this, simply add it as a header in your config like this:
```
{
  "mcpServers": {
    "gradio": {
      "url": "https://abidlabs-mcp-tools.hf.space/gradio_api/mcp/",
      "headers": {
        "Authorization": "Bearer <YOUR-HUGGING-FACE-TOKEN>"
      }
    }
  }
}
```
|
Private Spaces
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
You may wish to authenticate users more precisely or let them provide other kinds of credentials or tokens in order to provide a custom experience for different users.
Gradio allows you to access the underlying `starlette.Request` that has made the tool call, which means that you can access headers, originating IP address, or any other information that is part of the network request. To do this, simply add a parameter in your function of the type `gr.Request`, and Gradio will automatically inject the request object as the parameter.
Here's an example:
```py
import gradio as gr

def echo_headers(x, request: gr.Request):
    return str(dict(request.headers))

gr.Interface(echo_headers, "textbox", "textbox").launch(mcp_server=True)
```
This MCP server will simply ignore the user's input and echo back all of the headers from a user's request. One can build more complex apps using the same idea. See the [docs on `gr.Request`](https://www.gradio.app/main/docs/gradio/request) for more information (note that only the core Starlette attributes of the `gr.Request` object will be present, attributes such as Gradio's `.session_hash` will not be present).
**Using the `gr.Header` class**
A common pattern in MCP server development is to use authentication headers to call services on behalf of your users. Instead of using a `gr.Request` object like in the example above, you can use a `gr.Header` argument. Gradio will automatically extract that header from the incoming request (if it exists) and pass it to your function.
In the example below, the `X-API-Token` header is extracted from the incoming request and passed in as the `x_api_token` argument to `make_api_request_on_behalf_of_user`.
The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! See the image below:
|
Authentication and Credentials
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
The benefit of using `gr.Header` is that the MCP connection docs will automatically display the headers you need to supply when connecting to the server! See the image below:
```python
import gradio as gr

def make_api_request_on_behalf_of_user(prompt: str, x_api_token: gr.Header):
    """Make a request to everyone's favorite API.

    Args:
        prompt: The prompt to send to the API.
    Returns:
        The response from the API.
    Raises:
        AssertionError: If the API token is not valid.
    """
    return "Hello from the API" if not x_api_token else "Hello from the API with token!"

demo = gr.Interface(
    make_api_request_on_behalf_of_user,
    [
        gr.Textbox(label="Prompt"),
    ],
    gr.Textbox(label="Response"),
)

demo.launch(mcp_server=True)
```

**Sending Progress Updates**
The Gradio MCP server automatically sends progress updates to your MCP Client based on the queue in the Gradio application. If you'd like to send custom progress updates, you can do so using the same mechanism as you would use to display progress updates in the UI of your Gradio app: by using the `gr.Progress` class!
Here's an example of how to do this:
$code_mcp_progress
[Here are the docs](https://www.gradio.app/docs/gradio/progress) for the `gr.Progress` class, which can also automatically track `tqdm` calls.
Note: by default, progress notifications are enabled for all MCP tools, even if the corresponding Gradio functions do not include a `gr.Progress`. However, this can add some overhead to the MCP tool (typically ~500ms). To disable progress notification, you can set `queue=False` in your Gradio event handler to skip the overhead related to subscribing to the queue's progress updates.
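As a sketch of the pattern (the real demo is `$code_mcp_progress`), a hypothetical slow task might report progress like this:
```python
import time
import gradio as gr

def slow_task(steps: float, progress=gr.Progress()):
    """Simulate a slow task, reporting progress to the UI and to MCP clients.

    Args:
        steps: Number of steps to simulate.
    """
    for _ in progress.tqdm(range(int(steps))):
        time.sleep(0.5)  # stand-in for real work
    return f"Finished {int(steps)} steps"

demo = gr.Interface(slow_task, gr.Number(value=5), gr.Textbox())
demo.launch(mcp_server=True)
```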
|
Authentication and Credentials
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
Gradio automatically sets the tool name based on the name of your function, and the description from the docstring of your function. But you may want to change how the description appears to your LLM. You can do this by using the `api_description` parameter in `Interface`, `ChatInterface`, or any event listener. This parameter takes three different kinds of values:
* `None` (default): the tool description is automatically created from the docstring of the function (or its parent's docstring if it does not have a docstring but inherits from a method that does.)
* `False`: no tool description appears to the LLM.
* `str`: an arbitrary string to use as the tool description.
In addition to modifying the tool descriptions, you can also toggle which tools appear to the LLM. You can do this by setting the `show_api` parameter, which is by default `True`. Setting it to `False` hides the endpoint from the API docs and from the MCP server. If you expose multiple tools, users of your app will also be able to toggle which tools they'd like to add to their MCP server by checking boxes in the "view MCP or API" panel.
Here's an example that shows the `api_description` and `show_api` parameters in action:
$code_mcp_tools
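If the demo isn't rendered here, the idea reduces to something like this hypothetical single-function app, where `api_description` overrides the docstring:
```python
import gradio as gr

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

demo = gr.Interface(
    add,
    [gr.Number(), gr.Number()],
    gr.Number(),
    api_description="Adds two numbers and returns their sum.",  # shown to the LLM instead of the docstring
)

demo.launch(mcp_server=True)
```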
|
Modifying Tool Descriptions
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
In addition to tools (which execute functions generally and are the default for any function exposed through the Gradio MCP integration), MCP supports two other important primitives: **resources** (for exposing data) and **prompts** (for defining reusable templates). Gradio provides decorators to easily create MCP servers with all three capabilities.
**Creating MCP Resources**
Use the `@gr.mcp.resource` decorator on any function to expose data through your Gradio app. Resources can be static (always available at a fixed URI) or templated (with parameters in the URI).
$code_mcp_resources_and_prompts
In this example:
- The `get_greeting` function is exposed as a resource with a URI template `greeting://{name}`
- When an MCP client requests `greeting://Alice`, it receives "Hello, Alice!"
- Resources can also return images and other types of files or binary data. In order to return non-text data, you should specify the `mime_type` parameter in `@gr.mcp.resource()` and return a Base64 string from your function.
**Creating MCP Prompts**
Prompts help standardize how users interact with your tools. They're especially useful for complex workflows that require specific formatting or multiple steps.
The `greet_user` function in the example above is decorated with `@gr.mcp.prompt()`, which:
- Makes it available as a prompt template in MCP clients
- Accepts parameters (`name` and `style`) to customize the output
- Returns a structured prompt that guides the LLM's behavior (sketched below)
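Based on that description, a rough sketch of the two decorators might look like this (the authoritative version is the `$code_mcp_resources_and_prompts` demo; the exact decorator signatures here are assumptions):
```python
import gradio as gr

@gr.mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Return a personalized greeting for the given name."""
    return f"Hello, {name}!"

@gr.mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Build a reusable prompt asking the LLM to greet a user in a given style."""
    return f"Please write a {style} greeting addressed to {name}."

with gr.Blocks() as demo:
    gr.Markdown("This app exposes an MCP resource and an MCP prompt.")

demo.launch(mcp_server=True)
```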
|
MCP Resources and Prompts
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
So far, all of our MCP tools, resources, or prompts have corresponded to event listeners in the UI. This works well for functions that directly update the UI, but may not work if you wish to expose a "pure logic" function that should return raw data (e.g. a JSON object) without directly causing a UI update.
In order to expose such an MCP tool, you can create a pure Gradio API endpoint using `gr.api` (see [full docs here](https://www.gradio.app/main/docs/gradio/api)). Here's an example of creating an MCP tool that slices a list:
$code_mcp_tool_only
Note that if you use this approach, your function signature must be fully typed, including the return value, as these signatures are used to determine the typing information for the MCP tool.
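A minimal sketch of the pattern (the actual demo is `$code_mcp_tool_only`); note the fully typed signature, including the return annotation:
```python
import gradio as gr

def slice_list(items: list[int], start: int, end: int) -> list[int]:
    """Return the slice of `items` from `start` (inclusive) to `end` (exclusive)."""
    return items[start:end]

with gr.Blocks() as demo:
    gr.api(slice_list)  # exposed as an MCP tool / API endpoint, with no UI update

demo.launch(mcp_server=True)
```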
|
Adding MCP-Only Functions
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
In some cases, you may decide not to use Gradio's built-in integration and instead manually create a FastMCP server that calls a Gradio app. This approach is useful when you want to:
- Store state / identify users between calls instead of treating every tool call completely independently
- Start the Gradio app MCP server when a tool is called (if you are running multiple Gradio apps locally and want to save memory / GPU)
This is very doable thanks to the [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) and the [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)'s `FastMCP` class. Here's an example of creating a custom MCP server that connects to various Gradio apps hosted on [HuggingFace Spaces](https://huggingface.co/spaces) using the `stdio` protocol:
```python
from mcp.server.fastmcp import FastMCP
from gradio_client import Client
import sys
import io
import json

mcp = FastMCP("gradio-spaces")
clients = {}

def get_client(space_id: str) -> Client:
    """Get or create a Gradio client for the specified space."""
    if space_id not in clients:
        clients[space_id] = Client(space_id)
    return clients[space_id]

@mcp.tool()
async def generate_image(prompt: str, space_id: str = "ysharma/SanaSprint") -> str:
    """Generate an image using Flux.

    Args:
        prompt: Text prompt describing the image to generate
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        prompt=prompt,
        model_size="1.6B",
        seed=0,
        randomize_seed=True,
        width=1024,
        height=1024,
        guidance_scale=4.5,
        num_inference_steps=2,
        api_name="/infer"
    )
    return result
```
|
Gradio with FastMCP
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
```python
@mcp.tool()
async def run_dia_tts(prompt: str, space_id: str = "ysharma/Dia-1.6B") -> str:
    """Text-to-Speech Synthesis.

    Args:
        prompt: Text prompt describing the conversation between speakers S1, S2
        space_id: HuggingFace Space ID to use
    """
    client = get_client(space_id)
    result = client.predict(
        text_input=f"""{prompt}""",
        audio_prompt_input=None,
        max_new_tokens=3072,
        cfg_scale=3,
        temperature=1.3,
        top_p=0.95,
        cfg_filter_top_k=30,
        speed_factor=0.94,
        api_name="/generate_audio"
    )
    return result

if __name__ == "__main__":
    import sys
    import io
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
    mcp.run(transport='stdio')
```
This server exposes two tools:
1. `run_dia_tts` - Generates a conversation for the given transcript in the form of `[S1]first-sentence. [S2]second-sentence. [S1]...`
2. `generate_image` - Generates images using a fast text-to-image model
To use this MCP Server with Claude Desktop (as MCP Client):
1. Save the code to a file (e.g., `gradio_mcp_server.py`)
2. Install the required dependencies: `pip install mcp gradio-client`
3. Configure Claude Desktop to use your server by editing the configuration file at `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "gradio-spaces": {
      "command": "python",
      "args": [
        "/absolute/path/to/gradio_mcp_server.py"
      ]
    }
  }
}
```
4. Restart Claude Desktop
Now, when you ask Claude about generating an image or synthesizing speech, it can use your Gradio-powered tools to accomplish these tasks.
|
Gradio with FastMCP
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
The MCP protocol is still in its infancy and you might see issues connecting to an MCP Server that you've built. We generally recommend using the [MCP Inspector Tool](https://github.com/modelcontextprotocol/inspector) to try connecting and debugging your MCP Server.
Here are some things that may help:
**1. Ensure that you've provided type hints and valid docstrings for your functions**
As mentioned earlier, Gradio reads the docstrings for your functions and the type hints of input arguments to generate the description of the tool and parameters. A valid function and docstring looks like this (note the "Args:" block with indented parameter names underneath):
```py
from PIL import Image

def image_orientation(image: Image.Image) -> str:
    """
    Returns whether image is portrait or landscape.

    Args:
        image (Image.Image): The image to check.
    """
    return "Portrait" if image.height > image.width else "Landscape"
```
Note: You can preview the schema that is created for your MCP server by visiting the `http://your-server:port/gradio_api/mcp/schema` URL.
**2. Try accepting input arguments as `str`**
Some MCP Clients do not recognize parameters that are numeric or other complex types, but all of the MCP Clients that we've tested accept `str` input parameters. When in doubt, change your input parameter to be a `str` and then cast to a specific type in the function, as in this example:
```py
def prime_factors(n: str):
    """
    Compute the prime factorization of a positive integer.

    Args:
        n (str): The integer to factorize. Must be greater than 1.
    """
    n_int = int(n)
    if n_int <= 1:
        raise ValueError("Input must be an integer greater than 1.")
    factors = []
    while n_int % 2 == 0:
        factors.append(2)
        n_int //= 2
    divisor = 3
    while divisor * divisor <= n_int:
        while n_int % divisor == 0:
            factors.append(divisor)
            n_int //= divisor
        divisor += 2
    if n_int > 1:
        factors.append(n_int)
    return factors
```
|
Troubleshooting your MCP Servers
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
**3. Ensure that your MCP Client Supports Streamable HTTP**
Some MCP Clients do not yet support streamable HTTP-based MCP Servers. In those cases, you can use a tool such as [mcp-remote](https://github.com/geelen/mcp-remote). First install [Node.js](https://nodejs.org/en/download/). Then, add the following to your own MCP Client config:
```
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://your-server:port/gradio_api/mcp/"
      ]
    }
  }
}
```
**4. Restart your MCP Client and MCP Server**
Some MCP Clients require you to restart them every time you update the MCP configuration. Other times, if the connection between the MCP Client and servers breaks, you might need to restart the MCP server. If all else fails, try restarting both your MCP Client and MCP Servers!
|
Troubleshooting your MCP Servers
|
https://gradio.app/guides/building-mcp-server-with-gradio
|
Mcp - Building Mcp Server With Gradio Guide
|
As of version 5.36.0, Gradio now comes with a built-in MCP server that can upload files to a running Gradio application. In the `View API` page of the server, you should see the following code snippet if any of the tools require file inputs:
<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/MCPConnectionDocs.png">
The command to start the MCP server takes two arguments:
- The URL (or Hugging Face Space id) of the Gradio application to upload the files to. In this case, `http://127.0.0.1:7860`.
- The local directory on your computer from which the server is allowed to upload files (`<UPLOAD_DIRECTORY>`). For security, please make this directory as narrow as possible to prevent unintended file uploads.
As stated in the image, you need to install [uv](https://docs.astral.sh/uv/getting-started/installation/) (a Python package manager that can run Python scripts) before connecting from your MCP client.
If you have gradio installed locally and you don't want to install uv, you can replace the `uvx` command with the path to the gradio binary. It should look like this:
```json
"upload-files": {
"command": "<absoluate-path-to-gradio>",
"args": [
"upload-mcp",
"http://localhost:7860/",
"/Users/freddyboulton/Pictures"
]
}
```
After connecting to the upload server, your LLM agent will know when to upload files for you automatically!
<img src="https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/Ghibliafy.png">
|
Using the File Upload MCP Server
|
https://gradio.app/guides/file-upload-mcp
|
Mcp - File Upload Mcp Guide
|
In this guide, we've covered how you can connect to the Upload File MCP Server so that your agent can upload files before using Gradio MCP servers. Remember to set the `<UPLOAD_DIRECTORY>` as small as possible to prevent unintended file uploads!
|
Conclusion
|
https://gradio.app/guides/file-upload-mcp
|
Mcp - File Upload Mcp Guide
|
If you're using LLMs in your workflow, adding this server will augment them with just the right context on Gradio, which makes your experience a lot faster and smoother.
<video src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/mcp-docs.mp4" style="width:100%" controls preload> </video>
The server is running on Spaces and was launched entirely using Gradio; you can see all the code [here](https://huggingface.co/spaces/gradio/docs-mcp). For more on building an MCP server with Gradio, see the [previous guide](./building-an-mcp-client-with-gradio).
|
Why an MCP Server?
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
For clients that support streamable HTTP (e.g. Cursor, Windsurf, Cline), simply add the following configuration to your MCP config:
```json
{
  "mcpServers": {
    "gradio": {
      "url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
    }
  }
}
```
We've included step-by-step instructions for Cursor below, but you can consult the docs for Windsurf [here](https://docs.windsurf.com/windsurf/mcp) and Cline [here](https://docs.cline.bot/mcp-servers/configuring-mcp-servers), both of which are similar to set up.
Cursor
1. Make sure you're using the latest version of Cursor, and go to Cursor > Settings > Cursor Settings > MCP
2. Click on '+ Add new global MCP server'
3. Copy paste this json into the file that opens and then save it.
```json
{
  "mcpServers": {
    "gradio": {
      "url": "https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
    }
  }
}
```
4. That's it! You should see the tools load and the status go green in the settings page. You may have to click the refresh icon or wait a few seconds.

Claude Desktop
1. Since Claude Desktop only supports stdio, you will need to [install Node.js](https://nodejs.org/en/download/) to get this to work.
2. Make sure you're using the latest version of Claude Desktop, and go to Claude > Settings > Developer > Edit Config
3. Open the file with your favorite editor and copy paste this json, then save the file.
```json
{
  "mcpServers": {
    "gradio": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://gradio-docs-mcp.hf.space/gradio_api/mcp/"
      ]
    }
  }
}
```
4. Quit and re-open Claude Desktop, and you should be good to go. You should see it loaded in the Search and Tools icon or on the developer settings page.

|
Installing in the Clients
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
There are currently only two tools in the server: `gradio_docs_mcp_load_gradio_docs` and `gradio_docs_mcp_search_gradio_docs`.
1. `gradio_docs_mcp_load_gradio_docs`: This tool takes no arguments and loads an /llms.txt-style summary of Gradio's latest, full documentation. This is very useful context for the LLM to parse before answering questions or generating code.
2. `gradio_docs_mcp_search_gradio_docs`: This tool takes a query as an argument and runs an embedding search over Gradio's docs, guides, and demos to return the most useful context for the LLM to parse.
|
Tools
|
https://gradio.app/guides/using-docs-mcp
|
Mcp - Using Docs Mcp Guide
|
The Model Context Protocol (MCP) standardizes how applications provide context to LLMs. It allows models like Claude to interact with external tools, such as image generators, file systems, or APIs.
|
What is MCP?
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
- Python 3.10+
- An Anthropic API key
- Basic understanding of Python programming
|
Prerequisites
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
First, install the required packages:
```bash
pip install gradio anthropic mcp
```
Create a `.env` file in your project directory and add your Anthropic API key:
```
ANTHROPIC_API_KEY=your_api_key_here
```
|
Setup
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
The server provides tools that Claude can use. In this example, we'll create a server that generates images through [a HuggingFace space](https://huggingface.co/spaces/ysharma/SanaSprint).
Create a file named `gradio_mcp_server.py`:
```python
from mcp.server.fastmcp import FastMCP
import json
import sys
import io
import time
from gradio_client import Client

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')

mcp = FastMCP("huggingface_spaces_image_display")

@mcp.tool()
async def generate_image(prompt: str, width: int = 512, height: int = 512) -> str:
    """Generate an image using SanaSprint model.

    Args:
        prompt: Text prompt describing the image to generate
        width: Image width (default: 512)
        height: Image height (default: 512)
    """
    client = Client("https://ysharma-sanasprint.hf.space/")
    try:
        result = client.predict(
            prompt,
            "0.6B",
            0,
            True,
            width,
            height,
            4.0,
            2,
            api_name="/infer"
        )
        if isinstance(result, list) and len(result) >= 1:
            image_data = result[0]
            if isinstance(image_data, dict) and "url" in image_data:
                return json.dumps({
                    "type": "image",
                    "url": image_data["url"],
                    "message": f"Generated image for prompt: {prompt}"
                })
        return json.dumps({
            "type": "error",
            "message": "Failed to generate image"
        })
    except Exception as e:
        return json.dumps({
            "type": "error",
            "message": f"Error generating image: {str(e)}"
        })

if __name__ == "__main__":
    mcp.run(transport='stdio')
```
|
Part 1: Building the MCP Server
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
"message": f"Error generating image: {str(e)}"
})
if __name__ == "__main__":
mcp.run(transport='stdio')
```
What this server does:
1. It creates an MCP server that exposes a `generate_image` tool
2. The tool connects to the SanaSprint model hosted on HuggingFace Spaces
3. It handles the asynchronous nature of image generation by polling for results
4. When an image is ready, it returns the URL in a structured JSON format
|
Part 1: Building the MCP Server
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
Now let's create a Gradio chat interface as an MCP client that connects Claude to our MCP server.
Create a file named `app.py`:
```python
import asyncio
import os
import json
from typing import List, Dict, Any, Union
from contextlib import AsyncExitStack

import gradio as gr
from gradio.components.chatbot import ChatMessage
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv()

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)

class MCPClientWrapper:
    def __init__(self):
        self.session = None
        self.exit_stack = None
        self.anthropic = Anthropic()
        self.tools = []

    def connect(self, server_path: str) -> str:
        return loop.run_until_complete(self._connect(server_path))

    async def _connect(self, server_path: str) -> str:
        if self.exit_stack:
            await self.exit_stack.aclose()
        self.exit_stack = AsyncExitStack()

        is_python = server_path.endswith('.py')
        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command,
            args=[server_path],
            env={"PYTHONIOENCODING": "utf-8", "PYTHONUNBUFFERED": "1"}
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()
        response = await self.session.list_tools()
        self.tools = [{
            "name": tool.name,
            "description": tool.description,
            "input_schema": tool.inputSchema
        } for tool in response.tools]

        tool_names = [tool["name"] for tool in self.tools]
        return f"Connected to MCP server. Available tools: {', '.join(tool_names)}"
```
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
```python
    def process_message(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]) -> tuple:
        if not self.session:
            return history + [
                {"role": "user", "content": message},
                {"role": "assistant", "content": "Please connect to an MCP server first."}
            ], gr.Textbox(value="")
        new_messages = loop.run_until_complete(self._process_query(message, history))
        return history + [{"role": "user", "content": message}] + new_messages, gr.Textbox(value="")

    async def _process_query(self, message: str, history: List[Union[Dict[str, Any], ChatMessage]]):
        claude_messages = []
        for msg in history:
            if isinstance(msg, ChatMessage):
                role, content = msg.role, msg.content
            else:
                role, content = msg.get("role"), msg.get("content")
            if role in ["user", "assistant", "system"]:
                claude_messages.append({"role": role, "content": content})
        claude_messages.append({"role": "user", "content": message})

        response = self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1000,
            messages=claude_messages,
            tools=self.tools
        )

        result_messages = []
        for content in response.content:
            if content.type == 'text':
                result_messages.append({
                    "role": "assistant",
                    "content": content.text
                })
            elif content.type == 'tool_use':
                tool_name = content.name
                tool_args = content.input
```
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
ntent": content.text
})
elif content.type == 'tool_use':
tool_name = content.name
tool_args = content.input
result_messages.append({
"role": "assistant",
"content": f"I'll use the {tool_name} tool to help answer your question.",
"metadata": {
"title": f"Using tool: {tool_name}",
"log": f"Parameters: {json.dumps(tool_args, ensure_ascii=True)}",
"status": "pending",
"id": f"tool_call_{tool_name}"
}
})
result_messages.append({
"role": "assistant",
"content": "```json\n" + json.dumps(tool_args, indent=2, ensure_ascii=True) + "\n```",
"metadata": {
"parent_id": f"tool_call_{tool_name}",
"id": f"params_{tool_name}",
"title": "Tool Parameters"
}
})
result = await self.session.call_tool(tool_name, tool_args)
if result_messages and "metadata" in result_messages[-2]:
result_messages[-2]["metadata"]["status"] = "done"
result_messages.append({
"role": "assistant",
"content": "Here are the results from the tool:",
"metadata": {
"title": f"Tool Result for {tool_name}",
"status": "done",
"id": f"result_{tool_name}"
}
})
result_content = result.content
if isinstance(result_content, list):
result_content = "\n".join(str(item) for item in re
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
```python
                try:
                    result_json = json.loads(result_content)
                    if isinstance(result_json, dict) and "type" in result_json:
                        if result_json["type"] == "image" and "url" in result_json:
                            result_messages.append({
                                "role": "assistant",
                                "content": {"path": result_json["url"], "alt_text": result_json.get("message", "Generated image")},
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"image_{tool_name}",
                                    "title": "Generated Image"
                                }
                            })
                        else:
                            result_messages.append({
                                "role": "assistant",
                                "content": "```\n" + result_content + "\n```",
                                "metadata": {
                                    "parent_id": f"result_{tool_name}",
                                    "id": f"raw_result_{tool_name}",
                                    "title": "Raw Output"
                                }
                            })
                except:
                    result_messages.append({
                        "role": "assistant",
                        "content": "```\n" + result_content + "\n```",
                        "metadata": {
                            "parent_id": f"result_{tool_name}",
                            "id": f"raw_result_{tool_name}",
                            "title": "Raw Output"
                        }
                    })
```
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
"parent_id": f"result_{tool_name}",
"id": f"raw_result_{tool_name}",
"title": "Raw Output"
}
})
claude_messages.append({"role": "user", "content": f"Tool result for {tool_name}: {result_content}"})
next_response = self.anthropic.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1000,
messages=claude_messages,
)
if next_response.content and next_response.content[0].type == 'text':
result_messages.append({
"role": "assistant",
"content": next_response.content[0].text
})
return result_messages
client = MCPClientWrapper()
def gradio_interface():
with gr.Blocks(title="MCP Weather Client") as demo:
gr.Markdown("MCP Weather Assistant")
gr.Markdown("Connect to your MCP weather server and chat with the assistant")
with gr.Row(equal_height=True):
with gr.Column(scale=4):
server_path = gr.Textbox(
label="Server Script Path",
placeholder="Enter path to server script (e.g., weather.py)",
value="gradio_mcp_server.py"
)
with gr.Column(scale=1):
connect_btn = gr.Button("Connect")
status = gr.Textbox(label="Connection Status", interactive=False)
chatbot = gr.Chatbot(
value=[],
height=500,
show_copy_button=True,
avatar_images=("👤", "🤖")
)
with gr.Row(equal_height=True):
msg = gr.Textbox(
label="Your Question",
placeholder="Ask about weather or alerts (e.g., What's the weather in New York?)",
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
```python
        with gr.Row(equal_height=True):
            msg = gr.Textbox(
                label="Your Question",
                placeholder="Ask about weather or alerts (e.g., What's the weather in New York?)",
                scale=4
            )
            clear_btn = gr.Button("Clear Chat", scale=1)

        connect_btn.click(client.connect, inputs=server_path, outputs=status)
        msg.submit(client.process_message, [msg, chatbot], [chatbot, msg])
        clear_btn.click(lambda: [], None, chatbot)

    return demo

if __name__ == "__main__":
    if not os.getenv("ANTHROPIC_API_KEY"):
        print("Warning: ANTHROPIC_API_KEY not found in environment. Please set it in your .env file.")

    interface = gradio_interface()
    interface.launch(debug=True)
```
What this MCP Client does:
- Creates a friendly Gradio chat interface for user interaction
- Connects to the MCP server you specify
- Handles conversation history and message formatting
- Makes calls to the Claude API with tool definitions
- Processes tool usage requests from Claude
- Displays images and other tool outputs in the chat
- Sends tool results back to Claude for interpretation
|
Part 2: Building the MCP Client with Gradio
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
To run your MCP application:
- Start a terminal window and run the MCP Client:
```bash
python app.py
```
- Open the Gradio interface at the URL shown (typically http://127.0.0.1:7860)
- In the Gradio interface, you'll see a field for the MCP Server path. It should default to `gradio_mcp_server.py`.
- Click "Connect" to establish the connection to the MCP server.
- You should see a message indicating the server connection was successful.
|
Running the Application
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
Now you can chat with Claude and it will be able to generate images based on your descriptions.
Try prompts like:
- "Can you generate an image of a mountain landscape at sunset?"
- "Create an image of a cool tabby cat"
- "Generate a picture of a panda wearing sunglasses"
Claude will recognize these as image generation requests and automatically use the `generate_image` tool from your MCP server.
|
Example Usage
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
Here's the high-level flow of what happens during a chat session:
1. Your prompt enters the Gradio interface
2. The client forwards your prompt to Claude
3. Claude analyzes the prompt and decides to use the `generate_image` tool
4. The client sends the tool call to the MCP server
5. The server calls the external image generation API
6. The image URL is returned to the client
7. The client sends the image URL back to Claude
8. Claude provides a response that references the generated image
9. The Gradio chat interface displays both Claude's response and the image
|
How it Works
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
Now that you have a working MCP system, here are some ideas to extend it:
- Add more tools to your server
- Improve error handling
- Add private Huggingface Spaces with authentication for secure tool access
- Create custom tools that connect to your own APIs or services
- Implement streaming responses for better user experience
|
Next Steps
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|
Congratulations! You've successfully built an MCP Client and Server that allows Claude to generate images based on text prompts. This is just the beginning of what you can do with Gradio and MCP. This guide enables you to build complex AI applications that can use Claude or any other powerful LLM to interact with virtually any external tool or service.
Read our other Guide on using [Gradio apps as MCP Servers](./building-mcp-server-with-gradio).
|
Conclusion
|
https://gradio.app/guides/building-an-mcp-client-with-gradio
|
Mcp - Building An Mcp Client With Gradio Guide
|