Example Chute for Turbovision
This repository demonstrates how to deploy a Chute via the Turbovision CLI, hosted on Hugging Face Hub. It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a reproducible Chute environment.
Repository Structure
The following two files must be present (in their current locations) for a successful deployment; their content can be modified as needed:
| File | Purpose |
|---|---|
| `miner.py` | Defines the ML model type(s), orchestration, and all pre/postprocessing logic. |
| `config.yml` | Specifies machine configuration (e.g., GPU type, memory, environment variables). |
Other files (e.g., model weights, utility scripts, or dependencies) are optional and can be included as needed for your model.
Note: Any required assets must be defined or contained within this repo, which is fully open-source, since all network-related operations (downloading challenge data, weights, etc.) are disabled inside the Chute.
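For example, model weights should be committed to this repo and loaded from a local path inside `miner.py` rather than fetched at runtime. Below is a minimal sketch, assuming a PyTorch model and a placeholder weights path; neither is prescribed by the template:

```python
# Hypothetical sketch: load weights bundled with the repository, since
# network access is disabled inside the Chute.
from pathlib import Path

import torch
from torch import nn

# Placeholder path and architecture; the real ones live in your miner.py.
WEIGHTS_PATH = Path(__file__).parent / "weights" / "model.pt"


class TinyNet(nn.Module):
    """Stand-in architecture used only to illustrate local loading."""

    def __init__(self) -> None:
        super().__init__()
        self.head = nn.Linear(16, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)


def load_model() -> nn.Module:
    """Load the model strictly from files committed to this repo."""
    model = TinyNet()
    model.load_state_dict(torch.load(WEIGHTS_PATH, map_location="cpu"))
    model.eval()
    return model
```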
Overview
Below is a high-level diagram showing the interaction between Hugging Face, Chutes, and Turbovision:
┌───────────────┐      ┌────────────┐      ┌────────────────┐
│  HuggingFace  │ ───> │   Chutes   │ ───> │  Turbovision   │
│      Hub      │      │    .ai     │      │   Validator    │
└───────────────┘      └────────────┘      └────────────────┘
Local Testing
After editing `config.yml` and `miner.py` and saving them to your Hugging Face repo, you will want to test that everything works locally.
- Copy the template file `scorevision/chute_template/turbovision_chute.py.j2` to a Python file called `my_chute.py` and fill in the missing variables (a filled-in example follows the snippet):
HF_REPO_NAME = "{{ huggingface_repository_name }}"
HF_REPO_REVISION = "{{ huggingface_repository_revision }}"
CHUTES_USERNAME = "{{ chute_username }}"
CHUTE_NAME = "{{ chute_name }}"
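For illustration only, a filled-in version might look like this (every value below is a placeholder; use your own repo, revision, and Chutes account details):

```python
HF_REPO_NAME = "your-username/example-chute"   # your Hugging Face repo
HF_REPO_REVISION = "main"                      # a branch name or specific commit hash
CHUTES_USERNAME = "your-chutes-username"
CHUTE_NAME = "example-chute"
```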
- Run the following command to build the chute locally (caution: there are known issues with the Docker location when running this on a Mac):
chutes build my_chute:chute --local --public
- Run the Docker image just built (its name is the `CHUTE_NAME` you set above) and enter it:
docker run -p 8000:8000 -e CHUTES_EXECUTION_CONTEXT=REMOTE -it <image-name> /bin/bash
- Run the file from within the container:
chutes run my_chute:chute --dev --debug
- In another terminal, test the local endpoints to ensure there are no bugs:
# Health check
curl -X POST http://localhost:8000/health -d '{}'
# Prediction test
curl -X POST http://localhost:8000/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}'
Live Testing
If you already have a chute with the same name (e.g., from a previous deployment), ensure you delete it first (or you will get an error when trying to build).
- List existing chutes:
chutes chutes list
Take note of the chute id that you wish to delete (if any):
chutes chutes delete <chute-id>
- You should also delete its associated image:
chutes images list
Take note of the chute image id:
chutes images delete <chute-image-id>
- Use Turbovision's CLI to build, deploy and commit on-chain:
sv -vv push
Note: You can skip the on-chain commit using `--no-commit`. You can also point to a past Hugging Face revision using `--revision`, and/or specify the local files to upload to your Hugging Face repo using `--model-path`.
- When completed, warm up the chute (if it's cold 🧊):
chutes warmup <chute-id>
You can confirm its status using `chutes chutes list`, or `chutes chutes get <chute-id>` if you already know its ID.
Note: Warming up can sometimes take a while, but if the chute runs without errors (which it should if you've tested it locally first) and there are sufficient nodes (i.e. machines) available matching the `config.yml` you specified, the chute should become hot 🔥!
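If you prefer to watch for readiness from a script rather than re-running the CLI, one option is to poll the chute's public health endpoint until it answers. A hedged sketch (the slug is a placeholder, `CHUTES_API_KEY` must be set in your environment, and actual readiness semantics are defined by Chutes):

```python
# Rough readiness probe: retry the /health endpoint until the chute responds.
import os
import time

import requests

HEALTH_URL = "https://<YOUR-CHUTE-SLUG>.chutes.ai/health"  # placeholder slug
HEADERS = {"Authorization": f"Bearer {os.environ['CHUTES_API_KEY']}"}

for attempt in range(1, 31):
    try:
        response = requests.post(HEALTH_URL, json={}, headers=HEADERS, timeout=30)
        if response.ok:
            print("Chute is responding:", response.text)
            break
    except requests.RequestException as exc:
        print(f"Attempt {attempt}: not reachable yet ({exc})")
    time.sleep(60)
else:
    print("Chute did not become reachable within 30 minutes.")
```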
- Test the chute's endpoints:
# Health check
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/health -d '{}' -H "Authorization: Bearer $CHUTES_API_KEY"
# Prediction
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}' -H "Authorization: Bearer $CHUTES_API_KEY"
- Test what your chute would score on a validator. This also applies any validation/integrity checks, which may fail if you did not use the Turbovision CLI above to deploy the chute:
sv -vv run-once