# Global Chess Challenge - Starter Kit

This repository is the submission template and starter kit for the Global Chess Challenge. Clone it to start competing!
This repository contains:

- Documentation on how to submit your agent to the leaderboard
- Best practices and details of how we evaluate your agent
- Starter code to get you going!
## Table of Contents
- Competition Overview
- Challenge Description
- Tracks
- Evaluation Metrics
- Getting Started
- Frequently Asked Questions
- Important Links
## Competition Overview
Most chess players don't have regular access to a top coach. What they do have are their own games and a recurring question: "What should I have played here?" The Global Chess Challenge imagines a tool that looks at those positions, suggests a strong move, and explains the idea in simple language, so players can coach themselves using the games they already play.
This challenge asks you to build models that play legal chess moves and briefly explain their choices in natural language, while a world-class engine checks how well those moves hold up on the board. The challenge turns a familiar game into a testbed to see whether reasoning models can think clearly, play good moves, and talk about them in a way humans can follow.
## Challenge Description
The Global Chess Challenge asks participants to build a text-only chess agent that does two things at once: play a legal move and explain the idea behind it in simple language.
On each turn, your model receives a chess position as text and must respond with:
- A one-sentence rationale explaining the idea behind the move
- A legal move in UCI format
The environment verifies legality, evaluates move quality using Stockfish, and runs full games in tournaments to measure overall playing strength.
### Input Format
For every turn, your agent receives:
- Position encoded as a FEN string
- Side to move (White or Black)
- List of legal moves in UCI format
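For orientation, these three fields can be assembled into a single text prompt. The exact wording below is an assumption for illustration; the real format is defined by your prompt template (see Step 3 of the submission guide):

```python
# Sketch of the per-turn input an agent might see. The wording here is an
# assumption; your Jinja prompt template defines the actual format.
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
side_to_move = "White" if fen.split()[1] == "w" else "Black"
legal_moves = ["e2e4", "d2d4", "g1f3"]  # truncated; the environment supplies the full list

prompt = (
    f"Position (FEN): {fen}\n"
    f"Side to move: {side_to_move}\n"
    f"Legal moves: {', '.join(legal_moves)}"
)
```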
### Output Format
Your agent must return:
- A one-sentence rationale inside `<think>...</think>` tags
- Exactly one move in UCI format inside `<uci_move>...</uci_move>` tags
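A response in that shape can be checked locally with a couple of regular expressions. This is only a sketch of one way to validate output during development; the evaluation harness's actual parsing rules may differ:

```python
import re

# Hypothetical model output in the required shape
raw = "<think>Develops the knight toward the center.</think><uci_move>g1f3</uci_move>"

# Non-greedy match so only the first rationale is captured; DOTALL in case
# the model emits newlines inside the tags.
think = re.search(r"<think>(.*?)</think>", raw, re.DOTALL)
# UCI moves are source square + target square + optional promotion piece.
move = re.search(r"<uci_move>([a-h][1-8][a-h][1-8][qrbn]?)</uci_move>", raw)

rationale = think.group(1).strip() if think else None
uci_move = move.group(1) if move else None
print(rationale)  # Develops the knight toward the center.
print(uci_move)   # g1f3
```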
## Getting Started
1. Sign up for the competition on the AIcrowd website.
2. Clone this repo and start developing your agent.
3. Develop your agent(s) following the template described in the "How to write your own agent?" section.
4. Submit your trained models via Hugging Face for evaluation.
### How to write your own agent?

See `player_agents/README.md` for instructions and examples on writing your own chess agent for this competition.
### How to start participating?

Clone the repository recursively:

```bash
git clone --recursive git@github.com:aicrowd/global-chess-challenge-2025-starter-kit.git
cd global-chess-challenge-2025-starter-kit
```

If you didn't clone with `--recursive`, initialize the submodules afterwards:

```bash
cd global-chess-challenge-2025-starter-kit
git submodule update --init --recursive
```

Install the competition-specific dependencies:

```bash
pip install -r requirements.txt
```
### Running the LLM locally
Before running local evaluation, you need to start either a vLLM server or a Flask server in a separate terminal from the player_agents directory.
**Option 1: Using vLLM (for LLM-based agents)**

```bash
cd player_agents
pip install vllm
bash run_vllm.sh
```
**Option 2: Using Flask (for rule-based agents)** (note: rule-based agents cannot be submitted; this option is for local testing only)

```bash
cd player_agents
# For the random agent
python random_agent_flask_server.py
# OR for the Stockfish agent
python stockfish_agent_flask_server.py
```
Keep this server running in the background while you run local evaluation.
### Local testing

Test your agent locally using `python local_evaluation.py`.
Note: Make sure you have started either the vLLM server or Flask server (see Running LLM locally) in a separate terminal before running local evaluation.
### Before you submit
Accept the Challenge Rules on the main challenge page by clicking on the Participate button.
## How to Make a Submission
This guide walks you through the process of submitting your chess agent to the Global Chess Challenge 2025.
### Prerequisites

Before making a submission, ensure you have:

- Accepted the Challenge Rules on the challenge page by clicking the Participate button
- Installed the AIcrowd CLI (included in `requirements.txt`)
- Logged in to AIcrowd via the CLI
- Prepared your model on Hugging Face
- Created a prompt template for your agent
### Step 1: Log in to AIcrowd

First, authenticate with AIcrowd:

```bash
aicrowd login
```

You'll be prompted for your AIcrowd API key, which you can find at: https://www.aicrowd.com/participants/me
### Step 2: Prepare Your Model on Hugging Face

Your model must be hosted on Hugging Face. You can use:

- A public model (e.g., `Qwen/Qwen3-0.6B`)
- Your own fine-tuned model
- A private/gated model (requires additional setup; see below)
#### Using Private or Gated Models

If your model is private or gated, you need to grant AIcrowd access. See `docs/huggingface-gated-models.md` for detailed instructions.
### Step 3: Create Your Prompt Template

Your prompt template should be a Jinja file that formats the chess position and legal moves for your model. Examples are available in the `player_agents/` directory:

- `llm_agent_prompt_template.jinja` - for general LLM agents
- `sft_agent_prompt_template.jinja` - for supervised fine-tuned agents
- `random_agent_prompt_template.jinja` - minimal template example
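For orientation, a minimal template might look like the sketch below. The variable names (`fen`, `side_to_move`, `legal_moves`) are assumptions for illustration only; check the shipped templates for the variables the evaluation harness actually passes in:

```jinja
You are a chess assistant. Reply with a one-sentence rationale and one legal move.

Position (FEN): {{ fen }}
Side to move: {{ side_to_move }}
Legal moves: {{ legal_moves | join(', ') }}

Respond as <think>...</think> followed by <uci_move>...</uci_move>.
```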
### Step 4: Configure Your Submission

Edit the `aicrowd_submit.sh` file with your submission details:

```bash
# Configuration variables
CHALLENGE="global-chess-challenge-2025"
HF_REPO="YOUR_HF_USERNAME/YOUR_MODEL_NAME"  # e.g., "Qwen/Qwen3-0.6B"
HF_REPO_TAG="main"                          # or a specific branch/tag
PROMPT_TEMPLATE="player_agents/YOUR_PROMPT_TEMPLATE.jinja"
```

Configuration parameters:

- `CHALLENGE`: the challenge identifier (keep as `global-chess-challenge-2025`)
- `HF_REPO`: your Hugging Face model repository (format: `username/model-name`)
- `HF_REPO_TAG`: the branch or tag to use (typically `main`)
- `PROMPT_TEMPLATE`: path to your prompt template file
### Neuron (Trainium) submissions + vLLM tuning

If you're submitting a model that will run on AWS Neuron hardware (Trainium), make sure you pick the correct `--neuron.model-type` and review the supported `--vllm.*` flags you can pass to `aicrowd submit-model`.

See: `docs/neuron-and-vllm-tuning.md`
### Step 5: Submit Your Model

Once configured, run the submission script:

```bash
bash aicrowd_submit.sh
```

Or submit directly using the AIcrowd CLI:

```bash
aicrowd submit-model \
  --challenge "global-chess-challenge-2025" \
  --hf-repo "YOUR_HF_USERNAME/YOUR_MODEL_NAME" \
  --hf-repo-tag "main" \
  --prompt-template-path "player_agents/YOUR_PROMPT_TEMPLATE.jinja"
```
## Frequently Asked Questions
### How are games evaluated?

Games are played in round-robin tournaments, and final rankings are determined by average centipawn loss (ACPL). Each move is also checked for legality and compared against Stockfish to compute its centipawn loss (CPL).
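As a rough illustration of the metric (the exact aggregation used by the organizers may differ), average centipawn loss is the mean gap between the engine's evaluation of its preferred move and of the move actually played:

```python
def average_centipawn_loss(best_evals, played_evals):
    """Mean of max(0, best - played) over all moves, in centipawns.

    Both lists hold evaluations from the mover's perspective. Clamping at
    zero means a move at least as good as the engine's choice costs nothing.
    """
    losses = [max(0, b - p) for b, p in zip(best_evals, played_evals)]
    return sum(losses) / len(losses) if losses else 0.0

# Hypothetical per-move evaluations over a four-move stretch:
# per-move losses are 0, 20, 0, 30 centipawns.
print(average_centipawn_loss([30, 25, 40, 10], [30, 5, 40, -20]))  # 12.5
```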
### Can I use external tools?
Your agent must be self-contained and run without network access during evaluation. You can use Stockfish locally during training. During inference, we only run the LLM.
Best of Luck :tada: :tada: