# SynthRAD2025 – Task 1 & Task 2 Solutions

This repository contains our solutions for the MICCAI Grand Challenge SynthRAD2025, focusing on Task 1 and Task 2.
Our team achieved 1st place on the post-challenge leaderboard for both tasks (with Task 1 also ranking 1st during the official test phase).
## Challenge Overview

- Task 1: MRI-to-CT synthesis (MR → sCT)
- Task 2: CBCT-to-CT synthesis (CBCT → sCT)
Our methods emphasize robust image synthesis, reproducible pipelines, and multi-region generalization.
## File Descriptions

### `docker_task_1/`

Contains all files needed to build and run the Docker image for Task 1 (MR → sCT).

- `process.py`: inference script that converts MR images into synthetic CT (sCT).

### `docker_task_2/`

Contains all files needed to build and run the Docker image for Task 2 (CBCT → sCT).

- `process.py`: inference script that converts CBCT images into synthetic CT (sCT).
### Normalization Config Files

- `260_gt_nnUNetResEncUNetLPlans.json` / `540_gt_nnUNetResEncUNetLPlans.json`: normalization configuration for the Abdomen region.
- `262_gt_nnUNetResEncUNetLPlans.json` / `542_gt_nnUNetResEncUNetLPlans.json`: normalization configuration for the Head & Neck region.
- `264_gt_nnUNetResEncUNetLPlans.json` / `544_gt_nnUNetResEncUNetLPlans.json`: normalization configuration for the Thorax region.

These files are essential for inverse normalization, ensuring that the synthesized CT intensities are mapped back to their correct clinical ranges.
### `Dockerfile`

Defines all steps and dependencies needed to build the Docker image, ensuring reproducibility and consistency across environments.

### `base_algorithm/`

Contains the baseline algorithm files provided by the official SynthRAD2025 template, serving as the foundation for our solution.

### `build.sh`

Shell script for automating the Docker build process.

### `export.sh`

Shell script for exporting the built Docker image into a compressed archive for submission or deployment.

### `requirements.txt`

Lists all Python dependencies required to run the code.

### `revert_normalisation.py`

Script that applies inverse normalization to synthesized CT outputs, restoring them to the correct intensity distributions for downstream evaluation.
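Conceptually, inverse normalization reads the per-region intensity statistics from the matching plans file and maps the model output back to Hounsfield-like values. Below is a minimal sketch of that idea; the JSON key names (`foreground_intensity_properties_per_channel`, `mean`, `std`) follow the standard nnU-Net v2 plans layout and are an assumption here, so check them against the actual config files in this repository before relying on this.

```python
import json

import numpy as np


def revert_ct_normalisation(sct: np.ndarray, plans_path: str) -> np.ndarray:
    """Invert z-score normalization using statistics from a plans JSON file.

    NOTE: the key names below are assumed from the nnU-Net v2 plans format;
    adapt them if the repository's config files use a different layout.
    """
    with open(plans_path) as f:
        plans = json.load(f)
    stats = plans["foreground_intensity_properties_per_channel"]["0"]
    # Forward normalization is x_norm = (x - mean) / std, so invert it:
    return sct * stats["std"] + stats["mean"]
```

The actual pipeline uses `revert_normalisation.py` for this step; the sketch above only illustrates why the plans files are required at inference time.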
## Getting Started
### `nnunet_results/`

Before starting inference, create a folder named `nnunet_results/` inside `docker_task_1/` (and likewise inside `docker_task_2/`) and place your trained models under that directory so that inference can correctly locate and load them.
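The expected layout can be prepared like this (the model folder name in the comment is an illustrative placeholder, not a path shipped with the repo):

```shell
# Create the model directories expected by the inference scripts.
mkdir -p docker_task_1/nnunet_results
mkdir -p docker_task_2/nnunet_results

# Then copy your trained model folders into them, for example:
#   cp -r /path/to/trained_model docker_task_1/nnunet_results/
```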
### 1. Build the Docker Image

```shell
cd docker_task_1
bash build.sh
```
To test the algorithm locally, you can run the Docker container with GPU support, a memory limit, and a larger shared memory (`/dev/shm`) size (e.g., 8 GB).
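A local test run might look like the following; the image tag (`synthrad2025_task1`), the memory limit, and the mount paths are assumptions, so match them to what `build.sh` actually produces and to your local data layout:

```shell
# Assumed image tag and host paths; adjust to your setup.
# --gpus all    : expose the host GPUs to the container
# --memory 16g  : cap container memory
# --shm-size 8g : enlarge /dev/shm for data-loader workers
docker run --rm \
    --gpus all \
    --memory 16g \
    --shm-size 8g \
    -v /path/to/input:/input:ro \
    -v /path/to/output:/output \
    synthrad2025_task1
```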