---
license: mit
task_categories:
  - image-classification
  - visual-question-answering
  - image-to-text
tags:
  - 3d-printing
  - manufacturing
  - quality-control
  - vision-language
size_categories:
  - 1K<n<10K
pretty_name: TL-Caxton - 3D Printing Quality Assessment Dataset
---

# 3D Printing Nozzle Images Dataset

## Dataset Summary

- **Task**: Vision-based flow rate estimation and extrusion quality assessment
- **Domain**: Additive Manufacturing / 3D Printing
- **Data Type**: RGB images with numerical annotations
- **Total Samples**: 4,048 images
  - Training: 3,407 samples
  - Validation: 331 samples
  - Test: 310 samples

## Supported Tasks

1. **Flow Rate Regression**: Predict the flow rate percentage from camera images of the printing process
2. **Extrusion Quality Classification**: Classify prints as under-extruded (<90%), good extrusion (90-110%), or over-extruded (>110%); a label-mapping sketch follows this list
3. **Vision-Language Modeling**: Generate natural language descriptions of print quality from images
4. **Visual Question Answering**: Answer questions about print parameters and quality from images
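
The classification thresholds above translate directly into a label mapping. A minimal sketch (the function name and label strings are illustrative, not part of the dataset):

```python
def extrusion_class(flow_rate: float) -> str:
    """Map a flow rate percentage to an extrusion quality label.

    Thresholds follow the task definition above:
    <90% under-extruded, 90-110% good extrusion, >110% over-extruded.
    """
    if flow_rate < 90:
        return "under-extruded"
    if flow_rate <= 110:
        return "good extrusion"
    return "over-extruded"

print(extrusion_class(72.5))   # under-extruded
print(extrusion_class(100.0))  # good extrusion
print(extrusion_class(135.0))  # over-extruded
```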

## Dataset Structure

### Data Fields

Each sample contains:

- `img_path` (string): Filename of the camera image
- `flow_rate` (float): Flow rate percentage (ranging from ~39% to ~265%)
- `nozzle_tip_x` (int): X-coordinate of the nozzle tip position, in pixels
- `nozzle_tip_y` (int): Y-coordinate of the nozzle tip position, in pixels
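
For orientation, a single record looks roughly like this (the values are invented for illustration; only the field names and the flow rate range come from the dataset):

```python
sample = {
    "img_path": "image-0001.jpg",  # hypothetical filename
    "flow_rate": 100.0,            # percentage; roughly 39-265 across the dataset
    "nozzle_tip_x": 512,           # pixel coordinates of the nozzle tip
    "nozzle_tip_y": 384,
}
```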

### Data Splits

| Split      | Samples | Percentage |
|------------|---------|------------|
| Train      | 3,407   | 84.2%      |
| Validation | 331     | 8.2%       |
| Test       | 310     | 7.6%       |
| **Total**  | 4,048   | 100%       |

### Qualitative Descriptions

The dataset includes JSON template files for generating natural language descriptions:

- `general_statements.json`: General observations about the 3D printing nozzle and process
- `qual_good_extrusion.json`: Descriptions of good extrusion quality (flow rate 90-110%)
- `qual_under_extrusion.json`: Descriptions of under-extrusion issues (flow rate < 90%)
- `qual_over_extrusion.json`: Descriptions of over-extrusion issues (flow rate > 110%)
- `quant_templates.json`: Templates for stating quantitative flow rate values

These templates enable synthetic generation of diverse natural language annotations for vision-language training.
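
As a rough sketch of how the templates could be combined, assuming each file holds a JSON list of template strings and that the quantitative templates contain a `{flow_rate}` placeholder (the actual schema may differ; the `data_utils` helpers used later in this README are the intended way to do this):

```python
import json
import random

def build_annotation(flow_rate: float) -> str:
    # Pick the qualitative template file matching the flow rate band
    if flow_rate < 90:
        qual_file = "qual_under_extrusion.json"
    elif flow_rate <= 110:
        qual_file = "qual_good_extrusion.json"
    else:
        qual_file = "qual_over_extrusion.json"

    # Assumes each file is a JSON list of template strings
    with open("general_statements.json") as f:
        general = random.choice(json.load(f))
    with open("quant_templates.json") as f:
        quant = random.choice(json.load(f)).format(flow_rate=round(flow_rate))
    with open(qual_file) as f:
        qual = random.choice(json.load(f))

    return " ".join([general, quant, qual])
```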

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("cemag/tl-caxton")

# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']

# Example: access a sample
sample = train_data[0]
print(f"Flow rate: {sample['flow_rate']}%")
print(f"Nozzle position: ({sample['nozzle_tip_x']}, {sample['nozzle_tip_y']})")
```

### Using with PyTorch

```python
import os

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms

class CIPHERDataset(Dataset):
    """Wraps a Hugging Face split and loads the corresponding image files from disk."""

    def __init__(self, dataset, image_dir, transform=None):
        self.dataset = dataset
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        img_path = os.path.join(self.image_dir, sample['img_path'])
        image = Image.open(img_path).convert('RGB')

        if self.transform:
            image = self.transform(image)

        return {
            'image': image,
            'flow_rate': sample['flow_rate'],
            'nozzle_tip': (sample['nozzle_tip_x'], sample['nozzle_tip_y'])
        }

# Example transform; adjust resolution and normalization to your model
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Create dataset and dataloader
train_dataset = CIPHERDataset(train_data, 'images/', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
```
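
Because each sample carries the nozzle tip coordinates, a common preprocessing step is to crop a region of interest around the extrusion zone before applying the transform. A minimal sketch, assuming a fixed square crop (the 320 px size is an arbitrary choice, not a value specified by the dataset):

```python
from PIL import Image

def crop_around_nozzle(image: Image.Image, x: int, y: int, size: int = 320) -> Image.Image:
    """Crop a size x size window centred on the nozzle tip, keeping it inside the frame."""
    half = size // 2
    left = max(0, min(x - half, image.width - size))
    top = max(0, min(y - half, image.height - size))
    return image.crop((left, top, left + size, top + size))
```

Such a crop would go inside `CIPHERDataset.__getitem__`, just before `self.transform` is applied.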

### Vision-Language Training

```python
from data_utils import synthesize_answer, format_data

# Generate a natural language description for a sample
sample = train_data[0]
description = synthesize_answer(sample, general=True, quant=True, qual=True)
print(description)

# Example output:
# "This is the nozzle of a 3D printer. The observed flow rate is approximately
#  100%. Good extrusion occurs when a 3D printer delivers the exact amount of
#  filament needed, resulting in strong, accurate, and visually appealing prints."
```
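
To turn these descriptions into supervised pairs for a vision-language model, one option is a simple chat-style record. The layout below is purely illustrative (adapt the keys and prompt to your training framework; `format_data` from `data_utils` may already provide the intended formatting):

```python
def to_vlm_record(image_path: str, description: str) -> dict:
    # Illustrative question/answer layout; not a format defined by the dataset
    return {
        "image": image_path,
        "conversations": [
            {"role": "user", "content": "Assess the extrusion quality in this image."},
            {"role": "assistant", "content": description},
        ],
    }

record = to_vlm_record(sample['img_path'], description)
```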

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{tl_caxton,
  title     = {tl-Caxton: 3D Printing Quality Assessment Dataset},
  author    = {cemag},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/cemag/tl-caxton}
}

@article{MargadjiPattinson2025HybridReasoning,
  title  = {Hybrid Reasoning for Perception, Explanation, and Autonomous Action in Manufacturing},
  author = {Margadji, Christos and Pattinson, Sebastian W.},
  year   = {2025},
  note   = {arXiv:2506.08462},
  url    = {https://arxiv.org/abs/2506.08462}
}
```

## License

This dataset is released under the MIT License.

## Contact

For questions or issues regarding this dataset, please open an issue on the dataset repository or email cm2161@cam.ac.uk.