---
license: mit
task_categories:
- image-classification
- visual-question-answering
- image-to-text
tags:
- 3d-printing
- manufacturing
- quality-control
- vision-language
size_categories:
- 1K<n<10K
pretty_name: TL-Caxton - 3D Printing Quality Assessment Dataset
---
# 3D Printing Nozzle Images Dataset
### Dataset Summary
- **Task**: Vision-based flow rate estimation and extrusion quality assessment
- **Domain**: Additive Manufacturing / 3D Printing
- **Data Type**: RGB images with numerical annotations
- **Total Samples**: 4,048 images
- Training: 3,407 samples
- Validation: 331 samples
- Test: 310 samples
### Supported Tasks
1. **Flow Rate Regression**: Predict the flow rate percentage from camera images of the printing process
2. **Extrusion Quality Classification**: Classify prints as under-extruded (<90%), well-extruded (90-110%), or over-extruded (>110%); see the threshold sketch after this list
3. **Vision-Language Modeling**: Generate natural language descriptions of print quality from images
4. **Visual Question Answering**: Answer questions about print parameters and quality from images
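The thresholds in task 2 translate directly into class labels. A minimal sketch (the function name and label strings are our own, not part of the dataset):

```python
def extrusion_class(flow_rate: float) -> str:
    """Map a flow-rate percentage to one of the three extrusion classes."""
    if flow_rate < 90:
        return "under-extrusion"
    if flow_rate > 110:
        return "over-extrusion"
    return "good extrusion"
```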
## Dataset Structure
### Data Fields
Each sample contains:
- **`img_path`** (string): Filename of the camera image
- **`flow_rate`** (float): Flow rate percentage value (ranging from ~39% to ~265%)
- **`nozzle_tip_x`** (int): X-coordinate of nozzle tip position in pixels
- **`nozzle_tip_y`** (int): Y-coordinate of nozzle tip position in pixels (see the cropping sketch below for one use of these coordinates)
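The nozzle-tip coordinates make it straightforward to crop a region of interest around the deposition point before training. A minimal sketch, where the helper name and the 224-pixel window size are illustrative choices, not part of the dataset:

```python
from PIL import Image


def crop_nozzle_region(img_path: str, tip_x: int, tip_y: int,
                       size: int = 224) -> Image.Image:
    """Crop a size x size window centred on the nozzle tip,
    clamped so the window stays inside the image bounds."""
    image = Image.open(img_path).convert("RGB")
    half = size // 2
    left = max(0, min(tip_x - half, image.width - size))
    top = max(0, min(tip_y - half, image.height - size))
    return image.crop((left, top, left + size, top + size))
```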
### Data Splits
| Split | Samples | Percentage |
|-------|---------|------------|
| Train | 3,407 | 84.2% |
| Validation | 331 | 8.2% |
| Test | 310 | 7.6% |
| **Total** | **4,048** | **100%** |
### Qualitative Descriptions
The dataset includes JSON template files for generating natural language descriptions:
- **`general_statements.json`**: General observations about the 3D printing nozzle and process
- **`qual_good_extrusion.json`**: Descriptions of good extrusion quality (flow rate 90-110%)
- **`qual_under_extrusion.json`**: Descriptions of under-extrusion issues (flow rate < 90%)
- **`qual_over_extrusion.json`**: Descriptions of over-extrusion issues (flow rate > 110%)
- **`quant_templates.json`**: Templates for stating quantitative flow rate values
These templates enable synthetic generation of diverse natural language annotations for vision-language training.
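As a concrete illustration of how the templates compose, here is a minimal sketch. The flat-list JSON layout and the `{flow_rate}` placeholder are assumptions; the repository's `data_utils.synthesize_answer` (shown under Usage below) implements the actual composition.

```python
import json
import random


def synthesize_description(flow_rate: float) -> str:
    """Compose one synthetic caption from the template files.

    Assumes each JSON file holds a flat list of template strings and that
    quantitative templates contain a ``{flow_rate}`` placeholder.
    """
    def pick(path: str) -> str:
        with open(path) as f:
            return random.choice(json.load(f))

    if flow_rate < 90:
        qual = pick("qual_under_extrusion.json")
    elif flow_rate > 110:
        qual = pick("qual_over_extrusion.json")
    else:
        qual = pick("qual_good_extrusion.json")

    general = pick("general_statements.json")
    quant = pick("quant_templates.json").format(flow_rate=round(flow_rate))
    return " ".join([general, quant, qual])
```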
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("cemag/tl-caxton")

# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']

# Example: access a sample
sample = train_data[0]
print(f"Flow rate: {sample['flow_rate']}%")
print(f"Nozzle position: ({sample['nozzle_tip_x']}, {sample['nozzle_tip_y']})")
```
### Using with PyTorch
```python
import os

from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class CIPHERDataset(Dataset):
    """Wraps a Hugging Face split and loads images from a local directory."""

    def __init__(self, dataset, image_dir, transform=None):
        self.dataset = dataset
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        img_path = os.path.join(self.image_dir, sample['img_path'])
        image = Image.open(img_path).convert('RGB')
        if self.transform:
            image = self.transform(image)
        return {
            'image': image,
            'flow_rate': sample['flow_rate'],
            'nozzle_tip': (sample['nozzle_tip_x'], sample['nozzle_tip_y'])
        }


# Any torchvision transform pipeline works; ToTensor is the minimum
# needed so the default collate function can batch the images.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Create dataset and dataloader
train_dataset = CIPHERDataset(train_data, 'images/', transform=transform)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
```
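With the dataloader above, flow-rate regression reduces to a standard supervised loop. The following is a minimal sketch, not the authors' training recipe: the ResNet-18 backbone, optimizer, and learning rate are illustrative choices, and it assumes the transform includes `transforms.ToTensor()` so images arrive as tensors.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative regression baseline: ResNet-18 with a single-output head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

model.train()
for batch in train_loader:
    images = batch['image']                            # (B, 3, 224, 224)
    targets = batch['flow_rate'].float().unsqueeze(1)  # (B, 1) flow-rate %
    optimizer.zero_grad()
    preds = model(images)
    loss = criterion(preds, targets)
    loss.backward()
    optimizer.step()
```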
### Vision-Language Training
```python
from data_utils import synthesize_answer

# Generate a natural language description for a sample
sample = train_data[0]
description = synthesize_answer(sample, general=True, quant=True, qual=True)
print(description)

# Example output:
# "This is the nozzle of a 3D printer. The observed flow rate is approximately
# 100%. Good extrusion occurs when a 3D printer delivers the exact amount of
# filament needed, resulting in strong, accurate, and visually appealing prints."
```
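For the visual question answering task (task 4 above), question-answer pairs can likewise be derived from the numeric annotations. A minimal sketch with hypothetical question phrasing, reusing the `extrusion_class` helper sketched earlier:

```python
def make_vqa_pair(sample: dict) -> tuple[str, str]:
    """Build one (question, answer) pair from a sample's annotations."""
    question = ("What is the approximate flow rate in this image, "
                "and is the print under-, well-, or over-extruded?")
    answer = (f"The flow rate is approximately {round(sample['flow_rate'])}%, "
              f"which corresponds to {extrusion_class(sample['flow_rate'])}.")
    return question, answer
```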
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{tl_caxton,
  title        = {TL-Caxton: 3D Printing Quality Assessment Dataset},
  author       = {cemag},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/cemag/tl-caxton}}
}
```
```bibtex
@article{MargadjiPattinson2025HybridReasoning,
  title  = {Hybrid Reasoning for Perception, Explanation, and Autonomous Action in Manufacturing},
  author = {Margadji, Christos and Pattinson, Sebastian W.},
  year   = {2025},
  note   = {arXiv:2506.08462},
  url    = {https://arxiv.org/abs/2506.08462}
}
```
## License
This dataset is released under the MIT License.
## Contact
For questions or issues regarding this dataset, please open an issue on the dataset repository or email cm2161@cam.ac.uk.