Datasets · Tasks: Image Classification · Modalities: Image · Formats: parquet · Sub-tasks: multi-class-image-classification · Languages: English · Size: 100K - 1M
docs: fill in dataset card sections (bias, limitations, curation rationale) #6
by samtuckervegan - opened

README.md CHANGED
@@ -336,51 +336,64 @@ The data instances have the following fields:
 
 ### Curation Rationale
 
-
+Food-101 was constructed to create a challenging fine-grained image classification benchmark for computer vision research. The 101 categories were chosen to represent dishes commonly photographed and shared on the social platform Foodspotting.com, with an intentional emphasis on visually distinct dishes to make category discrimination non-trivial. The benchmark was designed to evaluate models capable of discriminating between foods that differ subtly in appearance (e.g., various noodle dishes) as well as dishes with high intra-class variance (e.g., pizza with many possible toppings).
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-
+Images were retrieved from [Foodspotting](http://www.foodspotting.com/), a social food photography platform where users shared photos of dishes eaten at restaurants. Category labels were assigned using the platform's dish taxonomy. Each category was populated with 1,000 images: 750 for training and 250 for testing. Test images were manually reviewed for label correctness; training images were left uncleaned and intentionally retain label noise and image artifacts. All images were resized so that the maximum side length is 512 pixels.
 
 #### Who are the source language producers?
 
-
+The images were contributed by Foodspotting.com users photographing dishes primarily in restaurant settings. The geographic distribution of source photos reflects the user base of Foodspotting.com, which was predominantly North American and Western European at the time of collection (c. 2014). Class labels are in English.
 
 ### Annotations
 
 #### Annotation process
 
-
+Test set labels (250 images per class) were verified by human annotators. Training set labels (750 images per class) were assigned automatically based on the Foodspotting dish taxonomy and were not manually reviewed, resulting in a known level of label noise in the training split.
 
 #### Who are the annotators?
 
-
+Test set annotations were reviewed by the dataset authors at ETH Zurich. Training set labels derive from Foodspotting.com's crowdsourced dish tagging and were not independently verified.
 
 ### Personal and Sensitive Information
 
-
+Images contain photographs of restaurant dishes. Some images may incidentally include people's hands or partial faces in the background, as photos were taken in social dining settings. No systematic attempt was made to identify or redact personal information.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
-
+Food-101 has been widely adopted as a benchmark for food image classification models, which are deployed in applications including restaurant menu recognition, dietary logging, nutrition estimation, and food recommendation systems. Models trained on this benchmark may be integrated into consumer products that influence dietary choices, medical nutrition tracking, and health recommendations.
+
+The dataset's category distribution reflects dishes photographed on a predominantly North American and European social platform. Models trained on Food-101 may underperform on dishes from cuisines not well represented in the benchmark. Practitioners deploying food classification models in global or multicultural contexts should evaluate performance across the specific cuisines relevant to their use case before deployment.
 
 ### Discussion of Biases
 
-
+**Category selection bias:** The 101 categories were drawn from dishes popular on Foodspotting.com as of c. 2014, reflecting the platform's predominantly North American and Western European user base. Many cuisines with large global populations are absent or underrepresented (e.g., most Sub-Saharan African, Central Asian, and many South American regional cuisines).
+
+**Dietary category imbalance:** Of the 101 classes, approximately 12-15 are predominantly or exclusively plant-based (including `edamame`, `falafel`, `guacamole`, `hummus`, `seaweed_salad`, `beet_salad`, and `greek_salad`). The majority of categories contain or are defined by animal-derived ingredients. Models fine-tuned on Food-101 and subsequently used for dietary classification tasks (e.g., identifying plant-based or vegetarian dishes) should be evaluated carefully: the skewed class distribution may cause such models to underperform on plant-based categories relative to their performance on the overall benchmark.
+
+**Label noise in training split:** Training images were explicitly not cleaned, as noted by the original authors. Images sometimes carry incorrect labels or depict foods that visually resemble but differ from the target category. This noise affects the reliability of the training signal, particularly for categories with high visual similarity.
+
+**Photography style bias:** All images come from a social photography platform where users photograph prepared dishes in restaurant settings. Home-cooked meals, street food, or regional variations of the same dish may not be well captured. Image composition, lighting, and presentation style reflect the social photography norms of the early 2010s.
+
+**Image recency:** Data was collected circa 2014. Food presentation styles, plating aesthetics, and the relative popularity of specific dishes have evolved since then.
 
 ### Other Known Limitations
 
-
+- The training split intentionally contains noisy labels. Performance metrics computed on the training set are not reliable; only test-set metrics should be reported.
+
+- With 250 test images per class, the benchmark may have insufficient statistical power to detect performance differences for rare presentation styles or long-tail variations within a category.
+
+- The dataset does not include nutritional metadata, ingredient lists, or preparation method information. It cannot be used directly for nutrition analysis or ingredient detection without augmentation from external sources such as [Open Food Facts](https://world.openfoodfacts.org/) or [USDA FoodData Central](https://fdc.nal.usda.gov/).
+
+- Benchmarks covering additional cuisines include [UEC Food-100 and UEC Food-256](http://foodcam.mobi/dataset.html), which focus on Japanese dishes.
 
 ## Additional Information
 
 ### Dataset Curators
 
-
+The Food-101 dataset was created by Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool at ETH Zurich (Swiss Federal Institute of Technology), published at the European Conference on Computer Vision (ECCV) 2014. The dataset homepage is maintained by the Computer Vision Laboratory, ETH Zurich: https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/
 
 ### Licensing Information
 
@@ -408,4 +421,4 @@ LICENSE AGREEMENT
 
 ### Contributions
 
-Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
+Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
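The new Source Data section states that each of the 101 categories holds 750 training and 250 test images, and that images were resized so the maximum side length is 512 pixels. A minimal sketch checking that arithmetic; the `resize_max_side` helper is hypothetical (not part of the dataset tooling), and illustrates only the aspect-preserving downscale the card describes:

```python
# Sanity-check the split arithmetic stated in the card:
# 101 classes x (750 train + 250 test) images per class.
NUM_CLASSES = 101
TRAIN_PER_CLASS = 750
TEST_PER_CLASS = 250

train_size = NUM_CLASSES * TRAIN_PER_CLASS  # 75,750 training images
test_size = NUM_CLASSES * TEST_PER_CLASS    # 25,250 test images
total = train_size + test_size              # 101,000 -> the "100K - 1M" size bucket


def resize_max_side(width: int, height: int, max_side: int = 512) -> tuple:
    """Hypothetical helper: scale (width, height) down so the longer side
    is at most max_side, preserving aspect ratio (no upscaling)."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)


print(train_size, test_size, total)        # 75750 25250 101000
print(resize_max_side(1024, 768))          # (512, 384)
print(resize_max_side(400, 300))           # unchanged: already within 512
```

Note that images whose longer side is already at or below 512 pixels are left unchanged, consistent with the card's description of a maximum (not fixed) side length.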