# CLIP v2 Camping Spot Detector
Fine-tuned CLIP model for identifying suitable camping locations from satellite imagery.
## Model Details
- Base Model: OpenAI CLIP ViT-B/32
- Training Dataset: 3,161 satellite images (70% train, 15% val, 15% test)
- Training Duration: 30 epochs on Tesla T4 GPU
- Architecture: CLIP vision encoder + custom binary classification head
- Input: Satellite image (base64 encoded)
- Output: Campability score (0-100)
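Since the model takes its input as a base64-encoded image, a small helper can produce that payload (a sketch; the exact request schema is an assumption):

```python
import base64

def encode_image(data: bytes) -> str:
    """Base64-encode raw image bytes for the model's input payload."""
    return base64.b64encode(data).decode("ascii")

# Usage (assumed filename):
# with open("satellite_image.jpg", "rb") as f:
#     payload = encode_image(f.read())
```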
## Performance Metrics
- Test Accuracy: 93.07%
- Precision: 95.35%
- Recall: 96.53%
- F1-Score: 95.94%
### Performance Breakdown
- Campable spots detection: 96.53% recall
- False positive rate: 4.65%
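As a quick sanity check, the reported F1-score is the harmonic mean of the precision and recall above:

```python
precision = 0.9535
recall = 0.9653

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(f"F1: {f1 * 100:.2f}%")  # → 95.94%, matching the reported F1-score
```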
## Usage
### Python (Local)
```python
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

# Load the base CLIP model and processor
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Load the fine-tuned binary classification head
# (exact head architecture assumed: a single linear layer over the 512-dim CLIP features)
classifier = nn.Linear(512, 2)
checkpoint = torch.load("clip_v2_best.pth")
classifier.load_state_dict(checkpoint)

# Load image
image = Image.open("satellite_image.jpg")

# Process and classify
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_features = model.get_image_features(**inputs)
    classifier_output = classifier(image_features)
    score = torch.softmax(classifier_output, dim=1)[0][1]  # Campable probability

print(f"Campability Score: {score * 100:.1f}%")
```
### Hugging Face Inference API
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Soviet-Bear/clip-camping-v2"
headers = {"Authorization": f"Bearer {HF_TOKEN}"}  # HF_TOKEN: your Hugging Face access token

with open("image.jpg", "rb") as f:
    data = f.read()

response = requests.post(API_URL, headers=headers, data=data)
result = response.json()
```
## Training Configuration
- Batch size: 32
- Learning rate: 0.001 (Adam optimizer)
- Epochs: 30
- Warmup: First 3 epochs
- Image size: 224x224
- Feature dimension: 512
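The warmup described above can be sketched as a simple per-epoch schedule (a minimal reconstruction; the warmup granularity and ramp shape are assumptions):

```python
# Hypothetical reconstruction of the LR schedule: Adam base LR 0.001
# with a linear ramp over the first 3 epochs, then held constant.
BASE_LR = 0.001
WARMUP_EPOCHS = 3
TOTAL_EPOCHS = 30

def lr_at_epoch(epoch: int) -> float:
    """Return the learning rate used for a given (0-indexed) epoch."""
    if epoch < WARMUP_EPOCHS:
        return BASE_LR * (epoch + 1) / WARMUP_EPOCHS
    return BASE_LR

# Ramps to the base LR by epoch 3, then stays flat for the remaining epochs
schedule = [lr_at_epoch(e) for e in range(TOTAL_EPOCHS)]
```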
## Dataset
Training data sourced from:
- Park4Night camping locations (positive samples)
- OSM terrain unsuitable for camping (negative samples)
- Geographic distribution across multiple continents
## Model Files
- `clip_v2_best.pth`: Trained classifier weights
- `config.json`: Model configuration
- `model_index.json`: Pipeline specification
- `training_results.json`: Complete training metrics
- `training_history.png`: Training/validation curves
## Limitations
- Optimized for satellite imagery at 224x224 resolution
- Trained on specific terrain types; may not generalize to all regions
- Requires clear satellite imagery (cloud cover reduces accuracy)
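Given the 224x224 constraint above, larger tiles can be downsampled before inference (a minimal Pillow sketch; in practice `CLIPProcessor` performs this resize as part of preprocessing):

```python
from PIL import Image

def prepare_tile(image: Image.Image, size: int = 224) -> Image.Image:
    """Resize a satellite tile to the model's expected input resolution."""
    return image.resize((size, size))

# Usage with an in-memory placeholder tile
tile = Image.new("RGB", (512, 512))
print(prepare_tile(tile).size)  # → (224, 224)
```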
## Citation
If you use this model, please cite:
```bibtex
@misc{clip-camping-v2,
  title={CLIP v2 Camping Spot Detector},
  author={Soviet-Bear},
  year={2025}
}
```
## License
This model derivative is based on OpenAI's CLIP model. See LICENSE for details.