Columns: problem_id (string, 1-66 chars) · category (string, 2 classes) · statement (string, 0-20.2k chars) · config (string, 20-380 chars)
gemm_optimization/k_skewed
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of matrix data - **Block tiling**: Optimal block sizes for GPU execution - **Autotuning**: Leveraging Triton's autotuning capabilities - **Activation functions**: Implementing GELU activation within the kernel - **Performance benchmarking**: Achieving speedup over baseline implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: """ Matrix multiplication with GELU activation. Args: a: Input tensor of shape (M, K) b: Input tensor of shape (K, N) Returns: Output tensor of shape (M, N) with GELU activation applied """ pass ``` Required GELU Implementation: ```python @triton.jit def gelu(x): return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476)) ``` API Usage Notes --------------- - The evaluator looks for a `matmul` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must apply GELU activation to the result before returning Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details (K-skewed variant) ------------------------------------ - Shapes emphasize very small and very large K with varied (M, N): - Base (M, N): (1024,1024), (2048,512), (512,2048), (4096,256), (256,4096) - K values: small K = [32, 48, 64, 96, 128]; huge K = [3072, 4096, 6144, 8192] - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support
{dependencies: {uv_project: resources}, datasets: [], tag: hpc, runtime: {docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}, environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"}}
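The statement above fixes the `matmul` API and the exact GELU formula but leaves the kernel design open. Below is a minimal, hedged sketch of one way to structure it: an autotuned Triton kernel with masked loads, fp32 accumulation, and GELU fused before the store. The block sizes, autotune key, and grid function are illustrative assumptions, not the evaluator's reference implementation.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def gelu(x):
    # Exact erf-based GELU required by the problem statement.
    return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476))


@triton.autotune(
    configs=[
        triton.Config({"BLOCK_M": 128, "BLOCK_N": 128, "BLOCK_K": 32}, num_warps=8, num_stages=4),
        triton.Config({"BLOCK_M": 64, "BLOCK_N": 128, "BLOCK_K": 64}, num_warps=4, num_stages=4),
        triton.Config({"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_K": 64}, num_warps=4, num_stages=3),
    ],
    key=["M", "N", "K"],
)
@triton.jit
def _matmul_kernel(
    a_ptr, b_ptr, c_ptr,
    M, N, K,
    stride_am, stride_ak, stride_bk, stride_bn, stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k0 in range(0, K, BLOCK_K):
        k_idxs = k0 + offs_k
        a_ptrs = a_ptr + offs_m[:, None] * stride_am + k_idxs[None, :] * stride_ak
        b_ptrs = b_ptr + k_idxs[:, None] * stride_bk + offs_n[None, :] * stride_bn
        # Masked loads handle tail tiles in every dimension.
        a = tl.load(a_ptrs, mask=(offs_m[:, None] < M) & (k_idxs[None, :] < K), other=0.0)
        b = tl.load(b_ptrs, mask=(k_idxs[:, None] < K) & (offs_n[None, :] < N), other=0.0)
        acc += tl.dot(a, b)
    # Fuse the required GELU before casting back to the output dtype.
    acc = gelu(acc)
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    c_mask = (offs_m[:, None] < M) & (offs_n[None, :] < N)
    tl.store(c_ptrs, acc.to(c_ptr.dtype.element_ty), mask=c_mask)


def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    M, K = a.shape
    K2, N = b.shape
    assert K == K2, "inner dimensions must match"
    c = torch.empty((M, N), device=a.device, dtype=a.dtype)
    grid = lambda meta: (triton.cdiv(M, meta["BLOCK_M"]), triton.cdiv(N, meta["BLOCK_N"]))
    _matmul_kernel[grid](
        a, b, c, M, N, K,
        a.stride(0), a.stride(1), b.stride(0), b.stride(1), c.stride(0), c.stride(1),
    )
    return c
```

For the K-skewed shapes in this variant, a natural next step would be to extend the autotune list with small-BLOCK_K configs for K in the 32-128 range and deeper-pipelined or split-K style configs for K up to 8192.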
gemm_optimization/near_tile
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of matrix data - **Block tiling**: Optimal block sizes for GPU execution - **Autotuning**: Leveraging Triton's autotuning capabilities - **Activation functions**: Implementing GELU activation within the kernel - **Performance benchmarking**: Achieving speedup over baseline implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: """ Matrix multiplication with GELU activation. Args: a: Input tensor of shape (M, K) b: Input tensor of shape (K, N) Returns: Output tensor of shape (M, N) with GELU activation applied """ pass ``` Required GELU Implementation: ```python @triton.jit def gelu(x): return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476)) ``` API Usage Notes --------------- - The evaluator looks for a `matmul` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must apply GELU activation to the result before returning Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details (near-tile variant) ------------------------------------- - Shapes clustered around tile boundaries (tile M,N=128, K=64), including +/-1 and +7: - M in {127,128,129,135, 255, 385, 633} - N in {127,128,129,135, 257, 383, 643} - K in {63,64,65,71, 129, 191, 325} - Only positive dimensions up to 8192 are included; Cartesian product filtered to limits - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support Implementation Notes for Solution Authors ---------------------------------------- - Triton `tl.arange(0, BLOCK_*)` requires the range to be a power of two. Choose `BLOCK_M`, `BLOCK_N`, and especially `BLOCK_K` from powers of two (e.g., 32/64/128/256) to avoid compilation errors. - Return tensor dtype must match input dtype (fp16/bf16/fp32). Accumulate in fp32 inside the kernel, but allocate the output with `dtype=a.dtype` to pass correctness checks. 
- Provide a `Solution.solve()` that returns a static code string via `{ "code": python_source }`. Avoid reflection-based approaches (e.g., `inspect.getsource`) as modules are imported under different names during evaluation. - Respect arbitrary input strides; compute element-wise strides and use masked loads/stores for tail tiles. - Autotuning: include strides in the autotune key (e.g., `a_stride_am`, `a_stride_ak`, `b_stride_bk`, `b_stride_bn`) to ensure correct kernel specialization across layouts. - Recommended tile sets to cover near-tile cases: - `BLOCK_M/N`: {64, 128, 256} - `BLOCK_K`: {32, 64, 128} (avoid non-powers like 80)
{dependencies: {uv_project: resources}, datasets: [], tag: hpc, runtime: {docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}, environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"}}
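Since this variant clusters shapes just past tile boundaries, the coverage of the autotune space matters more than raw peak throughput. Below is a hedged sketch of a config list built from the recommended tile sets in the notes above (power-of-two BLOCK_K only); the num_warps/num_stages choices are guesses to be tuned.

```python
import triton

# Candidate tile configs matching the recommended sets in the notes above.
# All BLOCK_* values are powers of two so tl.arange compiles; the warp/stage
# heuristics here are illustrative, not measured.
NEAR_TILE_CONFIGS = [
    triton.Config({"BLOCK_M": bm, "BLOCK_N": bn, "BLOCK_K": bk},
                  num_warps=4 if bm * bn <= 64 * 128 else 8,
                  num_stages=3)
    for bm in (64, 128, 256)
    for bn in (64, 128, 256)
    for bk in (32, 64, 128)
    if bm * bn <= 256 * 128  # skip the largest tiles to bound register pressure
]
```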
gemm_optimization/rectangles
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of matrix data - **Block tiling**: Optimal block sizes for GPU execution - **Autotuning**: Leveraging Triton's autotuning capabilities - **Activation functions**: Implementing GELU activation within the kernel - **Performance benchmarking**: Achieving speedup over baseline implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: """ Matrix multiplication with GELU activation. Args: a: Input tensor of shape (M, K) b: Input tensor of shape (K, N) Returns: Output tensor of shape (M, N) with GELU activation applied """ pass ``` Required GELU Implementation: ```python @triton.jit def gelu(x): return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476)) ``` API Usage Notes --------------- - The evaluator looks for a `matmul` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must apply GELU activation to the result before returning Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details (rectangles variant) -------------------------------------- - Tall/skinny and short/wide rectangles; K is balanced: - Tall/skinny: M in [1024, 2048, 4096, 8192], N in [64, 128, 192, 256], K in [512, 1024, 2048] - Short/wide: M in [64, 128, 192, 256], N in [1024, 2048, 4096, 8192], K in [512, 1024, 2048] - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support
{dependencies: {uv_project: resources}, datasets: [], tag: hpc, runtime: {docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}, environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"}}
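For the tall/skinny and short/wide shapes in this variant, rectangular tiles that roughly match the output aspect ratio tend to waste less work in masked regions. A hedged, untuned starting set of configs (all values are assumptions to be refined by autotuning):

```python
import triton

RECTANGLE_CONFIGS = [
    # tall/skinny outputs (M >> N): large BLOCK_M, modest BLOCK_N
    triton.Config({"BLOCK_M": 128, "BLOCK_N": 64, "BLOCK_K": 64}, num_warps=4, num_stages=4),
    triton.Config({"BLOCK_M": 256, "BLOCK_N": 64, "BLOCK_K": 32}, num_warps=8, num_stages=4),
    # short/wide outputs (N >> M): modest BLOCK_M, large BLOCK_N
    triton.Config({"BLOCK_M": 64, "BLOCK_N": 128, "BLOCK_K": 64}, num_warps=4, num_stages=4),
    triton.Config({"BLOCK_M": 64, "BLOCK_N": 256, "BLOCK_K": 32}, num_warps=8, num_stages=4),
]
```

Keeping the autotune key on (M, N, K) lets Triton pick the tall/skinny or short/wide family per shape.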
gemm_optimization/squares
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of matrix data - **Block tiling**: Optimal block sizes for GPU execution - **Autotuning**: Leveraging Triton's autotuning capabilities - **Activation functions**: Implementing GELU activation within the kernel - **Performance benchmarking**: Achieving speedup over baseline implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: """ Matrix multiplication with GELU activation. Args: a: Input tensor of shape (M, K) b: Input tensor of shape (K, N) Returns: Output tensor of shape (M, N) with GELU activation applied """ pass ``` Required GELU Implementation: ```python @triton.jit def gelu(x): return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476)) ``` API Usage Notes --------------- - The evaluator looks for a `matmul` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must apply GELU activation to the result before returning Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details (squares variant) ----------------------------------- - Only square shapes with equal M=N=K from 512 to 8192, step 1024: - Shapes: (s, s, s) for s ∈ {512, 1536, 2560, 3584, 4608, 5632, 6656, 7680, 8192} - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support
{dependencies: {uv_project: resources}, datasets: [], tag: hpc, runtime: {docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}, environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"}}
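For large square GEMMs, the usual lever beyond tile size is program-id swizzling for L2 reuse. The helper below sketches the grouped-ordering remap from the Triton matmul tutorial; it assumes a 1D launch grid of `cdiv(M, BLOCK_M) * cdiv(N, BLOCK_N)` programs and a `GROUP_SIZE_M` constexpr exposed through autotune.

```python
import triton.language as tl


@triton.jit
def _grouped_pids(pid, M, N, BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                  GROUP_SIZE_M: tl.constexpr):
    # Grouped ordering: consecutive program ids walk a GROUP_SIZE_M-tall column
    # of output tiles before moving right, so the same B columns stay hot in L2.
    num_pid_m = tl.cdiv(M, BLOCK_M)
    num_pid_n = tl.cdiv(N, BLOCK_N)
    num_pid_in_group = GROUP_SIZE_M * num_pid_n
    group_id = pid // num_pid_in_group
    first_pid_m = group_id * GROUP_SIZE_M
    group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)
    pid_m = first_pid_m + ((pid % num_pid_in_group) % group_size_m)
    pid_n = (pid % num_pid_in_group) // group_size_m
    return pid_m, pid_n
```

Inside the kernel, `pid_m, pid_n = _grouped_pids(tl.program_id(0), M, N, BLOCK_M, BLOCK_N, GROUP_SIZE_M)` replaces the plain 2D program-id decomposition.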
gemm_optimization/transformerish
research
GEMM Optimization Problem ========================= Problem Setting --------------- Design and optimize high-performance Triton kernels for General Matrix-Matrix Multiplication (GEMM) on GPU. This problem focuses on implementing efficient matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of matrix data - **Block tiling**: Optimal block sizes for GPU execution - **Autotuning**: Leveraging Triton's autotuning capabilities - **Activation functions**: Implementing GELU activation within the kernel - **Performance benchmarking**: Achieving speedup over baseline implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor: """ Matrix multiplication with GELU activation. Args: a: Input tensor of shape (M, K) b: Input tensor of shape (K, N) Returns: Output tensor of shape (M, N) with GELU activation applied """ pass ``` Required GELU Implementation: ```python @triton.jit def gelu(x): return x * 0.5 * (1.0 + tl.extra.cuda.libdevice.erf(x * 0.7071067811865476)) ``` API Usage Notes --------------- - The evaluator looks for a `matmul` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must apply GELU activation to the result before returning Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details (transformer-ish variant) ------------------------------------------- - Transformer-like shapes targeting common attention/FFN dimensions: - (2048, 4096, 4096) - (4096, 4096, 4096) - (8192, 4096, 4096) - (8192, 8192, 4096) - (4096, 11008, 4096) - (4096, 4096, 11008) - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support
{dependencies: {uv_project: resources}, datasets: [], tag: hpc, runtime: {docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}, environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"}}
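Because scoring uses the median execution time on the fixed shape list above, a small local harness makes config comparisons reproducible. A hedged sketch using CUDA events; `matmul` is assumed to be the function exported by your kernel module (for example, the sketch shown after the k-skewed entry).

```python
import torch


def median_ms(fn, warmup=10, iters=50):
    # Simple CUDA-event timer; reports the median, matching the evaluator's metric.
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()
    times = []
    for _ in range(iters):
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        fn()
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))
    times.sort()
    return times[len(times) // 2]


# Hypothetical local comparison on the transformer-like shapes listed above.
shapes = [(2048, 4096, 4096), (4096, 4096, 4096), (8192, 4096, 4096),
          (8192, 8192, 4096), (4096, 11008, 4096), (4096, 4096, 11008)]
for M, N, K in shapes:
    a = torch.randn(M, K, device="cuda", dtype=torch.float16)
    b = torch.randn(K, N, device="cuda", dtype=torch.float16)
    t = median_ms(lambda: matmul(a, b))  # matmul from your kernel module
    print(f"{M}x{N}x{K}: {t:.3f} ms")
```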
group_gemm
research
Group GEMM Optimization Problem ================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for Batched Matrix-Matrix Multiplication (BMM) on GPU. This problem focuses on implementing efficient batched matrix multiplication kernels using Triton's JIT compilation system. The challenge involves optimizing: - **Batched operations**: Efficient handling of multiple matrix pairs in a single kernel launch - **Memory access patterns**: Efficient loading and storing of batched matrix data - **Block tiling**: Optimal block sizes for GPU execution across different batch sizes - **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse batch sizes and matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def bmm(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor: """ Batched matrix multiplication. Args: A: Input tensor of shape (B, M, K) - batch of M×K matrices B: Input tensor of shape (B, K, N) - batch of K×N matrices Returns: Output tensor of shape (B, M, N) - batch of M×N result matrices """ pass ``` API Usage Notes --------------- - The evaluator looks for a `bmm` function in the module namespace - Function must handle tensor strides and memory layouts correctly - Must use Triton JIT compilation for kernel definition - Should leverage Triton's autotuning features for optimization - Kernel must handle variable batch sizes efficiently Common Pitfalls & Implementation Requirements --------------------------------------------- **Triton Autotune Keys:** - Autotune `key` parameter must only include actual kernel parameters (e.g., `["M", "N", "K"]`) - Do NOT include non-kernel parameters in the autotune key (e.g., `dtype_a`, `dtype_b`) - Example (correct): `@triton.autotune(configs=[...], key=["M", "N", "K"])` - Example (incorrect): `@triton.autotune(configs=[...], key=["M", "N", "K", "dtype_a"])` **Dtype Casting:** - Use `tl.float16` directly for output dtype casting: `acc.to(tl.float16)` (correct) - Do NOT use `tl.dtype_elementwise(C_ptr)` - this function doesn't exist in Triton 3.4.0 (incorrect) - The problem requires float16 output, so always cast accumulator to `tl.float16` **Kernel Parameters:** - Only pass actual kernel parameters as arguments to the kernel - Do NOT pass Python objects (like `dtype`) as keyword arguments unless they're defined as kernel parameters - Example (correct): `_bmm_kernel[grid](A, B, C, Batches, M, N, K, ..., BLOCK_M, BLOCK_N, BLOCK_K)` **Correctness Requirements:** - All tests must pass (correctness check) for any score > 0 - Output dtype must be float16 (match baseline behavior) - Output shape must be (B, M, N) where B is batch size **Kernel Implementation Pattern:** - Initialize pointers inside the K-loop for each iteration (computes pointers per K-slice) - Use proper boundary masking: `k_mask = (k + offs_k) < K` or `k_idxs = k0 + offs_k` with `k_idxs < K` - Load data and convert to float32 BEFORE 
accumulation: `a = tl.load(A_ptrs, mask=a_mask, other=0.0).to(tl.float32)` - Accumulate in float32: `acc += tl.dot(a, b)` where `acc` is `dtype=tl.float32` - Example K-loop pattern: ```python k0 = 0 while k0 < K: k_idxs = k0 + offs_k A_ptrs = A_batch_ptr + (offs_m[:, None] * stride_am) + (k_idxs[None, :] * stride_ak) B_ptrs = B_batch_ptr + (k_idxs[:, None] * stride_bk) + (offs_n[None, :] * stride_bn) a_mask = (offs_m[:, None] < M) & (k_idxs[None, :] < K) b_mask = (offs_n[None, :] < N) & (k_idxs[:, None] < K) a = tl.load(A_ptrs, mask=a_mask, other=0.0).to(tl.float32) b = tl.load(B_ptrs, mask=b_mask, other=0.0).to(tl.float32) acc += tl.dot(a, b) k0 += BLOCK_K ``` **Solution.solve() Method:** - Read file directly using `Path(__file__).read_text()` (correct) - Do NOT use `inspect.getsource(sys.modules[__name__])` - fails when module is dynamically loaded (incorrect) - Example (correct): ```python def solve(self, spec_path: Optional[str] = None) -> Dict[str, str]: from pathlib import Path current_file = Path(__file__).resolve() return {"code": current_file.read_text(encoding="utf-8")} ``` **Performance Optimization Tips:** - Use FP32 accumulator for numerical stability: `acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)` - Load data as float32: `a = tl.load(...).to(tl.float32)` - critical for correctness - Cast to float16 only at the end: `tl.store(c_ptrs, acc.to(tl.float16), mask=c_mask)` - Consider using autotune to find optimal block sizes and warp configurations - Test with warmup phase to stabilize GPU clocks before benchmarking Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(baseline_times / answer_times) raw_score = min(geometric_mean_speedup, 5.0) # Cap at 5x speedup score = (raw_score - 1.0) / 4.0 * 100 # Map 1x-5x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 25 points = 2x speedup over baseline - 50 points = 3x speedup over baseline - 75 points = 4x speedup over baseline - 100 points = 5x+ speedup over baseline Evaluation Details ------------------ - Tested on multiple batch sizes: B ∈ {64, 256, 1024} (default) - Fixed matrix dimensions: M=64, N=64, K=64 (configurable via metadata) - Can also test custom shapes specified in metadata - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support - All tests must pass for any score > 0
{dependencies: {uv_project: resources}, tag: hpc, runtime: {environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)", docker: {image: andylizf/triton-tlx:tlx-nv-cu122, gpu: true}}}
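Putting the pieces from the notes above together, here is a hedged end-to-end sketch: a batched kernel following the documented K-loop pattern (fp32 loads and accumulation, fp16 store) plus a `bmm` wrapper that launches one program per (batch, output tile). Block sizes and the grid layout are illustrative assumptions, not the reference solution.

```python
import torch
import triton
import triton.language as tl


@triton.autotune(
    configs=[
        triton.Config({"BLOCK_M": 64, "BLOCK_N": 64, "BLOCK_K": 32}, num_warps=4, num_stages=3),
        triton.Config({"BLOCK_M": 32, "BLOCK_N": 32, "BLOCK_K": 32}, num_warps=2, num_stages=3),
    ],
    key=["M", "N", "K"],
)
@triton.jit
def _bmm_kernel(
    A_ptr, B_ptr, C_ptr,
    M, N, K,
    stride_ab, stride_am, stride_ak,
    stride_bb, stride_bk, stride_bn,
    stride_cb, stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
):
    batch = tl.program_id(0)
    pid = tl.program_id(1)
    num_pid_n = tl.cdiv(N, BLOCK_N)
    pid_m = pid // num_pid_n
    pid_n = pid % num_pid_n

    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)

    A_batch_ptr = A_ptr + batch * stride_ab
    B_batch_ptr = B_ptr + batch * stride_bb

    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k0 in range(0, K, BLOCK_K):
        k_idxs = k0 + offs_k
        A_ptrs = A_batch_ptr + offs_m[:, None] * stride_am + k_idxs[None, :] * stride_ak
        B_ptrs = B_batch_ptr + k_idxs[:, None] * stride_bk + offs_n[None, :] * stride_bn
        a_mask = (offs_m[:, None] < M) & (k_idxs[None, :] < K)
        b_mask = (k_idxs[:, None] < K) & (offs_n[None, :] < N)
        # Convert to fp32 before the dot, as recommended in the notes above.
        a = tl.load(A_ptrs, mask=a_mask, other=0.0).to(tl.float32)
        b = tl.load(B_ptrs, mask=b_mask, other=0.0).to(tl.float32)
        acc += tl.dot(a, b)

    c_ptrs = C_ptr + batch * stride_cb + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    c_mask = (offs_m[:, None] < M) & (offs_n[None, :] < N)
    tl.store(c_ptrs, acc.to(tl.float16), mask=c_mask)


def bmm(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    Bsz, M, K = A.shape
    _, _, N = B.shape
    C = torch.empty((Bsz, M, N), device=A.device, dtype=torch.float16)
    # Axis 0: batch index; axis 1: flattened (m-tile, n-tile) index.
    grid = lambda meta: (Bsz, triton.cdiv(M, meta["BLOCK_M"]) * triton.cdiv(N, meta["BLOCK_N"]))
    _bmm_kernel[grid](
        A, B, C, M, N, K,
        A.stride(0), A.stride(1), A.stride(2),
        B.stride(0), B.stride(1), B.stride(2),
        C.stride(0), C.stride(1), C.stride(2),
    )
    return C
```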
imagenet_pareto/1m
research
ImageNet Pareto Optimization - 1M Parameter Variant =================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 1,000,000 parameters. Objective: Achieve the highest possible accuracy without exceeding the parameter constraint. Target ------ **Primary**: Maximize test accuracy **Secondary**: Maintain model efficiency (stay under parameter budget) API Specification ---------------- Implement a `Solution` class: ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None) -> torch.nn.Module: """ Train a model and return it. Args: train_loader: PyTorch DataLoader with training data val_loader: PyTorch DataLoader with validation data metadata: Dict with keys: - num_classes: int (128) - input_dim: int (384) - param_limit: int (1,000,000) - baseline_accuracy: float (0.8) - train_samples: int - val_samples: int - test_samples: int - device: str ("cpu") Returns: Trained torch.nn.Module ready for evaluation """ # Your implementation pass ``` **Implementation Requirements**: - Use `metadata["input_dim"]` and `metadata["num_classes"]` for model architecture - Keep model parameters <= 1,000,000 (hard constraint - models exceeding this receive 0 score) - Return a trained model ready for evaluation - Ensure model works with the provided device Parameter Constraint -------------------- **HARD LIMIT: 1,000,000 trainable parameters** - This is an absolute constraint enforced during evaluation - Models exceeding 1,000,000 parameters will receive a score of 0.0 - The constraint cannot be waived under any circumstances - You must design your architecture carefully to stay under this limit Example: A model with 1,000,001 parameters → Score 0.0 (constraint violated) Example: A model with 1,000,000 parameters → Score based on accuracy Baseline Accuracy ----------------- **Baseline Accuracy for this variant: 80%** - This is the expected performance level for a simple model at this parameter budget - Solutions must achieve accuracy **above** this baseline to receive a positive score - Accuracy **below** baseline results in 0 points - Accuracy improvements are scored linearly Scoring Formula --------------- The scoring is based purely on **linear accuracy scaling** from baseline to 100%: ``` If model exceeds parameter limit (1,000,000): Score = 0.0 (constraint violation) Else: Score = (accuracy - 0.8) / (1.0 - 0.8) × 100.0 Where: - accuracy = achieved test accuracy (0.0 to 1.0) - 0.8 = baseline accuracy for this variant - 1.0 = target (100% accuracy = 100 points) Score is clamped to [0, 100] range ``` **Linearly Scaled Scoring for 1M variant:** | Accuracy | Score | Notes | |----------|-------|-------| | 80.0% | 0 | At baseline (0 points) | | 85.0% | ~25 | 5% above baseline | | 90.0% | ~50 | 10% above baseline | | 95.0% | ~75 | 15% above baseline | | 100% | 100 | Perfect accuracy (max score) | Evaluation Process ------------------ The evaluator follows these steps: ### 1. Build Synthetic Dataset ```python # Generate synthetic ImageNet-like data train_loader, val_loader, test_loader = make_dataloaders() # Each sample: (384,) feature vector, label in [0, 127] ``` ### 2. Call Solution ```python from solution import Solution solution = Solution() model = solution.solve(train_loader, val_loader, metadata) # metadata contains: num_classes, input_dim, param_limit, baseline_accuracy, device ``` ### 3. 
Validate Model ```python param_count = sum(p.numel() for p in model.parameters() if p.requires_grad) if param_count > 1000000: score = 0.0 # Constraint violation ``` ### 4. Evaluate Accuracy ```python model.eval() correct = 0 total = 0 for inputs, targets in test_loader: outputs = model(inputs) preds = outputs.argmax(dim=1) correct += (preds == targets).sum().item() total += targets.numel() accuracy = correct / total ``` ### 5. Calculate Score ```python score = (accuracy - 0.8) / (1.0 - 0.8) * 100.0 score = max(0.0, min(100.0, score)) ``` Evaluation Details ------------------ - 128 classes, 384-dimensional feature vectors - Training: 2,048 samples (16 per class) - Validation: 512 samples (4 per class) - Test: 1,024 samples (8 per class) - Data generated synthetically with controlled noise Environment Details ------------------- - **Device**: CPU only (`device="cpu"`) - **Python Environment**: - Python 3 - PyTorch 2.2-2.4 - NumPy ≥1.24 - tqdm ≥4.64 - **Timeout**: 1 hour (3600 seconds) for entire evaluation Key Points ---------- 1. **Parameter Constraint is Hard**: Models exceeding 1,000,000 parameters always score 0 2. **Baseline is Lower Bound**: Must achieve 80%+ accuracy to score points 3. **Linear Scoring**: Every accuracy improvement scales linearly to the score 4. **100% is Target**: Achieving 100% accuracy gives full 100 points 5. **Accuracy is Primary**: Focus on accuracy within the parameter budget Example: Simple Baseline ------------------------- ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None): # Simple 2-layer MLP input_dim = metadata["input_dim"] # 384 num_classes = metadata["num_classes"] # 128 hidden_dim = 512 model = nn.Sequential( nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_classes) ) # Parameter count: 384*512 + 512 + 512*128 + 128 = ~262,784 # Simple training loop optimizer = torch.optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() for epoch in range(50): model.train() for inputs, targets in train_loader: optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return model ``` **Note**: This baseline achieves ~80% accuracy with ~263K parameters. To reach higher accuracy within the 1M budget, consider deeper networks, residual connections, or better optimization. Implementation Tips ------------------- - Monitor parameter count: `sum(p.numel() for p in model.parameters() if p.requires_grad)` - Gradually improve architecture while staying under budget - Use techniques like batch normalization, dropout, or residual connections - Higher capacity (more parameters) generally improves accuracy up to the limit Baseline Performance -------------------- - **Baseline Accuracy**: 80% - **Baseline Parameters**: Approximately 1,000,000 - This represents a simple model at this parameter budget
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/200k
research
ImageNet Pareto Optimization - 200K Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 200,000 parameters. Objective: Achieve the highest possible accuracy without exceeding the parameter constraint. Target ------ **Primary**: Maximize test accuracy **Secondary**: Maintain model efficiency (stay under parameter budget) API Specification ---------------- Implement a `Solution` class: ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None) -> torch.nn.Module: """ Train a model and return it. Args: train_loader: PyTorch DataLoader with training data val_loader: PyTorch DataLoader with validation data metadata: Dict with keys: - num_classes: int (128) - input_dim: int (384) - param_limit: int (200,000) - baseline_accuracy: float (0.65) - train_samples: int - val_samples: int - test_samples: int - device: str ("cpu") Returns: Trained torch.nn.Module ready for evaluation """ # Your implementation pass ``` **Implementation Requirements**: - Use `metadata["input_dim"]` and `metadata["num_classes"]` for model architecture - Keep model parameters <= 200,000 (hard constraint - models exceeding this receive 0 score) - Return a trained model ready for evaluation - Ensure model works with the provided device Parameter Constraint -------------------- **HARD LIMIT: 200,000 trainable parameters** - This is an absolute constraint enforced during evaluation - Models exceeding 200,000 parameters will receive a score of 0.0 - The constraint cannot be waived under any circumstances - You must design your architecture carefully to stay under this limit Example: A model with 200,001 parameters → Score 0.0 (constraint violated) Example: A model with 200,000 parameters → Score based on accuracy Baseline Accuracy ----------------- **Baseline Accuracy for this variant: 65%** - This is the expected performance level for a simple model at this parameter budget - Solutions must achieve accuracy **above** this baseline to receive a positive score - Accuracy **below** baseline results in 0 points - Accuracy improvements are scored linearly Scoring Formula --------------- The scoring is based purely on **linear accuracy scaling** from baseline to 100%: ``` If model exceeds parameter limit (200,000): Score = 0.0 (constraint violation) Else: Score = (accuracy - 0.65) / (1.0 - 0.65) × 100.0 Where: - accuracy = achieved test accuracy (0.0 to 1.0) - 0.65 = baseline accuracy for this variant - 1.0 = target (100% accuracy = 100 points) Score is clamped to [0, 100] range ``` **Linearly Scaled Scoring for 200K variant:** | Accuracy | Score | Notes | |----------|-------|-------| | 65.0% | 0 | At baseline (0 points) | | 70.0% | ~14 | 5% above baseline | | 75.0% | ~28 | 10% above baseline | | 80.0% | ~42 | 15% above baseline | | 100% | 100 | Perfect accuracy (max score) | Evaluation Process ------------------ The evaluator follows these steps: ### 1. Build Synthetic Dataset ```python # Generate synthetic ImageNet-like data train_loader, val_loader, test_loader = make_dataloaders() # Each sample: (384,) feature vector, label in [0, 127] ``` ### 2. Call Solution ```python from solution import Solution solution = Solution() model = solution.solve(train_loader, val_loader, metadata) # metadata contains: num_classes, input_dim, param_limit, baseline_accuracy, device ``` ### 3. 
Validate Model ```python param_count = sum(p.numel() for p in model.parameters() if p.requires_grad) if param_count > 200000: score = 0.0 # Constraint violation ``` ### 4. Evaluate Accuracy ```python model.eval() correct = 0 total = 0 for inputs, targets in test_loader: outputs = model(inputs) preds = outputs.argmax(dim=1) correct += (preds == targets).sum().item() total += targets.numel() accuracy = correct / total ``` ### 5. Calculate Score ```python score = (accuracy - 0.65) / (1.0 - 0.65) * 100.0 score = max(0.0, min(100.0, score)) ``` Evaluation Details ------------------ - 128 classes, 384-dimensional feature vectors - Training: 2,048 samples (16 per class) - Validation: 512 samples (4 per class) - Test: 1,024 samples (8 per class) - Data generated synthetically with controlled noise Environment Details ------------------- - **Device**: CPU only (`device="cpu"`) - **Python Environment**: - Python 3 - PyTorch 2.2-2.4 - NumPy ≥1.24 - tqdm ≥4.64 - **Timeout**: 1 hour (3600 seconds) for entire evaluation Key Points ---------- 1. **Parameter Constraint is Hard**: Models exceeding 200,000 parameters always score 0 2. **Baseline is Lower Bound**: Must achieve 65%+ accuracy to score points 3. **Linear Scoring**: Every accuracy improvement scales linearly to the score 4. **100% is Target**: Achieving 100% accuracy gives full 100 points 5. **Accuracy is Primary**: Focus on accuracy within the parameter budget Example: Simple Baseline ------------------------- ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None): # Simple 2-layer MLP input_dim = metadata["input_dim"] # 384 num_classes = metadata["num_classes"] # 128 hidden_dim = 256 model = nn.Sequential( nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_classes) ) # Parameter count: 384*256 + 256 + 256*128 + 128 = ~131,456 # Simple training loop optimizer = torch.optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() for epoch in range(50): model.train() for inputs, targets in train_loader: optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return model ``` **Note**: This baseline achieves ~65% accuracy with ~131K parameters. To reach higher accuracy within the 200K budget, consider deeper networks or better optimization. Implementation Tips ------------------- - Monitor parameter count: `sum(p.numel() for p in model.parameters() if p.requires_grad)` - Gradually improve architecture while staying under budget - Use techniques like batch normalization, dropout, or residual connections - Higher capacity (more parameters) generally improves accuracy up to the limit Baseline Performance -------------------- - **Baseline Accuracy**: 65% - **Baseline Parameters**: Approximately 200,000 - This represents a simple model at this parameter budget
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/2_5m
research
ImageNet Pareto Optimization - 2.5M Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 2,500,000 parameters. Objective: Achieve the highest possible accuracy without exceeding the parameter constraint. Target ------ **Primary**: Maximize test accuracy **Secondary**: Maintain model efficiency (stay under parameter budget) API Specification ---------------- Implement a `Solution` class: ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None) -> torch.nn.Module: """ Train a model and return it. Args: train_loader: PyTorch DataLoader with training data val_loader: PyTorch DataLoader with validation data metadata: Dict with keys: - num_classes: int (128) - input_dim: int (384) - param_limit: int (2,500,000) - baseline_accuracy: float (0.85) - train_samples: int - val_samples: int - test_samples: int - device: str ("cpu") Returns: Trained torch.nn.Module ready for evaluation """ # Your implementation pass ``` **Implementation Requirements**: - Use `metadata["input_dim"]` and `metadata["num_classes"]` for model architecture - Keep model parameters <= 2,500,000 (hard constraint - models exceeding this receive 0 score) - Return a trained model ready for evaluation - Ensure model works with the provided device Parameter Constraint -------------------- **HARD LIMIT: 2,500,000 trainable parameters** - This is an absolute constraint enforced during evaluation - Models exceeding 2,500,000 parameters will receive a score of 0.0 - The constraint cannot be waived under any circumstances - You must design your architecture carefully to stay under this limit Example: A model with 2,500,001 parameters → Score 0.0 (constraint violated) Example: A model with 2,500,000 parameters → Score based on accuracy Baseline Accuracy ----------------- **Baseline Accuracy for this variant: 85%** - This is the expected performance level for a simple model at this parameter budget - Solutions must achieve accuracy **above** this baseline to receive a positive score - Accuracy **below** baseline results in 0 points - Accuracy improvements are scored linearly Scoring Formula --------------- The scoring is based purely on **linear accuracy scaling** from baseline to 100%: ``` If model exceeds parameter limit (2,500,000): Score = 0.0 (constraint violation) Else: Score = (accuracy - 0.85) / (1.0 - 0.85) × 100.0 Where: - accuracy = achieved test accuracy (0.0 to 1.0) - 0.85 = baseline accuracy for this variant - 1.0 = target (100% accuracy = 100 points) Score is clamped to [0, 100] range ``` **Linearly Scaled Scoring for 2.5M variant:** | Accuracy | Score | Notes | |----------|-------|-------| | 85.0% | 0 | At baseline (0 points) | | 90.0% | ~33 | 5% above baseline | | 95.0% | ~66 | 10% above baseline | | 100.0% | ~100 | 15% above baseline | | 100% | 100 | Perfect accuracy (max score) | Evaluation Process ------------------ The evaluator follows these steps: ### 1. Build Synthetic Dataset ```python # Generate synthetic ImageNet-like data train_loader, val_loader, test_loader = make_dataloaders() # Each sample: (384,) feature vector, label in [0, 127] ``` ### 2. Call Solution ```python from solution import Solution solution = Solution() model = solution.solve(train_loader, val_loader, metadata) # metadata contains: num_classes, input_dim, param_limit, baseline_accuracy, device ``` ### 3. 
Validate Model ```python param_count = sum(p.numel() for p in model.parameters() if p.requires_grad) if param_count > 2500000: score = 0.0 # Constraint violation ``` ### 4. Evaluate Accuracy ```python model.eval() correct = 0 total = 0 for inputs, targets in test_loader: outputs = model(inputs) preds = outputs.argmax(dim=1) correct += (preds == targets).sum().item() total += targets.numel() accuracy = correct / total ``` ### 5. Calculate Score ```python score = (accuracy - 0.85) / (1.0 - 0.85) * 100.0 score = max(0.0, min(100.0, score)) ``` Evaluation Details ------------------ - 128 classes, 384-dimensional feature vectors - Training: 2,048 samples (16 per class) - Validation: 512 samples (4 per class) - Test: 1,024 samples (8 per class) - Data generated synthetically with controlled noise Environment Details ------------------- - **Device**: CPU only (`device="cpu"`) - **Python Environment**: - Python 3 - PyTorch 2.2-2.4 - NumPy ≥1.24 - tqdm ≥4.64 - **Timeout**: 1 hour (3600 seconds) for entire evaluation Key Points ---------- 1. **Parameter Constraint is Hard**: Models exceeding 2,500,000 parameters always score 0 2. **Baseline is Lower Bound**: Must achieve 85%+ accuracy to score points 3. **Linear Scoring**: Every accuracy improvement scales linearly to the score 4. **100% is Target**: Achieving 100% accuracy gives full 100 points 5. **Accuracy is Primary**: Focus on accuracy within the parameter budget Example: Simple Baseline ------------------------- ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None): # Simple 3-layer MLP input_dim = metadata["input_dim"] # 384 num_classes = metadata["num_classes"] # 128 hidden_dim = 1024 model = nn.Sequential( nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_classes) ) # Parameter count: 384*1024 + 1024 + 1024*1024 + 1024 + 1024*128 + 128 = ~1,577,728 # Simple training loop optimizer = torch.optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() for epoch in range(50): model.train() for inputs, targets in train_loader: optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return model ``` **Note**: This baseline achieves ~85% accuracy with ~1.58M parameters. To reach higher accuracy within the 2.5M budget, consider deeper networks or better optimization. Implementation Tips ------------------- - Monitor parameter count: `sum(p.numel() for p in model.parameters() if p.requires_grad)` - Gradually improve architecture while staying under budget - Use techniques like batch normalization, dropout, or residual connections - Higher capacity (more parameters) generally improves accuracy up to the limit Baseline Performance -------------------- - **Baseline Accuracy**: 85% - **Baseline Parameters**: Approximately 2,500,000 - This represents a simple model at this parameter budget
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/500k
research
ImageNet Pareto Optimization - 500K Parameter Variant ===================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 500,000 parameters. Objective: Achieve the highest possible accuracy without exceeding the parameter constraint. Target ------ **Primary**: Maximize test accuracy **Secondary**: Maintain model efficiency (stay under parameter budget) API Specification ---------------- Implement a `Solution` class: ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None) -> torch.nn.Module: """ Train a model and return it. Args: train_loader: PyTorch DataLoader with training data val_loader: PyTorch DataLoader with validation data metadata: Dict with keys: - num_classes: int (128) - input_dim: int (384) - param_limit: int (500,000) - baseline_accuracy: float (0.72) - train_samples: int - val_samples: int - test_samples: int - device: str ("cpu") Returns: Trained torch.nn.Module ready for evaluation """ # Your implementation pass ``` **Implementation Requirements**: - Use `metadata["input_dim"]` and `metadata["num_classes"]` for model architecture - Keep model parameters <= 500,000 (hard constraint - models exceeding this receive 0 score) - Return a trained model ready for evaluation - Ensure model works with the provided device Parameter Constraint -------------------- **HARD LIMIT: 500,000 trainable parameters** - This is an absolute constraint enforced during evaluation - Models exceeding 500,000 parameters will receive a score of 0.0 - The constraint cannot be waived under any circumstances - You must design your architecture carefully to stay under this limit Example: A model with 500,001 parameters → Score 0.0 (constraint violated) Example: A model with 500,000 parameters → Score based on accuracy Baseline Accuracy ----------------- **Baseline Accuracy for this variant: 72%** - This is the expected performance level for a simple model at this parameter budget - Solutions must achieve accuracy **above** this baseline to receive a positive score - Accuracy **below** baseline results in 0 points - Accuracy improvements are scored linearly Scoring Formula --------------- The scoring is based purely on **linear accuracy scaling** from baseline to 100%: ``` If model exceeds parameter limit (500,000): Score = 0.0 (constraint violation) Else: Score = (accuracy - 0.72) / (1.0 - 0.72) × 100.0 Where: - accuracy = achieved test accuracy (0.0 to 1.0) - 0.72 = baseline accuracy for this variant - 1.0 = target (100% accuracy = 100 points) Score is clamped to [0, 100] range ``` **Linearly Scaled Scoring for 500K variant:** | Accuracy | Score | Notes | |----------|-------|-------| | 72.0% | 0 | At baseline (0 points) | | 77.0% | ~17 | 5% above baseline | | 82.0% | ~35 | 10% above baseline | | 87.0% | ~53 | 15% above baseline | | 100% | 100 | Perfect accuracy (max score) | Evaluation Process ------------------ The evaluator follows these steps: ### 1. Build Synthetic Dataset ```python # Generate synthetic ImageNet-like data train_loader, val_loader, test_loader = make_dataloaders() # Each sample: (384,) feature vector, label in [0, 127] ``` ### 2. Call Solution ```python from solution import Solution solution = Solution() model = solution.solve(train_loader, val_loader, metadata) # metadata contains: num_classes, input_dim, param_limit, baseline_accuracy, device ``` ### 3. 
Validate Model ```python param_count = sum(p.numel() for p in model.parameters() if p.requires_grad) if param_count > 500000: score = 0.0 # Constraint violation ``` ### 4. Evaluate Accuracy ```python model.eval() correct = 0 total = 0 for inputs, targets in test_loader: outputs = model(inputs) preds = outputs.argmax(dim=1) correct += (preds == targets).sum().item() total += targets.numel() accuracy = correct / total ``` ### 5. Calculate Score ```python score = (accuracy - 0.72) / (1.0 - 0.72) * 100.0 score = max(0.0, min(100.0, score)) ``` Evaluation Details ------------------ - 128 classes, 384-dimensional feature vectors - Training: 2,048 samples (16 per class) - Validation: 512 samples (4 per class) - Test: 1,024 samples (8 per class) - Data generated synthetically with controlled noise Environment Details ------------------- - **Device**: CPU only (`device="cpu"`) - **Python Environment**: - Python 3 - PyTorch 2.2-2.4 - NumPy ≥1.24 - tqdm ≥4.64 - **Timeout**: 1 hour (3600 seconds) for entire evaluation Key Points ---------- 1. **Parameter Constraint is Hard**: Models exceeding 500,000 parameters always score 0 2. **Baseline is Lower Bound**: Must achieve 72%+ accuracy to score points 3. **Linear Scoring**: Every accuracy improvement scales linearly to the score 4. **100% is Target**: Achieving 100% accuracy gives full 100 points 5. **Accuracy is Primary**: Focus on accuracy within the parameter budget Example: Simple Baseline ------------------------- ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None): # Simple 2-layer MLP input_dim = metadata["input_dim"] # 384 num_classes = metadata["num_classes"] # 128 hidden_dim = 384 model = nn.Sequential( nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_classes) ) # Parameter count: 384*384 + 384 + 384*128 + 128 = ~196,992 # Simple training loop optimizer = torch.optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() for epoch in range(50): model.train() for inputs, targets in train_loader: optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return model ``` **Note**: This baseline achieves ~72% accuracy with ~197K parameters. To reach higher accuracy within the 500K budget, consider deeper networks or better optimization. Implementation Tips ------------------- - Monitor parameter count: `sum(p.numel() for p in model.parameters() if p.requires_grad)` - Gradually improve architecture while staying under budget - Use techniques like batch normalization, dropout, or residual connections - Higher capacity (more parameters) generally improves accuracy up to the limit Baseline Performance -------------------- - **Baseline Accuracy**: 72% - **Baseline Parameters**: Approximately 500,000 - This represents a simple model at this parameter budget
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
imagenet_pareto/5m
research
ImageNet Pareto Optimization - 5M Parameter Variant =================================================== Problem Setting --------------- Train a neural network on a synthetic ImageNet-like dataset to maximize accuracy while staying within a parameter budget of 5,000,000 parameters. Objective: Achieve the highest possible accuracy without exceeding the parameter constraint. Target ------ **Primary**: Maximize test accuracy **Secondary**: Maintain model efficiency (stay under parameter budget) API Specification ---------------- Implement a `Solution` class: ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None) -> torch.nn.Module: """ Train a model and return it. Args: train_loader: PyTorch DataLoader with training data val_loader: PyTorch DataLoader with validation data metadata: Dict with keys: - num_classes: int (128) - input_dim: int (384) - param_limit: int (5,000,000) - baseline_accuracy: float (0.88) - train_samples: int - val_samples: int - test_samples: int - device: str ("cpu") Returns: Trained torch.nn.Module ready for evaluation """ # Your implementation pass ``` **Implementation Requirements**: - Use `metadata["input_dim"]` and `metadata["num_classes"]` for model architecture - Keep model parameters <= 5,000,000 (hard constraint - models exceeding this receive 0 score) - Return a trained model ready for evaluation - Ensure model works with the provided device Parameter Constraint -------------------- **HARD LIMIT: 5,000,000 trainable parameters** - This is an absolute constraint enforced during evaluation - Models exceeding 5,000,000 parameters will receive a score of 0.0 - The constraint cannot be waived under any circumstances - You must design your architecture carefully to stay under this limit Example: A model with 5,000,001 parameters → Score 0.0 (constraint violated) Example: A model with 5,000,000 parameters → Score based on accuracy Baseline Accuracy ----------------- **Baseline Accuracy for this variant: 88%** - This is the expected performance level for a simple model at this parameter budget - Solutions must achieve accuracy **above** this baseline to receive a positive score - Accuracy **below** baseline results in 0 points - Accuracy improvements are scored linearly Scoring Formula --------------- The scoring is based purely on **linear accuracy scaling** from baseline to 100%: ``` If model exceeds parameter limit (5,000,000): Score = 0.0 (constraint violation) Else: Score = (accuracy - 0.88) / (1.0 - 0.88) × 100.0 Where: - accuracy = achieved test accuracy (0.0 to 1.0) - 0.88 = baseline accuracy for this variant - 1.0 = target (100% accuracy = 100 points) Score is clamped to [0, 100] range ``` **Linearly Scaled Scoring for 5M variant:** | Accuracy | Score | Notes | |----------|-------|-------| | 88.0% | 0 | At baseline (0 points) | | 93.0% | ~41 | 5% above baseline | | 98.0% | ~83 | 10% above baseline | | 103.0% | ~125 | 15% above baseline | | 100% | 100 | Perfect accuracy (max score) | Evaluation Process ------------------ The evaluator follows these steps: ### 1. Build Synthetic Dataset ```python # Generate synthetic ImageNet-like data train_loader, val_loader, test_loader = make_dataloaders() # Each sample: (384,) feature vector, label in [0, 127] ``` ### 2. Call Solution ```python from solution import Solution solution = Solution() model = solution.solve(train_loader, val_loader, metadata) # metadata contains: num_classes, input_dim, param_limit, baseline_accuracy, device ``` ### 3. 
Validate Model ```python param_count = sum(p.numel() for p in model.parameters() if p.requires_grad) if param_count > 5000000: score = 0.0 # Constraint violation ``` ### 4. Evaluate Accuracy ```python model.eval() correct = 0 total = 0 for inputs, targets in test_loader: outputs = model(inputs) preds = outputs.argmax(dim=1) correct += (preds == targets).sum().item() total += targets.numel() accuracy = correct / total ``` ### 5. Calculate Score ```python score = (accuracy - 0.88) / (1.0 - 0.88) * 100.0 score = max(0.0, min(100.0, score)) ``` Evaluation Details ------------------ - 128 classes, 384-dimensional feature vectors - Training: 2,048 samples (16 per class) - Validation: 512 samples (4 per class) - Test: 1,024 samples (8 per class) - Data generated synthetically with controlled noise Environment Details ------------------- - **Device**: CPU only (`device="cpu"`) - **Python Environment**: - Python 3 - PyTorch 2.2-2.4 - NumPy ≥1.24 - tqdm ≥4.64 - **Timeout**: 1 hour (3600 seconds) for entire evaluation Key Points ---------- 1. **Parameter Constraint is Hard**: Models exceeding 5,000,000 parameters always score 0 2. **Baseline is Lower Bound**: Must achieve 88%+ accuracy to score points 3. **Linear Scoring**: Every accuracy improvement scales linearly to the score 4. **100% is Target**: Achieving 100% accuracy gives full 100 points 5. **Accuracy is Primary**: Focus on accuracy within the parameter budget Example: Simple Baseline ------------------------- ```python import torch import torch.nn as nn class Solution: def solve(self, train_loader, val_loader, metadata: dict = None): # Simple 4-layer MLP input_dim = metadata["input_dim"] # 384 num_classes = metadata["num_classes"] # 128 hidden_dim = 1536 model = nn.Sequential( nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_classes) ) # Parameter count: ~4.9M # Simple training loop optimizer = torch.optim.Adam(model.parameters(), lr=0.001) criterion = nn.CrossEntropyLoss() for epoch in range(50): model.train() for inputs, targets in train_loader: optimizer.zero_grad() outputs = model(inputs) loss = criterion(outputs, targets) loss.backward() optimizer.step() return model ``` **Note**: This baseline achieves ~88% accuracy with ~4.9M parameters. To reach higher accuracy within the 5M budget, consider deeper networks or better optimization. Implementation Tips ------------------- - Monitor parameter count: `sum(p.numel() for p in model.parameters() if p.requires_grad)` - Gradually improve architecture while staying under budget - Use techniques like batch normalization, dropout, or residual connections - Higher capacity (more parameters) generally improves accuracy up to the limit Baseline Performance -------------------- - **Baseline Accuracy**: 88% - **Baseline Parameters**: Approximately 5,000,000 - This represents a simple model at this parameter budget
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 3600 }, "tag": "ai" }
llm_router
research
LLM Router ================================ Overview -------- This benchmark evaluates a language model's ability to implement an LLM routing policy. Given a user query, the router must choose one model from a small candidate set with different cost–quality tradeoffs. The goal is to maximize accuracy while minimizing inference cost. The task is fully offline: model correctness and costs are precomputed. The router must generalize from query text alone. Problem Setting -------- You operate a router that sits in front of a pool of large language models (LLMs). For each incoming query q, the router must select exactly one model from a fixed candidate set: ["cheap", "mid", "expensive"]. These are abstract routing tiers. Each tier corresponds to a concrete LLM with a known cost and accuracy profile, but this mapping is not visible to the router. Intuitively: - cheap: fast and inexpensive, but less reliable - mid: moderate cost and accuracy - expensive: highest accuracy, highest cost No single model is optimal for all queries. You have access to a reference dataset of queries, each labeled with which concrete LLMs produced correct answers and their costs. During evaluation, the router must generalize to unseen queries, selecting the best model from the candidate set based on the query text alone. You are allowed to develop heuristics or machine learning models to implement the routing policy. However, the solution must be stateless: each query is handled independently without memory of previous queries. Target -------- The goal is to achieve high accuracy while minimizing average inference cost. API Specification -------- Implement a `Solution` class: ```python class Solution: def solve(self, query: str, eval_name: str, candidate_models: list[str] ) -> str: """ Select exactly one routing option for the given query. Args: query: The user query. eval_name: The dataset or task name (e.g., "mbpp"). candidate_models: A list of available routing options (["cheap", "mid", "expensive"] by default). Returns: A single string from candidate_models indicating the chosen model. """ ``` **Constraints**: - The return value must be an element of candidate_models. - The method is called once per query. - The solution must be stateless across queries. - External API calls and internet access are not allowed. Returning an invalid value results in a score of 0 for that query. Dataset -------- You will be provided with a dataset of queries, each associated with multiple concrete LLMs, whether they generate correct answers, and costs. During evaluation, there will be a separate evaluation dataset. For each query in this dataset, the router receives only: - query - eval_name - candidate_models One example mapping of routing tiers to concrete LLMs is: - "cheap": "mistralai/mistral-7b-chat", - "mid": "mistralai/mixtral-8x7b-chat", - "expensive": "gpt-4-1106-preview". Scoring (0-100) -------- The router is evaluated on a fixed set of queries. For each query: - The evaluator calls Solution.solve(...). - The chosen model's correctness and cost are looked up. - Accuracy and cost are accumulated. Let: - accuracy = fraction of queries answered correctly - avg_cost = average inference cost per query The raw score is computed as: raw_score = accuracy − λ × avg_cost, where λ = 150.0. Naively guessing "cheap"/"mid"/"expensive" all the time is expected to yield a uniformly low score. The final benchmark score is normalized to the range [0, 100], where the oracle router always gets 100. 
Reference Dataset -------- The reference dataset is provided as a CSV file that your solution can read at runtime: ```python import pandas as pd import os # Get the directory where this solution file is located # The resources/ folder is in the problem directory problem_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) data_path = os.path.join(problem_dir, "resources", "reference_data.csv") # Or simply use relative path (current working directory is the problem directory): data_path = "resources/reference_data.csv" df = pd.read_csv(data_path) ``` **Columns:** - `sample_id`: Unique identifier (e.g., "mmlu-sociology.val.78") - `prompt`: The query text (may contain newlines, escaped as \n) - `eval_name`: Dataset/task name (e.g., "mbpp", "mmlu-sociology", "hellaswag") - `{model_name}`: Correctness score (0.0 or 1.0) for each LLM - `{model_name}|model_response`: The actual response text from each LLM - `{model_name}|total_cost`: Inference cost for each LLM - `oracle_model_to_route_to`: The optimal model for this query **Models in dataset:** - WizardLM/WizardLM-13B-V1.2 - claude-instant-v1, claude-v1, claude-v2 - gpt-3.5-turbo-1106, gpt-4-1106-preview - meta/code-llama-instruct-34b-chat, meta/llama-2-70b-chat - mistralai/mistral-7b-chat, mistralai/mixtral-8x7b-chat - zero-one-ai/Yi-34B-Chat **Example row (key columns only):** ``` sample_id: mmlu-sociology.val.78 prompt: "['Please answer with the letter...Which of the following best describes...?\nA) Ethnocentrism\nB) Institutionalization\nC) Stereotyping\nD) Scapegoating\n...']" eval_name: mmlu-sociology # Correctness (1.0 = correct, 0.0 = wrong): mistralai/mistral-7b-chat: 1.0 mistralai/mixtral-8x7b-chat: 1.0 gpt-4-1106-preview: 1.0 WizardLM/WizardLM-13B-V1.2: 1.0 # Costs: mistralai/mistral-7b-chat|total_cost: 1.74e-05 mistralai/mixtral-8x7b-chat|total_cost: 6.75e-05 gpt-4-1106-preview|total_cost: 0.00088 oracle_model_to_route_to: mistralai/mistral-7b-chat ``` In this example, all models answered correctly, but mistral-7b-chat has the lowest cost, so it's the oracle choice.
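A minimal routing sketch, only to make the `Solution` contract concrete. The length cutoff and the `eval_name` check below are illustrative assumptions, not tuned values; a stronger policy would fit such thresholds (or a small text classifier) offline on `reference_data.csv`.

```python
class Solution:
    def solve(self, query: str, eval_name: str, candidate_models: list[str]) -> str:
        # Illustrative heuristic only (assumed thresholds): code-generation tasks go to
        # the mid tier, short queries to the cheap tier, everything else to expensive.
        def pick(preferred: str) -> str:
            return preferred if preferred in candidate_models else candidate_models[0]

        if eval_name.startswith("mbpp"):   # assumed: code tasks need more capability
            return pick("mid")
        if len(query) < 400:               # assumed length cutoff
            return pick("cheap")
        return pick("expensive")
```

The sketch is stateless, as required, and always returns an element of `candidate_models`.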
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "ai" }
llm_sql/large
research
Problem Setting --------------- Consider a CSV file with $N$ rows and $M$ columns, where $M \leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \ldots, A[i,M]$, we construct the input string: ```math S_i = \text{Concat}(\text{string}(A[i,1]), \text{string}(A[i,2]), \ldots, \text{string}(A[i,M])) ```` When requesting $S_i$ for $i > 1$, the prefix KV-cache hit rate depends on the longest common prefix with any previously seen request: ```math \text{hit\_rate}_i = \frac{\max_{1 \le j < i} \text{LCP}(S_i, S_j)}{|S_i|} ``` where $LCP(S, T)$ is the length of the longest common prefix between strings $S$ and $T$. You are allowed to reorder the CSV columns. Let $p$ be a permutation of $\{1, 2, ..., M\}$. The reordered string for row $i$ becomes: ```math S'_i = \text{Concat}(\text{string}(A[i,p_1]), \text{string}(A[i,p_2]), \ldots, \text{string}(A[i,p_M])) ``` The goal is to choose a permutation $p$ that maximizes the overall KV-cache hit rate: ```math \max_p\; \frac{\sum_{i=2}^N \max_{1 \le j < i} \text{LCP}(S'_i, S'_j)} {\sum_{i=1}^N |S'_i|} ``` Target --- Maximize prefix hit rate shown above (higher is better) - **Hard Constraint**: Average runtime per dataset must be $\leq 10$ seconds (score = 0 if exceeded) and correctly handle column merge constraint. **Column Merges**: - Column merge specs are provided per dataset - Columns in each merge group are concatenated into a single column - The merged column replaces the original columns - Merge operations are applied before column reordering API Specification --- Implement a `Solution` class: ```python import pandas as pd class Solution: def solve( self, df: pd.DataFrame, early_stop: int = 100000, row_stop: int = 4, col_stop: int = 2, col_merge: list = None, one_way_dep: list = None, distinct_value_threshold: float = 0.7, parallel: bool = True, ) -> pd.DataFrame: """ Reorder columns in the DataFrame to maximize prefix hit rate. Args: df: Input DataFrame to optimize early_stop: Early stopping parameter (default: 100000) row_stop: Row stopping parameter (default: 4) col_stop: Column stopping parameter (default: 2) col_merge: List of column groups to merge (columns in each group are merged into one) one_way_dep: List of one-way dependencies (not used in this variant) distinct_value_threshold: Threshold for distinct values (default: 0.7) parallel: Whether to use parallel processing (default: True) Returns: DataFrame with reordered columns (same rows, different column order) """ # Your implementation pass ``` **Evaluation Process**: 1. Column merges are applied if specified 2. Your `solve()` method reorders the remaining columns 3. 
Rows are concatenated (no spaces) and prefix hit rate is calculated Scoring (0-100) --- baseline_hit_rate = Average prefix hit rate using original column order (0-point anchor) avg_hit_rate = Your solution's average prefix hit rate across all datasets For each dataset: dataset_score = ((hit_rate - baseline_hit_rate) / (1.0 - baseline_hit_rate)) × 100 final_score = Average of individual dataset scores Score is clamped to [0, 100] range **Runtime Constraint**: - Average runtime per dataset must be ≤ 10 seconds - If average runtime exceeds 10 seconds, score = 0.0 **Scoring Examples**: - baseline_hit_rate = 0.0 (worst), avg_hit_rate = 1.0 (perfect) → Score = 100 - baseline_hit_rate = 0.5, avg_hit_rate = 0.5 → Score = 0 - baseline_hit_rate = 0.5, avg_hit_rate = 0.75 → Score = 50 - baseline_hit_rate = 0.5, avg_hit_rate = 1.0 → Score = 100 Implementation Notes --- - Row values are concatenated without spaces: `"".join(row.values)` - Column reordering should optimize for maximum prefix overlap in the concatenated string representation - Consider column dependencies, distinct value distributions, and merge requirements when reordering - Large datasets with $M > 10$ columns require efficient algorithms due to larger search space - In our larger dataset, $50k \leq N \leq 100k$ and $4 \leq M \leq 9$ **Example input** please ignore the $> 10$ column number here --- ```csv ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month 1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1 2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1 3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0 4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0 5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0 ... ``` **Example output** --- ``` ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month 1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1 2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1 3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0 4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0 5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0 ``` ($p$ = $1, 2, \ldots M$)
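A minimal greedy sketch of the `Solution` interface, under two assumptions: the merges listed in `col_merge` have already been applied to `df` before `solve()` is called (step 1 of the evaluation process above), and placing low-cardinality columns first tends to lengthen shared prefixes. A competitive solution would refine the order with sampled hit-rate search; this only illustrates the input/output contract.

```python
import pandas as pd

class Solution:
    def solve(self, df: pd.DataFrame, early_stop: int = 100000, row_stop: int = 4,
              col_stop: int = 2, col_merge: list = None, one_way_dep: list = None,
              distinct_value_threshold: float = 0.7, parallel: bool = True) -> pd.DataFrame:
        # Greedy heuristic (assumption): columns with fewer distinct values go first,
        # so consecutive rows are more likely to share a long common prefix.
        order = sorted(df.columns, key=lambda c: df[c].astype(str).nunique())
        return df[order]
```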
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 1800 }, "tag": "db" }
llm_sql/small
research
Problem Setting --------------- Consider a CSV file with $N$ rows and $M$ columns, where $M \leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \ldots, A[i,M]$, we construct the input string: ```math S_i = \text{Concat}(\text{string}(A[i,1]), \text{string}(A[i,2]), \ldots, \text{string}(A[i,M])) ```` When requesting $S_i$ for $i > 1$, the prefix KV-cache hit rate depends on the longest common prefix with any previously seen request: ```math \text{hit\_rate}_i = \frac{\max_{1 \le j < i} \text{LCP}(S_i, S_j)}{|S_i|} ``` where $LCP(S, T)$ is the length of the longest common prefix between strings $S$ and $T$. You are allowed to reorder the CSV columns. Let $p$ be a permutation of $\{1, 2, ..., M\}$. The reordered string for row $i$ becomes: ```math S'_i = \text{Concat}(\text{string}(A[i,p_1]), \text{string}(A[i,p_2]), \ldots, \text{string}(A[i,p_M])) ``` The goal is to choose a permutation $p$ that maximizes the overall KV-cache hit rate: ```math \max_p\; \frac{\sum_{i=2}^N \max_{1 \le j < i} \text{LCP}(S'_i, S'_j)} {\sum_{i=1}^N |S'_i|} ``` Target --- Maximize prefix hit rate shown above (higher is better) - **Hard Constraint**: Average runtime per dataset must be $\leq 10$ seconds (score = 0 if exceeded) and correctly handle column merge constraint. **Column Merges**: - Column merge specs are provided per dataset - Columns in each merge group are concatenated into a single column - The merged column replaces the original columns - Merge operations are applied before column reordering API Specification --- Implement a `Solution` class: ```python import pandas as pd class Solution: def solve( self, df: pd.DataFrame, early_stop: int = 100000, row_stop: int = 4, col_stop: int = 2, col_merge: list = None, one_way_dep: list = None, distinct_value_threshold: float = 0.7, parallel: bool = True, ) -> pd.DataFrame: """ Reorder columns in the DataFrame to maximize prefix hit rate. Args: df: Input DataFrame to optimize early_stop: Early stopping parameter (default: 100000) row_stop: Row stopping parameter (default: 4) col_stop: Column stopping parameter (default: 2) col_merge: List of column groups to merge (columns in each group are merged into one) one_way_dep: List of one-way dependencies (not used in this variant) distinct_value_threshold: Threshold for distinct values (default: 0.7) parallel: Whether to use parallel processing (default: True) Returns: DataFrame with reordered columns (same rows, different column order) """ # Your implementation pass ``` **Evaluation Process**: 1. Column merges are applied if specified 2. Your `solve()` method reorders the remaining columns 3. 
Rows are concatenated (no spaces) and prefix hit rate is calculated Scoring (0-100) --- baseline_hit_rate = Average prefix hit rate using original column order (0-point anchor) avg_hit_rate = Your solution's average prefix hit rate across all datasets For each dataset: dataset_score = ((hit_rate - baseline_hit_rate) / (1.0 - baseline_hit_rate)) × 100 final_score = Average of individual dataset scores Score is clamped to [0, 100] range **Runtime Constraint**: - Average runtime per dataset must be ≤ 10 seconds - If average runtime exceeds 10 seconds, score = 0.0 **Scoring Examples**: - baseline_hit_rate = 0.0 (worst), avg_hit_rate = 1.0 (perfect) → Score = 100 - baseline_hit_rate = 0.5, avg_hit_rate = 0.5 → Score = 0 - baseline_hit_rate = 0.5, avg_hit_rate = 0.75 → Score = 50 - baseline_hit_rate = 0.5, avg_hit_rate = 1.0 → Score = 100 Implementation Notes --- - Row values are concatenated without spaces: `"".join(row.values)` - Column reordering should optimize for maximum prefix overlap in the concatenated string representation - Consider column dependencies, distinct value distributions, and merge requirements when reordering - Large datasets with $M > 10$ columns require efficient algorithms due to larger search space - In our smaller dataset, $15k \leq N \leq 28k$ and $4 \leq M \leq 9$ **Example input** please ignore the $> 10$ column number here --- ```csv ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month 1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1 2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1 3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0 4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0 5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0 ... ``` **Example output** --- ``` ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month 1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1 2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1 3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0 4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0 5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0 ``` ($p$ = $1, 2, \ldots M$)
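For local experimentation it helps to reproduce the objective directly. Below is a sketch of the hit-rate computation described above, assuming values are stringified with `str()` before concatenation; it is quadratic in the number of rows, so apply it to row samples rather than the full 15k-28k-row datasets.

```python
import pandas as pd

def prefix_hit_rate(df: pd.DataFrame) -> float:
    # Concatenate each row without separators, then credit each row with its longest
    # common prefix against any earlier row (the first row contributes no hit).
    rows = ["".join(str(v) for v in r) for r in df.itertuples(index=False, name=None)]
    total = sum(len(s) for s in rows)
    hit, seen = 0, []
    for s in rows:
        best = 0
        for t in seen:
            n = 0
            while n < min(len(s), len(t)) and s[n] == t[n]:
                n += 1
            best = max(best, n)
        hit += best
        seen.append(s)
    return hit / total if total else 0.0
```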
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "timeout_seconds": 1800 }, "tag": "db" }
mamba2_scan
research
Mamba2 Scan Optimization Problem ================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for Mamba2 scan computation on GPU. This problem focuses on implementing efficient sequential scan operations using chunked parallelism with Triton's JIT compilation system. The challenge involves optimizing: - **Sequential scan computation**: Efficient computation of y_t = a_t * y_{t-1} + b_t * x_t - **Chunked parallelism**: Processing sequences in chunks to enable parallelism while maintaining correctness - **State management**: Efficiently managing and propagating state between chunks - **Memory access patterns**: Efficient loading and storing of X, A, B tensors and state - **Block tiling**: Optimal block sizes for GPU execution across different sequence lengths - **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse sequence lengths and feature dimensions - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def chunk_scan(X: torch.Tensor, A: torch.Tensor, B: torch.Tensor, chunk: int = 128, BD: int = 128) -> torch.Tensor: """ Mamba2 chunked scan computation. Args: X: Input tensor of shape (L, D) - input sequence (float16) A: Input tensor of shape (L, D) - decay factors (float16) B: Input tensor of shape (L, D) - input weights (float16) chunk: Chunk size for parallel processing (default 128) BD: Block dimension for feature dimension tiling (default 128) Returns: Output tensor of shape (L, D) - scan output (float16) """ # Your implementation pass ``` Input Specifications -------------------- - **X**: Input tensor of shape `(L, D)` where: - `L`: Sequence length (tested with 2048, 4096) - `D`: Feature dimension (typically 512) - **A**: Decay factor tensor of shape `(L, D)` (float16, typically |A| < 0.5) - **B**: Input weight tensor of shape `(L, D)` (float16) - All inputs are `torch.float16` and on CUDA device - `chunk`: Chunk size for parallel processing (default 128) - `BD`: Block dimension for feature dimension tiling (default 128) - **Constraint**: L must be divisible by chunk Output Specifications -------------------- - Output tensor of shape `(L, D)` matching the input dimensions - Output dtype: `torch.float16` - Output device: Same as input (CUDA) Correctness Requirements ------------------------ - Numerical correctness verified against PyTorch baseline implementation - Relative tolerance: 1e-2, Absolute tolerance: 5e-3 - All test cases must pass for any score above 0 - Sequential dependency must be correctly maintained: y_t = a_t * y_{t-1} + b_t * x_t Scoring (0-100) --------------- Performance is measured against GPU baseline implementations: ``` geometric_mean_gpu_time = geometric_mean(gpu_baseline_times) geometric_mean_answer_time = geometric_mean(answer_times) # Linear interpolation: 0 points = 1x GPU baseline, 100 points = 200x GPU baseline target_time_0 = geometric_mean_gpu_time # 0 points (1x GPU baseline) 
target_time_100 = geometric_mean_gpu_time / 200.0 # 100 points (200x speedup over GPU) score = 100 * (target_time_0 - geometric_mean_answer_time) / (target_time_0 - target_time_100) ``` - 0 points = 1x GPU baseline performance - 100 points = 200x speedup over GPU baseline - Score is linearly interpolated between these two points Note: Correctness is verified against GPU baseline, and scoring spans from 1x GPU baseline (0 points) to 200x GPU baseline (100 points). Evaluation Details ------------------ - Test cases: L = 2048, 4096 (with D = 512) - Warmup phase: 10 iterations to stabilize GPU clocks and caches - Random seed: Fixed seed (0) for reproducible data generation - Strict correctness: Any test failure results in score of 0 - Chunk size: 128, BD: 128 Additional Notes ---------------- - The benchmark uses float32 for PyTorch baseline (for numerical stability) but float16 for answer evaluation - Sequential scan operation: y_t = a_t * y_{t-1} + b_t * x_t - Chunked parallelism: Process sequence in chunks, maintaining state between chunks - State propagation: State must be correctly propagated from one chunk to the next - Consider using block tiling along the feature dimension (BD) for parallelism
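A plain PyTorch sketch of the recurrence, useful as a correctness oracle for small shapes. It is deliberately sequential (one Python-level step per timestep), so it is orders of magnitude slower than a chunked Triton kernel; the point is only to pin down the semantics a chunked implementation must reproduce, including fp32 accumulation.

```python
import torch

def scan_reference(X: torch.Tensor, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # y_t = a_t * y_{t-1} + b_t * x_t, carried in float32, emitted as float16.
    L, D = X.shape
    y = torch.zeros(D, device=X.device, dtype=torch.float32)
    out = torch.empty(L, D, device=X.device, dtype=torch.float16)
    for t in range(L):
        y = A[t].float() * y + B[t].float() * X[t].float()
        out[t] = y.to(torch.float16)
    return out
```

One common way to chunk this: within each chunk, compute the local scan (assuming zero initial state) and the running product of decay factors, then add the incoming chunk state multiplied by that product; the final value of each chunk becomes the state for the next.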
tag: hpc dependencies: uv_project: resources runtime: environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true
mixed_gemm
research
Mixed GEMM Optimization Problem ================================= Problem Setting --------------- Design and optimize high-performance Triton kernels for Mixed GEMM (Linear + Bias + GELU) computation on GPU. This problem focuses on implementing efficient fused kernels that combine matrix multiplication, bias addition, and GELU activation using Triton's JIT compilation system. The challenge involves optimizing: - **Fused computation**: Efficiently combining linear layer (X @ W + B) with GELU activation - **Memory access patterns**: Efficient loading and storing of X, W, B tensors - **Mixed precision**: Handling float16 inputs/outputs with float32 bias and accumulation - **GELU activation**: Implementing efficient GELU computation using CUDA libdevice functions - **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes - **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix sizes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def linear_gelu(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor: """ Linear layer with GELU activation computation. Args: X: Input tensor of shape (M, K) - input features (float16) W: Weight tensor of shape (K, N) - weight matrix (float16) B: Bias tensor of shape (N,) - bias vector (float32) Returns: Output tensor of shape (M, N) - output with GELU activation (float16) """ # Your implementation pass ``` Input Specifications -------------------- - **X**: Input tensor of shape `(M, K)` where: - `M`: Batch size (tested with 512, 1024) - `K`: Input feature dimension (typically 4096) - dtype: `torch.float16` - **W**: Weight tensor of shape `(K, N)` where: - `N`: Output feature dimension (typically 4096) - dtype: `torch.float16` - **B**: Bias tensor of shape `(N,)` where: - dtype: `torch.float32` - All inputs are on CUDA device Output Specifications -------------------- - Output tensor of shape `(M, N)` matching the input batch and output feature dimensions - Output dtype: `torch.float16` - Output device: Same as input (CUDA) Correctness Requirements ------------------------ - Numerical correctness verified against PyTorch baseline implementation - Relative tolerance: 1e-2, Absolute tolerance: 5e-3 - All test cases must pass for any score above 0 - GELU activation must be correctly implemented Scoring (0-100) --------------- Performance is measured against GPU baseline implementations: ``` geometric_mean_gpu_time = geometric_mean(gpu_baseline_times) geometric_mean_answer_time = geometric_mean(answer_times) # Linear interpolation: 0 points = 1x GPU baseline, 100 points = 3x GPU baseline target_time_0 = geometric_mean_gpu_time # 0 points (1x GPU baseline) target_time_100 = geometric_mean_gpu_time / 3.0 # 100 points (3x speedup over GPU) score = 100 * (target_time_0 - geometric_mean_answer_time) / (target_time_0 - target_time_100) ``` - 0 points = 1x GPU baseline performance - 100 points = 3x speedup over GPU 
baseline - Score is linearly interpolated between these two points Note: Correctness is verified against GPU baseline, and scoring spans from 1x GPU baseline (0 points) to 3x GPU baseline (100 points). Evaluation Details ------------------ - Test cases: M = 512, 1024 (with N = 4096, K = 4096) - Warmup phase: 10 iterations to stabilize GPU clocks and caches - Random seed: Fixed seed (0) for reproducible data generation - Strict correctness: Any test failure results in score of 0 Additional Notes ---------------- - The benchmark uses float32 for PyTorch baseline (for numerical stability) but float16 for answer evaluation - GELU formula: gelu(x) = x * 0.5 * (1.0 + erf(x * 0.7071067811865476)) - Consider using CUDA libdevice erf function: `tl.extra.cuda.libdevice.erf` - Accumulation should use float32 for numerical stability - Bias addition should be done after matrix multiplication but before GELU
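A PyTorch reference for the fused op is handy when debugging a Triton kernel against the tolerances above. This sketch follows the spec's GELU formula and accumulates in float32; it is a correctness check, not a baseline to benchmark against.

```python
import torch

def linear_gelu_reference(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # fp32 accumulation; bias is added before GELU, output cast back to fp16.
    acc = X.float() @ W.float() + B.float()
    out = acc * 0.5 * (1.0 + torch.erf(acc * 0.7071067811865476))
    return out.to(torch.float16)
```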
dependencies: uv_project: resources tag: hpc runtime: environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true
nbody_simulation/random_100k
research
N-Body Simulation Problem - 100,000 Particles ============================================= Problem Setting --------------- Design and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardware is an AWS c7i.4xlarge. The challenge involves optimizing: - **Loop parallelization**: Efficient parallel force computation across particles - **Acceleration structures**: Use structures such as quad-tree for O(N log N) instead of O(N²), or other structures. - **Load balancing**: Handling varying workloads per particle - **Parallel Programming Libraries**: Proper use of libraries like OpenMP This variant tests performance on **100,000 particles** with 3 simulation iterations. Target ------ - **Primary**: Ensure numerical correctness (tolerance: 1e-2) - **Secondary**: Maximize speedup over parallel brute-force baseline (higher is better) - **Tertiary**: Use algorithmic improvements (quad-tree, spatial hashing) to beat O(N²) Solution Format --------------- Submit a single C++ file (`.cpp`) that implements a `Simulator` class: ```cpp #include "world.h" #include <omp.h> class MySimulator : public Simulator { private: // Persistent state across simulation steps int numThreads = 8; // Could store acceleration structures, pre-allocated buffers, etc. public: void init(int numParticles, StepParameters params) override { // Called once before simulation starts // Set thread count, pre-allocate structures, etc. omp_set_num_threads(numThreads); } void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) override { // Called each simulation step // For each particle i: // 1. Compute total force from particles within params.cullRadius // 2. Update particle using updateParticle() // 3. 
Store result in newParticles[i] } }; // Factory function - must be implemented Simulator* createSimulator() { return new MySimulator(); } ``` Provided Types and Functions (in world.h) ----------------------------------------- ```cpp struct Vec2 { float x, y; // Operators: +, -, *, length(), length2() }; struct Particle { int id; float mass; Vec2 position; Vec2 velocity; }; struct StepParameters { float deltaTime = 0.2f; float cullRadius = 1.0f; // Only consider particles within this distance }; // Simulator base class class Simulator { public: virtual ~Simulator() = default; virtual void init(int numParticles, StepParameters params) {} // Optional virtual void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) = 0; // Required }; // Compute gravitational force between two particles // Returns Vec2(0,0) if distance > cullRadius or distance < 1e-3 inline Vec2 computeForce(const Particle &target, const Particle &attractor, float cullRadius) { auto dir = (attractor.position - target.position); auto dist = dir.length(); if (dist < 1e-3f) return Vec2(0.0f, 0.0f); dir *= (1.0f / dist); if (dist > cullRadius) return Vec2(0.0f, 0.0f); if (dist < 1e-1f) dist = 1e-1f; const float G = 0.01f; Vec2 force = dir * target.mass * attractor.mass * (G / (dist * dist)); if (dist > cullRadius * 0.75f) { float decay = 1.0f - (dist - cullRadius * 0.75f) / (cullRadius * 0.25f); force *= decay; } return force; } // Apply force to particle and integrate position/velocity inline Particle updateParticle(const Particle &pi, Vec2 force, float deltaTime) { Particle result = pi; result.velocity += force * (deltaTime / pi.mass); result.position += result.velocity * deltaTime; return result; } ``` Baseline -------- The baseline is a simple OpenMP parallel brute-force O(N²) implementation: ```cpp // Baseline for N-body simulation - simple OpenMP parallel brute-force // O(N²) approach with parallel outer loop // Solutions should aim to beat this baseline #include "world.h" #include <omp.h> class BaselineSimulator : public Simulator { private: int numThreads = 8; public: void init(int numParticles, StepParameters params) override { omp_set_num_threads(numThreads); } void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) override { #pragma omp parallel for schedule(dynamic, 16) for (int i = 0; i < (int)particles.size(); i++) { auto pi = particles[i]; Vec2 force = Vec2(0.0f, 0.0f); for (size_t j = 0; j < particles.size(); j++) { if (j == (size_t)i) continue; if ((pi.position - particles[j].position).length() < params.cullRadius) { force += computeForce(pi, particles[j], params.cullRadius); } } newParticles[i] = updateParticle(pi, force, params.deltaTime); } } }; Simulator* createSimulator() { return new BaselineSimulator(); } ``` To beat the baseline, use algorithmic improvements like acceleration structures. Please generate a `.cpp` file that follows the solution's interface above, with the exact same signatures. 
The `Simulator` you write will be used in the following way: ```cpp double runSimulation(World& world, Simulator* sim, StepParameters params, int numIterations) { Timer timer; timer.reset(); // Initialize simulator at the start of each run (clean state) sim->init(world.particles.size(), params); for (int iter = 0; iter < numIterations; iter++) { world.newParticles.resize(world.particles.size()); sim->simulateStep(world.particles, world.newParticles, params); world.particles.swap(world.newParticles); } return timer.elapsed(); } ``` Compilation ----------- Your code is compiled with: ```bash g++ -O2 -fopenmp -std=c++17 -I. -o benchmark solution.cpp ``` Requirements: - Can use OpenMP for parallelization - Must implement a `Simulator` subclass and `createSimulator()` factory function - May define additional helper classes/functions as needed - Do NOT modify `computeForce` or `updateParticle` functions Correctness ----------- We will use the `BaselineSimulator` to get a reference particles positions and compare the solution you generated with the following code. We use a tolerance of `1e-2f`. If you fail the correctness check, you will get a score of zero. ```cpp bool checkForCorrectness(const World& refW, const World& w, float tolerance = 1e-2f) { if (w.particles.size() != refW.particles.size()) { std::cerr << "Mismatch: number of particles " << w.particles.size() << " does not match reference " << refW.particles.size() << std::endl; return false; } for (size_t i = 0; i < w.particles.size(); i++) { auto errorX = std::abs(w.particles[i].position.x - refW.particles[i].position.x); auto errorY = std::abs(w.particles[i].position.y - refW.particles[i].position.y); if (errorX > tolerance || errorY > tolerance) { std::cerr << "Mismatch at index " << i << ": result (" << w.particles[i].position.x << ", " << w.particles[i].position.y << ")" << " should be (" << refW.particles[i].position.x << ", " << refW.particles[i].position.y << ")" << std::endl; return false; } } return true; } ``` Scoring (0-100) --------------- Performance is measured by speedup over the parallel brute-force baseline: ``` speedup = baseline_time / solution_time raw_score = min(speedup, 10.0) # Cap at 10x speedup score = (raw_score - 1.0) / 9.0 * 100 # Map 1x-10x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - ~11 points = 2x speedup - ~33 points = 4x speedup - ~56 points = 6x speedup - 100 points = 10x+ speedup Note: With 100k particles, algorithmic improvements can yield massive speedups. The brute-force baseline is extremely slow, so good solutions should achieve high speedups. Evaluation Details ------------------ - Tested with 100,000 particles - 3 simulation iterations - Space size: 100.0, cullRadius: 25.0 - Performance measured as median of 3 runs - Correctness verified with tolerance: position error < 1e-2 - Fixed random seed for reproducibility
dependencies: uv_project: resources tag: hpc runtime: timeout_seconds: 600 environment: "C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs" resources: cloud: aws instance_type: c7i.4xlarge cpus: "16" memory: "32"
nbody_simulation/random_10k
research
N-Body Simulation Problem - 10,000 Particles ============================================= Problem Setting --------------- Design and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardware is an AWS c7i.4xlarge. The challenge involves optimizing: - **Loop parallelization**: Efficient parallel force computation across particles - **Acceleration structures**: Use structures such as quad-tree for O(N log N) instead of O(N²), or other structures. - **Load balancing**: Handling varying workloads per particle - **Parallel Programming Libraries**: Proper use of libraries like OpenMP This variant tests performance on **10,000 particles** with 5 simulation iterations. Target ------ - **Primary**: Ensure numerical correctness (tolerance: 1e-2) - **Secondary**: Maximize speedup over parallel brute-force baseline (higher is better) - **Tertiary**: Use algorithmic improvements (quad-tree, spatial hashing) to beat O(N²) Solution Format --------------- Submit a single C++ file (`.cpp`) that implements a `Simulator` class: ```cpp #include "world.h" #include <omp.h> class MySimulator : public Simulator { private: // Persistent state across simulation steps int numThreads = 8; // Could store acceleration structures, pre-allocated buffers, etc. public: void init(int numParticles, StepParameters params) override { // Called once before simulation starts // Set thread count, pre-allocate structures, etc. omp_set_num_threads(numThreads); } void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) override { // Called each simulation step // For each particle i: // 1. Compute total force from particles within params.cullRadius // 2. Update particle using updateParticle() // 3. 
Store result in newParticles[i] } }; // Factory function - must be implemented Simulator* createSimulator() { return new MySimulator(); } ``` Provided Types and Functions (in world.h) ----------------------------------------- ```cpp struct Vec2 { float x, y; // Operators: +, -, *, length(), length2() }; struct Particle { int id; float mass; Vec2 position; Vec2 velocity; }; struct StepParameters { float deltaTime = 0.2f; float cullRadius = 1.0f; // Only consider particles within this distance }; // Simulator base class class Simulator { public: virtual ~Simulator() = default; virtual void init(int numParticles, StepParameters params) {} // Optional virtual void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) = 0; // Required }; // Compute gravitational force between two particles // Returns Vec2(0,0) if distance > cullRadius or distance < 1e-3 inline Vec2 computeForce(const Particle &target, const Particle &attractor, float cullRadius) { auto dir = (attractor.position - target.position); auto dist = dir.length(); if (dist < 1e-3f) return Vec2(0.0f, 0.0f); dir *= (1.0f / dist); if (dist > cullRadius) return Vec2(0.0f, 0.0f); if (dist < 1e-1f) dist = 1e-1f; const float G = 0.01f; Vec2 force = dir * target.mass * attractor.mass * (G / (dist * dist)); if (dist > cullRadius * 0.75f) { float decay = 1.0f - (dist - cullRadius * 0.75f) / (cullRadius * 0.25f); force *= decay; } return force; } // Apply force to particle and integrate position/velocity inline Particle updateParticle(const Particle &pi, Vec2 force, float deltaTime) { Particle result = pi; result.velocity += force * (deltaTime / pi.mass); result.position += result.velocity * deltaTime; return result; } ``` Baseline -------- The baseline is a simple OpenMP parallel brute-force O(N²) implementation: ```cpp // Baseline for N-body simulation - simple OpenMP parallel brute-force // O(N²) approach with parallel outer loop // Solutions should aim to beat this baseline #include "world.h" #include <omp.h> class BaselineSimulator : public Simulator { private: int numThreads = 8; public: void init(int numParticles, StepParameters params) override { omp_set_num_threads(numThreads); } void simulateStep(std::vector<Particle> &particles, std::vector<Particle> &newParticles, StepParameters params) override { #pragma omp parallel for schedule(dynamic, 16) for (int i = 0; i < (int)particles.size(); i++) { auto pi = particles[i]; Vec2 force = Vec2(0.0f, 0.0f); for (size_t j = 0; j < particles.size(); j++) { if (j == (size_t)i) continue; if ((pi.position - particles[j].position).length() < params.cullRadius) { force += computeForce(pi, particles[j], params.cullRadius); } } newParticles[i] = updateParticle(pi, force, params.deltaTime); } } }; Simulator* createSimulator() { return new BaselineSimulator(); } ``` To beat the baseline, use algorithmic improvements like acceleration structures. Please generate a `.cpp` file that follows the solution's interface above, with the exact same signatures. 
The `Simulator` you write will be used in the following way: ```cpp double runSimulation(World& world, Simulator* sim, StepParameters params, int numIterations) { Timer timer; timer.reset(); // Initialize simulator at the start of each run (clean state) sim->init(world.particles.size(), params); for (int iter = 0; iter < numIterations; iter++) { world.newParticles.resize(world.particles.size()); sim->simulateStep(world.particles, world.newParticles, params); world.particles.swap(world.newParticles); } return timer.elapsed(); } ``` Compilation ----------- Your code is compiled with: ```bash g++ -O2 -fopenmp -std=c++17 -I. -o benchmark solution.cpp ``` Requirements: - Can use OpenMP for parallelization - Must implement a `Simulator` subclass and `createSimulator()` factory function - May define additional helper classes/functions as needed - Do NOT modify `computeForce` or `updateParticle` functions Correctness ----------- We will use the `BaselineSimulator` to get a reference particles positions and compare the solution you generated with the following code. We use a tolerance of `1e-2f`. If you fail the correctness check, you will get a score of zero. ```cpp bool checkForCorrectness(const World& refW, const World& w, float tolerance = 1e-2f) { if (w.particles.size() != refW.particles.size()) { std::cerr << "Mismatch: number of particles " << w.particles.size() << " does not match reference " << refW.particles.size() << std::endl; return false; } for (size_t i = 0; i < w.particles.size(); i++) { auto errorX = std::abs(w.particles[i].position.x - refW.particles[i].position.x); auto errorY = std::abs(w.particles[i].position.y - refW.particles[i].position.y); if (errorX > tolerance || errorY > tolerance) { std::cerr << "Mismatch at index " << i << ": result (" << w.particles[i].position.x << ", " << w.particles[i].position.y << ")" << " should be (" << refW.particles[i].position.x << ", " << refW.particles[i].position.y << ")" << std::endl; return false; } } return true; } ``` Scoring (0-100) --------------- Performance is measured by speedup over the parallel brute-force baseline: ``` speedup = baseline_time / solution_time raw_score = min(speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup - 100 points = 3x+ speedup Note: Since baseline is already parallelized, achieving speedup requires algorithmic improvements. Evaluation Details ------------------ - Tested with 10,000 particles - 5 simulation iterations - Space size: 100.0, cullRadius: 25.0 - Performance measured as median of 3 runs - Correctness verified with tolerance: position error < 1e-2 - Fixed random seed for reproducibility
dependencies: uv_project: resources tag: hpc runtime: timeout_seconds: 600 environment: "C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs" resources: cloud: aws instance_type: c7i.4xlarge cpus: "16" memory: "32"
poc_generation/heap_buffer_overflow
research
{"tag": "security"}
poc_generation/heap_use_after_free
research
{ "dependencies": { "uv_project": "resources" }, "datasets": [ "arvo:47101" ], "tag": "security" }
poc_generation/stack_buffer_overflow
research
{"tag": "security"}
poc_generation/uninitialized_value
research
{"tag": "security"}
qknorm
research
QKNorm Optimization Problem ============================ Problem Setting --------------- Design and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors. This is a **memory-bound** (even **launch-bound**) **tiny operator**. Performance optimization requires careful attention to: 1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization. 2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic. 3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies. Target ------ - **Primary**: Ensure correctness across diverse tensor shapes - **Secondary**: Maximize geometric mean speedup over baseline (higher is better) - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a qknorm implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import flashinfer def qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor): """ Apply RMSNorm to query and key tensors. Args: q: Query tensor of arbitrary shape (will be reshaped to 2D) k: Key tensor of arbitrary shape (will be reshaped to 2D) norm_weight: Normalization weight tensor of shape (hidden_dim,) Returns: Tuple of (q_normalized, k_normalized) tensors """ pass ``` Required Default Implementation: ```python def default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor): q_2d = q.contiguous().view(-1, q.shape[-1]) k_2d = k.contiguous().view(-1, k.shape[-1]) q_o = torch.empty_like(q_2d) k_o = torch.empty_like(k_2d) flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o) flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o) return q_o.view(q.shape), k_o.view(k.shape) ``` Baseline Implementation: ```python def customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor): q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype) k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype) flashinfer.norm.rmsnorm(q, norm_weight, out=q_o) flashinfer.norm.rmsnorm(k, norm_weight, out=k_o) return q_o, k_o ``` API Usage Notes --------------- - The evaluator looks for a `qknorm` function in the module namespace - Function must handle tensor reshaping correctly (q and k may have arbitrary shapes) - Must use flashinfer.norm.rmsnorm for normalization - Function returns a tuple of (q_normalized, k_normalized) tensors - **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion - **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies - **Focus**: Vectorized memory access and operation fusion to minimize kernel launches Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) if speedup < 0.5 or correctness is wrong: score = 0 elif speedup >= 0.5 and speedup < 1.0: score = 
50 elif speedup >= 1.0: score = 100 ``` - 0 points = Speedup < 0.5x OR correctness fails - 50 points = Speedup >= 0.5x and < 1.0x - 100 points = Speedup >= 1.0x Evaluation Details ------------------ - Shapes focus on diverse batch-sizes, head-dim, num-kv-heads, num-qo-heads, e.g.: - (16, 8, 32, 128) - (128, 32, 32, 64) - Correctness verified with tolerance: rtol=1e-2, atol=5e-3 - Performance measured using median execution time - Requires CUDA backend and GPU support
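To see why the inputs may be non-contiguous, consider a fused QKV projection: q and k are narrow slices of one wide activation, so their row stride spans the entire fused row. The sketch below (hypothetical shapes) reproduces that layout; calling `.contiguous()` on these views is exactly the extra copy the default implementation pays for and the baseline avoids.

```python
import torch

# Hypothetical fused-QKV layout: 16 tokens, 32 query heads, 8 KV heads, head_dim 128.
qkv = torch.randn(16, (32 + 8 + 8) * 128, device="cuda", dtype=torch.float16)
q, k, v = qkv.split([32 * 128, 8 * 128, 8 * 128], dim=-1)
q = q.view(16, 32, 128)   # row stride still spans the fused row -> non-contiguous
k = k.view(16, 8, 128)
print(q.is_contiguous(), k.is_contiguous())   # False False
```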
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "runtime": { "resources": { "accelerators": "L4:1" }, "docker": { "image": "andylizf/triton-tlx:tlx-nv-cu122-nvcc", "gpu": true }, "environment": "CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Triton 3.0+" }, "tag": "hpc" }
quant_dot_int4
research
Quantized Dot (Int4 Packed) Optimization Problem ================================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for a **quantized matrix multiplication** where the left-hand matrix is stored as **packed int4 weights** plus per-group scale/offset, and the right-hand matrix is fp16 activations. The challenge involves optimizing: - **Bit unpacking**: Efficiently unpacking int4 values from int32 lanes - **Dequantization fusion**: Fusing (unpack - offset) * scale directly into the dot product - **Memory access patterns**: Efficient access for packed weights / scales / activations - **Block tiling**: Choosing good block sizes for the small-K GEMM - **Performance benchmarking**: Achieving speedup over the baseline implementation Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix shapes - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def quant_dot(scale: torch.Tensor, offset_packed: torch.Tensor, weight_packed: torch.Tensor, activation: torch.Tensor) -> torch.Tensor: """ Args: scale: float16/float32 tensor of shape (M, K/8) offset_packed: int32 tensor of shape (M,) Each int32 packs 8 int4 offsets (one per 8-wide group). weight_packed: int32 tensor of shape (M, K/8) Each int32 packs 8 int4 weights. activation: float16 tensor of shape (K, N) Returns: Output tensor of shape (M, N), dtype float16 """ pass ``` Semantics (matches Triton-Puzzles "Quantized Matrix Mult"): ---------------------------------------------------------- - Constants: `FPINT = 8` (8 int4 values per int32), `GROUP = 8`, so `K = FPINT * GROUP = 64`. - Unpack int4 weights: `w_int4` has shape (M, K) from `weight_packed`. - Unpack int4 offsets per row: `o_int4` has shape (M, FPINT) from `offset_packed`, then expanded to (M, K) by repeating each offset across `GROUP` lanes. - Expand scale similarly from shape (M, FPINT) to (M, K). - Dequantized A: `A = scale * (w_int4 - o_int4)` (float16/float32). - Output: `Z = A @ activation` (accumulate in fp32 recommended), return fp16. API Usage Notes --------------- - The evaluator looks for a `quant_dot` function in the module namespace - Must use Triton JIT compilation for kernel definition - Scale/activation are CUDA tensors; packed tensors are int32 CUDA tensors - `K` is fixed to 64 in evaluation Scoring (0-100) --------------- Performance is measured against baseline implementations: ``` geometric_mean_speedup = geometric_mean(answer_times / baseline_times) raw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup score = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100 ``` - 0 points = No speedup (1x baseline performance) - 50 points = 2x speedup over baseline - 100 points = 3x+ speedup over baseline Evaluation Details ------------------ - K is fixed to 64. - Tested on multiple (M, N) shapes (see `resources/benchmark.py`). - Correctness verified with tolerance: rtol=1e-2, atol=5e-3. - Performance measured using median execution time via `triton.testing.do_bench`. 
- Requires CUDA backend and GPU support.
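A dense PyTorch reference can make the packing and dequantization semantics concrete before writing the Triton kernel. The sketch below is one reading of the unpacking order (lane j of group g lives in bits 4j..4j+3 of packed word g); the exact bit order should be checked against `resources/benchmark.py` rather than taken from this example.

```python
import torch

FPINT, GROUP = 8, 8
K = FPINT * GROUP  # 64

def quant_dot_reference(scale, offset_packed, weight_packed, activation):
    M = weight_packed.shape[0]
    shifts = 4 * torch.arange(FPINT, device=weight_packed.device, dtype=torch.int32)
    # (M, K/8, 8) nibbles -> (M, K) int4 weights
    w = ((weight_packed.unsqueeze(-1) >> shifts) & 0xF).reshape(M, K)
    # One int4 offset per 8-wide group, repeated across that group's lanes
    o = ((offset_packed.unsqueeze(-1) >> shifts) & 0xF).repeat_interleave(GROUP, dim=1)
    s = scale.repeat_interleave(GROUP, dim=1).to(torch.float32)
    a = s * (w - o).to(torch.float32)          # dequantized A, shape (M, K)
    return (a @ activation.to(torch.float32)).to(torch.float16)
```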
dependencies: uv_project: resources datasets: [] tag: hpc runtime: docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true
ragged_attention
research
Ragged Attention Optimization Problem ====================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for ragged attention computation on GPU. This problem focuses on implementing efficient kernels that handle variable-length sequences using ragged attention, where each query row can attend to a different number of key/value rows. The challenge involves optimizing: - **Ragged attention**: Efficiently handling variable-length sequences where each row has different attention lengths - **Memory access patterns**: Efficient loading and storing of Q, K, V tensors with ragged masking - **Streaming softmax**: Computing softmax in a streaming fashion for numerical stability - **Row-wise masking**: Correctly masking attention scores based on row_lens - **Mixed precision**: Handling float16 inputs/outputs with float32 accumulation - **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes - **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations Target ------ - **Primary**: Maximize geometric mean speedup over baseline (higher is better) - **Secondary**: Ensure correctness across diverse matrix sizes and ragged lengths - **Tertiary**: Minimize kernel launch overhead and memory usage API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> torch.Tensor: """ Ragged attention computation. Args: Q: Query tensor of shape (M, D) - query features (float16) K: Key tensor of shape (N, D) - key features (float16) V: Value tensor of shape (N, Dv) - value features (float16) row_lens: Row lengths tensor of shape (M,) - number of valid K/V rows per Q row (int32 or int64) Returns: Output tensor of shape (M, Dv) - attention output (float16) Semantics: For each query row i (0 <= i < M), compute attention over the first row_lens[i] key/value rows. Specifically: - scores[i, j] = (Q[i] @ K[j].T) * scale, for j < row_lens[i], else -inf - P[i] = softmax(scores[i]) - O[i] = P[i] @ V[:row_lens[i]] """ pass ``` Scoring ------- The scoring system evaluates your implementation based on geometric mean speedup over GPU baseline: - **0 points**: 1x GPU baseline (same speed as PyTorch GPU baseline) - **100 points**: 3x GPU baseline (3x speedup over PyTorch GPU baseline) - **Linear interpolation**: Scores between 0-100 are linearly interpolated based on speedup The evaluation uses the following test cases: - M (number of query rows): [512, 1024] - N (number of key/value rows): 1024 - D (model dimension): 64 - Dv (value dimension): 64 - row_lens: Random integers between [min_ratio*N, N] where min_ratio=0.25 Correctness is verified using: - Relative tolerance: 1e-2 - Absolute tolerance: 5e-3 All tests must pass for a non-zero score. If any test fails correctness, the score is 0. 
Example ------- ```python import torch import triton import triton.language as tl @triton.jit def _ragged_kernel(Q, K, V, O, ROW_LENS, ...): # Your kernel implementation pass def ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> torch.Tensor: # Your kernel launch logic pass ``` Constraints ----------- - All tensors must be CUDA tensors (float16 for Q, K, V; int32/int64 for row_lens) - Output must be float16 - The implementation must handle variable row lengths correctly - Accumulation should use float32 for numerical stability - Must use streaming softmax for numerical stability Tips ---- 1. Use efficient block tiling (BM, BN, BD, BDV) for optimal performance 2. Implement streaming softmax to handle large attention matrices 3. Correctly mask attention scores based on row_lens 4. Load row_lens once per program and broadcast for masking 5. Use proper masking for boundary conditions
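A dense PyTorch reference is useful as a local oracle before optimizing. The spec above leaves the scale implicit; the sketch assumes the usual 1/sqrt(D) scaling, which should be confirmed against the evaluator's baseline.

```python
import torch

def ragged_attn_reference(Q, K, V, row_lens):
    scale = 1.0 / (Q.shape[-1] ** 0.5)                        # assumed scale
    scores = (Q.float() @ K.float().T) * scale                # (M, N)
    valid = torch.arange(K.shape[0], device=Q.device)[None, :] < row_lens[:, None]
    scores = scores.masked_fill(~valid, float("-inf"))        # mask positions >= row_lens[i]
    return (torch.softmax(scores, dim=-1) @ V.float()).to(torch.float16)
```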
dependencies: uv_project: resources tag: hpc runtime: environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true resources: accelerators: L4:1
symbolic_regression/mccormick
research
Symbolic Regression Benchmark - McCormick Dataset ================================================= Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is derived from the McCormick function, a classic 2D optimization test function featuring a combination of trigonometric and polynomial terms. The function exhibits a smooth, wavy surface with a global minimum. Input Format ------------ - Your `Solution.solve` receives: - `X`: numpy.ndarray of shape `(n, 2)` containing feature values - `y`: numpy.ndarray of shape `(n,)` containing target values - Dataset columns: `x1, x2, y` Output Specification -------------------- Implement a `Solution` class in `solution.py`: ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: """ Args: X: Feature matrix of shape (n, 2) y: Target values of shape (n,) Returns: dict with keys: - "expression": str, a Python-evaluable expression using x1, x2 - "predictions": list/array of length n (optional) - "details": dict with optional "complexity" int """ # Example: fit a symbolic expression to the data expression = "x1 + x2" # placeholder return { "expression": expression, "predictions": None, # will be computed from expression if omitted "details": {} } ``` Expression Requirements: - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` - Numeric constants are allowed Dependencies (pinned versions) ------------------------------ ``` pysr==0.19.0 numpy==1.26.4 pandas==2.2.2 sympy==1.13.3 ``` Minimal Working Examples ------------------------ **Example 1: Using PySR (recommended)** ```python import numpy as np from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=40, binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos", "exp", "log"], populations=15, population_size=33, maxsize=25, verbosity=0, progress=False, random_state=42, ) model.fit(X, y, variable_names=["x1", "x2"]) # Get best expression as sympy, convert to string best_expr = model.sympy() expression = str(best_expr) # Predictions predictions = model.predict(X) return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 2: Manual expression (simple baseline)** ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: # Simple linear combination as baseline x1, x2 = X[:, 0], X[:, 1] # Fit coefficients via least squares A = np.column_stack([x1, x2, np.ones_like(x1)]) coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None) a, b, c = coeffs expression = f"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}" predictions = a * x1 + b * x2 + c return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 3: Using sympy for expression manipulation** ```python import numpy as np import sympy as sp from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=30, binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos"], verbosity=0, progress=False, ) model.fit(X, y, variable_names=["x1", "x2"]) # Get sympy expression and simplify sympy_expr = model.sympy() simplified = 
sp.simplify(sympy_expr) # Convert to evaluable string expression = str(simplified) return { "expression": expression, "predictions": None, # evaluator will compute from expression "details": {} } ``` PySR API Notes (v0.19.0) ------------------------ - `model.fit(X, y, variable_names=["x1", "x2"])` - use variable_names to match expected output - `model.sympy()` - returns best expression as sympy object - `model.predict(X)` - returns predictions array - `model.equations_` - DataFrame of all discovered equations - Common parameters: - `niterations`: number of evolution iterations (more = better but slower) - `populations`: number of parallel populations - `maxsize`: maximum expression complexity - `verbosity=0, progress=False`: suppress output Expression Format Requirements ------------------------------ - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix) - Numeric constants are allowed - The evaluator uses `sympy.sympify()` to parse your expression Scoring ------- ``` MSE = (1/n) Σ (y_i - ŷ_i)² Score = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0) ``` - `m_base`: linear regression baseline MSE - `m_ref`, `C_ref`: reference solution MSE and complexity - `C = 2 × (#binary ops) + (#unary ops)` - Lower MSE and lower complexity yield higher scores Environment ----------- Run `set_up_env.sh` to install dependencies.
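Since the dataset is described as derived from the McCormick function, one reasonable warm start is to refit its classic functional form, sin(x1 + x2) + (x1 - x2)**2 - 1.5*x1 + 2.5*x2 + 1, by least squares over those basis terms. Treat the constants as free coefficients rather than assuming they carry over, and fall back to PySR if the residual is large.

```python
import numpy as np

def mccormick_basis_fit(X: np.ndarray, y: np.ndarray):
    # Candidate basis from the classic McCormick form; coefficients are refit,
    # not assumed, since the dataset is only "derived from" that function.
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.sin(x1 + x2), (x1 - x2) ** 2, x1, x2, np.ones_like(x1)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    expr = (f"{c[0]:.6f}*sin(x1 + x2) + {c[1]:.6f}*(x1 - x2)**2 + "
            f"{c[2]:.6f}*x1 + {c[3]:.6f}*x2 + {c[4]:.6f}")
    return expr, A @ c
```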
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/mixed_polyexp_4d
research
Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset ========================================================= Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`. This is a higher-dimensional dataset (4 input features) combining polynomial interactions with exponential decay. The function involves cross-terms between variables and Gaussian-like damping, making it more challenging than the 2D variants. Input Format ------------ - Your `Solution.solve` receives: - `X`: numpy.ndarray of shape `(n, 4)` containing feature values - `y`: numpy.ndarray of shape `(n,)` containing target values - Dataset columns: `x1, x2, x3, x4, y` Output Specification -------------------- Implement a `Solution` class in `solution.py`: ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: """ Args: X: Feature matrix of shape (n, 4) y: Target values of shape (n,) Returns: dict with keys: - "expression": str, a Python-evaluable expression using x1, x2, x3, x4 - "predictions": list/array of length n (optional) - "details": dict with optional "complexity" int """ # Example: fit a symbolic expression to the data expression = "x1 + x2 + x3 + x4" # placeholder return { "expression": expression, "predictions": None, # will be computed from expression if omitted "details": {} } ``` Expression Requirements: - Must be a valid Python expression string - Use variable names: `x1`, `x2`, `x3`, `x4` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` - Numeric constants are allowed Dependencies (pinned versions) ------------------------------ ``` pysr==0.19.0 numpy==1.26.4 pandas==2.2.2 sympy==1.13.3 ``` Minimal Working Examples ------------------------ **Example 1: Using PySR (recommended)** ```python import numpy as np from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=50, # more iterations for 4D binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos", "exp", "log"], populations=20, population_size=40, maxsize=30, # larger for 4D complexity verbosity=0, progress=False, random_state=42, ) model.fit(X, y, variable_names=["x1", "x2", "x3", "x4"]) # Get best expression as sympy, convert to string best_expr = model.sympy() expression = str(best_expr) # Predictions predictions = model.predict(X) return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 2: Manual expression (simple baseline)** ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: # Simple linear combination as baseline x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3] # Fit coefficients via least squares A = np.column_stack([x1, x2, x3, x4, np.ones_like(x1)]) coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None) a, b, c, d, e = coeffs expression = f"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}*x3 + {d:.6f}*x4 + {e:.6f}" predictions = a * x1 + b * x2 + c * x3 + d * x4 + e return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` PySR API Notes (v0.19.0) ------------------------ - `model.fit(X, y, variable_names=["x1", "x2", "x3", "x4"])` - use variable_names to match expected output - `model.sympy()` - returns best expression as sympy object - `model.predict(X)` - returns predictions array - 
`model.equations_` - DataFrame of all discovered equations - Common parameters: - `niterations`: number of evolution iterations (more = better but slower) - `populations`: number of parallel populations - `maxsize`: maximum expression complexity - `verbosity=0, progress=False`: suppress output Expression Format Requirements ------------------------------ - Must be a valid Python expression string - Use variable names: `x1`, `x2`, `x3`, `x4` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix) - Numeric constants are allowed - The evaluator uses `sympy.sympify()` to parse your expression Scoring ------- ``` MSE = (1/n) Σ (y_i - ŷ_i)² Score = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0) ``` - `m_base`: linear regression baseline MSE - `m_ref`, `C_ref`: reference solution MSE and complexity - `C = 2 × (#binary ops) + (#unary ops)` - Lower MSE and lower complexity yield higher scores Environment ----------- Run `set_up_env.sh` to install dependencies.
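Because the score includes the complexity penalty C = 2 × (#binary ops) + (#unary ops), it can help to estimate C locally when choosing which discovered equation to return. The helper below is one plausible reading of that formula based on sympy's expression tree; the evaluator's exact counting rules are not spelled out here, so treat the numbers as a guide only.

```python
import sympy as sp

def estimate_complexity(expression: str) -> int:
    """Rough local estimate of C = 2*(#binary ops) + (#unary ops).

    Treats an n-ary Add/Mul node as (n - 1) binary operations, each Pow as one
    binary operation, and each sin/cos/exp/log call as one unary operation.
    This mirrors the scoring formula above but may not match the evaluator's
    exact counting.
    """
    expr = sp.sympify(expression)
    binary = unary = 0
    for node in sp.preorder_traversal(expr):
        if isinstance(node, (sp.Add, sp.Mul)):
            binary += len(node.args) - 1
        elif isinstance(node, sp.Pow):
            binary += 1
        elif isinstance(node, (sp.sin, sp.cos, sp.exp, sp.log)):
            unary += 1
    return 2 * binary + unary

# estimate_complexity("x1*x2 + sin(x1 + x2)") -> 7 with this counting
```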
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/peaks
research
Symbolic Regression Benchmark - Peaks Dataset ============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is based on a peaks-like function, characterized by exponential terms that create localized peaks and valleys across the 2D input space. The underlying function involves interactions between polynomial and exponential components. Input Format ------------ - Your `Solution.solve` receives: - `X`: numpy.ndarray of shape `(n, 2)` containing feature values - `y`: numpy.ndarray of shape `(n,)` containing target values - Dataset columns: `x1, x2, y` Output Specification -------------------- Implement a `Solution` class in `solution.py`: ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: """ Args: X: Feature matrix of shape (n, 2) y: Target values of shape (n,) Returns: dict with keys: - "expression": str, a Python-evaluable expression using x1, x2 - "predictions": list/array of length n (optional) - "details": dict with optional "complexity" int """ # Example: fit a symbolic expression to the data expression = "x1 + x2" # placeholder return { "expression": expression, "predictions": None, # will be computed from expression if omitted "details": {} } ``` Expression Requirements: - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` - Numeric constants are allowed Dependencies (pinned versions) ------------------------------ ``` pysr==0.19.0 numpy==1.26.4 pandas==2.2.2 sympy==1.13.3 ``` Minimal Working Examples ------------------------ **Example 1: Using PySR (recommended)** ```python import numpy as np from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=40, binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos", "exp", "log"], populations=15, population_size=33, maxsize=25, verbosity=0, progress=False, random_state=42, ) model.fit(X, y, variable_names=["x1", "x2"]) # Get best expression as sympy, convert to string best_expr = model.sympy() expression = str(best_expr) # Predictions predictions = model.predict(X) return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 2: Manual expression (simple baseline)** ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: # Simple linear combination as baseline x1, x2 = X[:, 0], X[:, 1] # Fit coefficients via least squares A = np.column_stack([x1, x2, np.ones_like(x1)]) coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None) a, b, c = coeffs expression = f"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}" predictions = a * x1 + b * x2 + c return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` PySR API Notes (v0.19.0) ------------------------ - `model.fit(X, y, variable_names=["x1", "x2"])` - use variable_names to match expected output - `model.sympy()` - returns best expression as sympy object - `model.predict(X)` - returns predictions array - `model.equations_` - DataFrame of all discovered equations - Common parameters: - `niterations`: number of evolution iterations (more = better but slower) - `populations`: number of parallel populations - `maxsize`: maximum expression 
complexity - `verbosity=0, progress=False`: suppress output Expression Format Requirements ------------------------------ - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix) - Numeric constants are allowed - The evaluator uses `sympy.sympify()` to parse your expression Scoring ------- ``` MSE = (1/n) Σ (y_i - ŷ_i)² Score = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0) ``` - `m_base`: linear regression baseline MSE - `m_ref`, `C_ref`: reference solution MSE and complexity - `C = 2 × (#binary ops) + (#unary ops)` - Lower MSE and lower complexity yield higher scores Environment ----------- Run `set_up_env.sh` to install dependencies.
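Since the evaluator parses the returned string with `sympy.sympify`, it is worth sanity-checking a candidate expression against the data before submitting. The helper below does a quick numpy-based evaluation; it assumes the expression uses only the allowed names (`x1`, `x2`, `sin`, `cos`, `exp`, `log`) and is only an approximation of the official evaluation path.

```python
import numpy as np

def local_mse(expression: str, X: np.ndarray, y: np.ndarray) -> float:
    """Evaluate a candidate expression string on the data and return its MSE.

    Local sanity check only; the official evaluator parses the expression with
    sympy.sympify rather than Python eval.
    """
    names = {
        "x1": X[:, 0], "x2": X[:, 1],
        "sin": np.sin, "cos": np.cos, "exp": np.exp, "log": np.log,
    }
    pred = eval(expression, {"__builtins__": {}}, names)  # trusted, self-generated string
    pred = np.broadcast_to(np.asarray(pred, dtype=float), y.shape)
    return float(np.mean((y - pred) ** 2))

# Example: local_mse("exp(-(x1**2 + x2**2)) * x1", X, y)
```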
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/ripple
research
Symbolic Regression Benchmark - Ripple Dataset =============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset is generated from a ripple-like function that combines polynomial amplitude modulation with high-frequency trigonometric oscillations. The function creates concentric wave patterns with varying intensity across the domain. Input Format ------------ - Your `Solution.solve` receives: - `X`: numpy.ndarray of shape `(n, 2)` containing feature values - `y`: numpy.ndarray of shape `(n,)` containing target values - Dataset columns: `x1, x2, y` Output Specification -------------------- Implement a `Solution` class in `solution.py`: ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: """ Args: X: Feature matrix of shape (n, 2) y: Target values of shape (n,) Returns: dict with keys: - "expression": str, a Python-evaluable expression using x1, x2 - "predictions": list/array of length n (optional) - "details": dict with optional "complexity" int """ # Example: fit a symbolic expression to the data expression = "x1 + x2" # placeholder return { "expression": expression, "predictions": None, # will be computed from expression if omitted "details": {} } ``` Expression Requirements: - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` - Numeric constants are allowed Dependencies (pinned versions) ------------------------------ ``` pysr==0.19.0 numpy==1.26.4 pandas==2.2.2 sympy==1.13.3 ``` Minimal Working Examples ------------------------ **Example 1: Using PySR (recommended)** ```python import numpy as np from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=40, binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos", "exp", "log"], populations=15, population_size=33, maxsize=25, verbosity=0, progress=False, random_state=42, ) model.fit(X, y, variable_names=["x1", "x2"]) # Get best expression as sympy, convert to string best_expr = model.sympy() expression = str(best_expr) # Predictions predictions = model.predict(X) return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 2: Manual expression (simple baseline)** ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: # Simple linear combination as baseline x1, x2 = X[:, 0], X[:, 1] # Fit coefficients via least squares A = np.column_stack([x1, x2, np.ones_like(x1)]) coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None) a, b, c = coeffs expression = f"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}" predictions = a * x1 + b * x2 + c return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` PySR API Notes (v0.19.0) ------------------------ - `model.fit(X, y, variable_names=["x1", "x2"])` - use variable_names to match expected output - `model.sympy()` - returns best expression as sympy object - `model.predict(X)` - returns predictions array - `model.equations_` - DataFrame of all discovered equations - Common parameters: - `niterations`: number of evolution iterations (more = better but slower) - `populations`: number of parallel populations - `maxsize`: maximum expression 
complexity - `verbosity=0, progress=False`: suppress output Expression Format Requirements ------------------------------ - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix) - Numeric constants are allowed - The evaluator uses `sympy.sympify()` to parse your expression Scoring ------- ``` MSE = (1/n) Σ (y_i - ŷ_i)² Score = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0) ``` - `m_base`: linear regression baseline MSE - `m_ref`, `C_ref`: reference solution MSE and complexity - `C = 2 × (#binary ops) + (#unary ops)` - Lower MSE and lower complexity yield higher scores Environment ----------- Run `set_up_env.sh` to install dependencies.
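The description of concentric wave patterns hints that the target may depend on the radius term `x1**2 + x2**2`. One hedged way to exploit this (a guess about structure, not something the statement guarantees) is to hand PySR an engineered feature `r2` and substitute it back afterwards so the returned expression still uses only `x1` and `x2`:

```python
import numpy as np
import sympy as sp
from pysr import PySRRegressor

class Solution:
    def __init__(self, **kwargs):
        pass

    def solve(self, X: np.ndarray, y: np.ndarray) -> dict:
        # Engineered radial feature; purely an assumption about the data.
        r2 = X[:, 0] ** 2 + X[:, 1] ** 2
        X_aug = np.column_stack([X, r2])

        model = PySRRegressor(
            niterations=40,
            binary_operators=["+", "-", "*", "/"],
            unary_operators=["sin", "cos"],
            maxsize=25,
            verbosity=0,
            progress=False,
            random_state=42,
        )
        model.fit(X_aug, y, variable_names=["x1", "x2", "r2"])

        # Substitute r2 back so the final expression only references x1 and x2.
        x1, x2, r2_sym = sp.symbols("x1 x2 r2")
        expr = model.sympy().subs(r2_sym, x1 ** 2 + x2 ** 2)
        return {"expression": str(expr), "predictions": None, "details": {}}
```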
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
symbolic_regression/sincos
research
Symbolic Regression Benchmark - SinCos Dataset =============================================== Problem Setting --------------- Learn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`. This dataset features a function built from basic trigonometric operations. The target exhibits periodic behavior in both input dimensions, representing a straightforward but fundamental pattern for symbolic regression. Input Format ------------ - Your `Solution.solve` receives: - `X`: numpy.ndarray of shape `(n, 2)` containing feature values - `y`: numpy.ndarray of shape `(n,)` containing target values - Dataset columns: `x1, x2, y` Output Specification -------------------- Implement a `Solution` class in `solution.py`: ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: """ Args: X: Feature matrix of shape (n, 2) y: Target values of shape (n,) Returns: dict with keys: - "expression": str, a Python-evaluable expression using x1, x2 - "predictions": list/array of length n (optional) - "details": dict with optional "complexity" int """ # Example: fit a symbolic expression to the data expression = "x1 + x2" # placeholder return { "expression": expression, "predictions": None, # will be computed from expression if omitted "details": {} } ``` Expression Requirements: - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` - Numeric constants are allowed Dependencies (pinned versions) ------------------------------ ``` pysr==0.19.0 numpy==1.26.4 pandas==2.2.2 sympy==1.13.3 ``` Minimal Working Examples ------------------------ **Example 1: Using PySR (recommended)** ```python import numpy as np from pysr import PySRRegressor class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: model = PySRRegressor( niterations=40, binary_operators=["+", "-", "*", "/"], unary_operators=["sin", "cos", "exp", "log"], populations=15, population_size=33, maxsize=25, verbosity=0, progress=False, random_state=42, ) model.fit(X, y, variable_names=["x1", "x2"]) # Get best expression as sympy, convert to string best_expr = model.sympy() expression = str(best_expr) # Predictions predictions = model.predict(X) return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` **Example 2: Manual expression (simple baseline)** ```python import numpy as np class Solution: def __init__(self, **kwargs): pass def solve(self, X: np.ndarray, y: np.ndarray) -> dict: # Simple linear combination as baseline x1, x2 = X[:, 0], X[:, 1] # Fit coefficients via least squares A = np.column_stack([x1, x2, np.ones_like(x1)]) coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None) a, b, c = coeffs expression = f"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}" predictions = a * x1 + b * x2 + c return { "expression": expression, "predictions": predictions.tolist(), "details": {} } ``` PySR API Notes (v0.19.0) ------------------------ - `model.fit(X, y, variable_names=["x1", "x2"])` - use variable_names to match expected output - `model.sympy()` - returns best expression as sympy object - `model.predict(X)` - returns predictions array - `model.equations_` - DataFrame of all discovered equations - Common parameters: - `niterations`: number of evolution iterations (more = better but slower) - `populations`: number of parallel populations - `maxsize`: maximum expression complexity - 
`verbosity=0, progress=False`: suppress output Expression Format Requirements ------------------------------ - Must be a valid Python expression string - Use variable names: `x1`, `x2` - Allowed operators: `+`, `-`, `*`, `/`, `**` - Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix) - Numeric constants are allowed - The evaluator uses `sympy.sympify()` to parse your expression Scoring ------- ``` MSE = (1/n) Σ (y_i - ŷ_i)² Score = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0) ``` - `m_base`: linear regression baseline MSE - `m_ref`, `C_ref`: reference solution MSE and complexity - `C = 2 × (#binary ops) + (#unary ops)` - Lower MSE and lower complexity yield higher scores Environment ----------- Run `set_up_env.sh` to install dependencies.
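Given the description of simple periodic behaviour in each input, a hedged first attempt is a least-squares fit over a small trigonometric basis. The basis below is a guess at the underlying form; if it fits poorly, fall back to the PySR search in Example 1, which can also discover cross terms such as `sin(x1)*cos(x2)`.

```python
import numpy as np

class Solution:
    def __init__(self, **kwargs):
        pass

    def solve(self, X: np.ndarray, y: np.ndarray) -> dict:
        x1, x2 = X[:, 0], X[:, 1]
        # Small trig basis (assumed form); check the fit before trusting it.
        A = np.column_stack([np.sin(x1), np.cos(x1), np.sin(x2), np.cos(x2), np.ones_like(x1)])
        c, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
        expression = (
            f"{c[0]:.6f}*sin(x1) + {c[1]:.6f}*cos(x1) "
            f"+ {c[2]:.6f}*sin(x2) + {c[3]:.6f}*cos(x2) + {c[4]:.6f}"
        )
        return {"expression": expression, "predictions": (A @ c).tolist(), "details": {}}
```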
{ "dependencies": { "uv_project": "resources" }, "datasets": [], "tag": "pl" }
vdb_pareto/balanced
research
VDB Design Problem - Balanced Tier =================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1. **Optimization Goal**: Maximize recall@1 within latency constraint $$ \text{score} = \begin{cases} 0 & \text{if } t_{\text{query}} > t_{\text{max}} \\ 100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\ 100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}} \end{cases} $$ Where: - $r$: Your recall@1 - $t_{\text{query}}$: Your average query latency (ms) - $r_{\text{baseline}} = 0.9914$ (baseline recall) - $r_{\text{min}} = 0.6939$ (minimum acceptable recall, 70% of baseline) - $t_{\text{max}} = 5.775\text{ms}$ (maximum allowed latency, 150% of baseline 3.85ms) **Key Insight**: Latency is a hard constraint. Only recall determines your score within the constraint. Baseline Performance -------------------- - Recall@1: **0.9914** (99.14%) - Avg query time: **3.85ms** - Baseline score: **100** (recall equals baseline within latency constraint) Scoring Examples ---------------- Assuming all solutions meet latency constraint ($t \leq 5.775\text{ms}$): | Recall@1 | Latency | Score Calculation | Score | |----------|---------|-------------------|-------| | 0.9914 | 3.85ms | $r = r_{\text{baseline}}$ → max score | **100** | | 0.9950 | 3.00ms | $r > r_{\text{baseline}}$ → max score | **100** | | 0.9500 | 2.50ms | $\frac{0.95 - 0.6939}{0.9914 - 0.6939} = 0.860$ | **86.0** | | 0.8500 | 4.00ms | $\frac{0.85 - 0.6939}{0.9914 - 0.6939} = 0.524$ | **52.4** | | 0.6939 | 5.00ms | $r = r_{\text{min}}$ → minimum score | **0** | | 0.9900 | **6.00ms** | $t > t_{\text{max}}$ → latency gate fails | **0** | **Note**: Faster latency does NOT increase score - only recall matters if constraint is met. API Specification ----------------- Implement a class with the following interface: ```python import numpy as np from typing import Tuple class YourIndexClass: def __init__(self, dim: int, **kwargs): """ Initialize the index for vectors of dimension `dim`. Args: dim: Vector dimensionality (e.g., 128 for SIFT1M) **kwargs: Optional parameters (e.g., M, ef_construction for HNSW) Example: index = YourIndexClass(dim=128, M=16, ef_search=64) """ pass def add(self, xb: np.ndarray) -> None: """ Add vectors to the index. Args: xb: Base vectors, shape (N, dim), dtype float32 Notes: - Can be called multiple times (cumulative) - Must handle large N (e.g., 1,000,000 vectors) Example: index.add(xb) # xb.shape = (1000000, 128) """ pass def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]: """ Search for k nearest neighbors of query vectors. 
Args: xq: Query vectors, shape (nq, dim), dtype float32 k: Number of nearest neighbors to return Returns: (distances, indices): - distances: shape (nq, k), dtype float32, L2 distances - indices: shape (nq, k), dtype int64, indices into base vectors Notes: - Must return exactly k neighbors per query - Indices should refer to positions in the vectors passed to add() - Lower distance = more similar Example: D, I = index.search(xq, k=1) # xq.shape = (10000, 128) # D.shape = (10000, 1), I.shape = (10000, 1) """ pass ``` **Implementation Requirements**: - Class can have any name (evaluator auto-discovers classes with `add` and `search` methods) - Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions - Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)` - Distances should be L2 (Euclidean) or L2-squared - No need to handle dataset loading - evaluator provides numpy arrays Evaluation Process ------------------ The evaluator follows these steps: ### 1. Load Dataset ```python from faiss.contrib.datasets import DatasetSIFT1M ds = DatasetSIFT1M() xb = ds.get_database() # (1000000, 128) float32 xq = ds.get_queries() # (10000, 128) float32 gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices ``` ### 2. Build Index ```python from solution import YourIndexClass # Auto-discovered d = xb.shape[1] # 128 for SIFT1M index = YourIndexClass(d) # Pass dimension as first argument index.add(xb) # Add all 1M base vectors ``` ### 3. Measure Performance (Batch Queries) ```python import time t0 = time.time() D, I = index.search(xq, k=1) # Search all 10K queries at once t1 = time.time() # Calculate metrics recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq) avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq) ``` **Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache and vectorization, typically faster than single queries. ### 4. Calculate Score ```python if avg_query_time_ms > 5.775: score = 0.0 elif recall_at_1 >= 0.9914: score = 100.0 else: recall_range = 0.9914 - 0.6939 recall_proportion = (recall_at_1 - 0.6939) / recall_range score = max(0.0, min(100.0, 100.0 * recall_proportion)) ``` Dataset Details --------------- - **Name**: SIFT1M - **Base vectors**: 1,000,000 vectors of dimension 128 - **Query vectors**: 10,000 vectors - **Ground truth**: Precomputed nearest neighbors (k=1) - **Metric**: L2 (Euclidean distance) - **Vector type**: float32 Runtime Platform ---------------- - **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure) - **Compute**: CPU-only instances (no GPU required) - **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4 Constraints ----------- - **Timeout**: 1 hour for entire evaluation (index construction + queries) - **Memory**: Use reasonable memory (index should fit in RAM) - **Latency constraint**: avg_query_time_ms ≤ 5.775ms - **Recall range**: 0.6939 ≤ recall@1 ≤ 1.0 Strategy Tips ------------- 1. **Focus on recall**: Latency only needs to meet threshold, doesn't improve score beyond that 2. **Batch optimization is key**: Your `search` should handle batch queries efficiently 3. **Parameter tuning**: Small changes (e.g., HNSW's M, ef_search) significantly affect recall 4. 
**Don't over-optimize latency**: Meeting 5.775ms is enough; focus energy on recall Example: Simple Baseline ------------------------- ```python import numpy as np class SimpleIndex: def __init__(self, dim: int, **kwargs): self.dim = dim self.xb = None def add(self, xb: np.ndarray) -> None: if self.xb is None: self.xb = xb.copy() else: self.xb = np.vstack([self.xb, xb]) def search(self, xq: np.ndarray, k: int) -> tuple: # Compute all pairwise L2 distances # xq: (nq, dim), xb: (N, dim) # distances: (nq, N) distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2)) # Get k nearest neighbors indices = np.argpartition(distances, k-1, axis=1)[:, :k] sorted_indices = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1) final_indices = indices[np.arange(len(xq))[:, None], sorted_indices] final_distances = distances[np.arange(len(xq))[:, None], final_indices] return final_distances, final_indices ``` **Note**: This baseline achieves perfect recall (100%) but is too slow for large datasets. Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs. Debugging Tips -------------- - **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration - **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays - **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)` - **Profile latency**: Measure batch vs single query performance separately - **Validate before submit**: Run full 1M dataset locally if possible
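As a concrete starting point for the strategy tips above, the sketch below wraps a FAISS HNSW index in the required add/search interface. The `M`, `efConstruction`, and `efSearch` values are illustrative guesses, not settings verified against the 5.775 ms gate on the evaluation hardware, so they need local tuning.

```python
import numpy as np
import faiss

class HNSWIndex:
    """Minimal FAISS HNSW wrapper matching the add/search interface above."""

    def __init__(self, dim: int, M: int = 32, ef_construction: int = 200, ef_search: int = 96):
        self.index = faiss.IndexHNSWFlat(dim, M)          # L2 metric by default
        self.index.hnsw.efConstruction = ef_construction  # build-time graph quality
        self.index.hnsw.efSearch = ef_search              # query-time recall/latency knob

    def add(self, xb: np.ndarray) -> None:
        self.index.add(np.ascontiguousarray(xb, dtype=np.float32))

    def search(self, xq: np.ndarray, k: int):
        D, I = self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
        return D, I.astype(np.int64)
```

Raising `efSearch` is the main recall lever in this tier; increase it until the measured batch latency approaches the budget.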
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/high_recall
research
VDB Design Problem - High Recall Tier ====================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **relaxed latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1. **Optimization Goal**: Maximize recall@1 within latency constraint $$ \text{score} = \begin{cases} 0 & \text{if } t_{\text{query}} > t_{\text{max}} \\ 100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\ 100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}} \end{cases} $$ Where: - $r$: Your recall@1 - $t_{\text{query}}$: Your average query latency (ms) - $r_{\text{baseline}} = 0.9914$ (baseline recall) - $r_{\text{min}} = 0.9409$ (minimum acceptable recall, 95% of baseline) - $t_{\text{max}} = 7.7\text{ms}$ (maximum allowed latency, 200% of baseline 3.85ms) **Key Insight**: This tier provides 2× latency budget compared to balanced tier, allowing more thorough search for higher recall. Baseline Performance -------------------- - Recall@1: **0.9914** (99.14%) - Avg query time: **3.85ms** - Baseline score: **100** (recall equals baseline within latency constraint) Scoring Examples ---------------- Assuming all solutions meet latency constraint ($t \leq 7.7\text{ms}$): | Recall@1 | Latency | Score Calculation | Score | |----------|---------|-------------------|-------| | 0.9914 | 3.85ms | $r = r_{\text{baseline}}$ → max score | **100** | | 0.9950 | 5.00ms | $r > r_{\text{baseline}}$ → max score | **100** | | 0.9700 | 6.00ms | $\frac{0.97 - 0.9409}{0.9914 - 0.9409} = 0.576$ | **57.6** | | 0.9500 | 4.00ms | $\frac{0.95 - 0.9409}{0.9914 - 0.9409} = 0.180$ | **18.0** | | 0.9409 | 7.00ms | $r = r_{\text{min}}$ → minimum score | **0** | | 0.9914 | **8.00ms** | $t > t_{\text{max}}$ → latency gate fails | **0** | **Note**: The relaxed latency constraint (7.7ms vs 5.775ms in balanced) allows more aggressive search strategies for higher recall. API Specification ----------------- Implement a class with the following interface: ```python import numpy as np from typing import Tuple class YourIndexClass: def __init__(self, dim: int, **kwargs): """ Initialize the index for vectors of dimension `dim`. Args: dim: Vector dimensionality (e.g., 128 for SIFT1M) **kwargs: Optional parameters (e.g., M, ef_construction for HNSW) Example: index = YourIndexClass(dim=128, M=64, ef_search=800) """ pass def add(self, xb: np.ndarray) -> None: """ Add vectors to the index. Args: xb: Base vectors, shape (N, dim), dtype float32 Notes: - Can be called multiple times (cumulative) - Must handle large N (e.g., 1,000,000 vectors) Example: index.add(xb) # xb.shape = (1000000, 128) """ pass def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]: """ Search for k nearest neighbors of query vectors. 
Args: xq: Query vectors, shape (nq, dim), dtype float32 k: Number of nearest neighbors to return Returns: (distances, indices): - distances: shape (nq, k), dtype float32, L2 distances - indices: shape (nq, k), dtype int64, indices into base vectors Notes: - Must return exactly k neighbors per query - Indices should refer to positions in the vectors passed to add() - Lower distance = more similar Example: D, I = index.search(xq, k=1) # xq.shape = (10000, 128) # D.shape = (10000, 1), I.shape = (10000, 1) """ pass ``` **Implementation Requirements**: - Class can have any name (evaluator auto-discovers classes with `add` and `search` methods) - Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions - Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)` - Distances should be L2 (Euclidean) or L2-squared - No need to handle dataset loading - evaluator provides numpy arrays Evaluation Process ------------------ The evaluator follows these steps: ### 1. Load Dataset ```python from faiss.contrib.datasets import DatasetSIFT1M ds = DatasetSIFT1M() xb = ds.get_database() # (1000000, 128) float32 xq = ds.get_queries() # (10000, 128) float32 gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices ``` ### 2. Build Index ```python from solution import YourIndexClass # Auto-discovered d = xb.shape[1] # 128 for SIFT1M index = YourIndexClass(d) # Pass dimension as first argument index.add(xb) # Add all 1M base vectors ``` ### 3. Measure Performance (Batch Queries) ```python import time t0 = time.time() D, I = index.search(xq, k=1) # Search all 10K queries at once t1 = time.time() # Calculate metrics recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq) avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq) ``` **Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache and vectorization, typically faster than single queries. ### 4. Calculate Score ```python if avg_query_time_ms > 7.7: score = 0.0 elif recall_at_1 >= 0.9914: score = 100.0 else: recall_range = 0.9914 - 0.9409 recall_proportion = (recall_at_1 - 0.9409) / recall_range score = max(0.0, min(100.0, 100.0 * recall_proportion)) ``` Dataset Details --------------- - **Name**: SIFT1M - **Base vectors**: 1,000,000 vectors of dimension 128 - **Query vectors**: 10,000 vectors - **Ground truth**: Precomputed nearest neighbors (k=1) - **Metric**: L2 (Euclidean distance) - **Vector type**: float32 Runtime Platform ---------------- - **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure) - **Compute**: CPU-only instances (no GPU required) - **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4 Constraints ----------- - **Timeout**: 1 hour for entire evaluation (index construction + queries) - **Memory**: Use reasonable memory (index should fit in RAM) - **Latency constraint**: avg_query_time_ms ≤ 7.7ms - **Recall range**: 0.9409 ≤ recall@1 ≤ 1.0 Strategy Tips ------------- 1. **Maximize recall**: Use 2× latency budget (7.7ms vs 5.775ms balanced) for more thorough search 2. **Batch optimization is key**: Your `search` should handle batch queries efficiently 3. **Parameter tuning for recall**: Higher HNSW efSearch (500-1000) or IVF nprobe (100-200) 4. 
**Trade latency for accuracy**: Unlike balanced tier, you can afford slower but more accurate search Example: Simple Baseline ------------------------- ```python import numpy as np class SimpleIndex: def __init__(self, dim: int, **kwargs): self.dim = dim self.xb = None def add(self, xb: np.ndarray) -> None: if self.xb is None: self.xb = xb.copy() else: self.xb = np.vstack([self.xb, xb]) def search(self, xq: np.ndarray, k: int) -> tuple: # Compute all pairwise L2 distances # xq: (nq, dim), xb: (N, dim) # distances: (nq, N) distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2)) # Get k nearest neighbors indices = np.argpartition(distances, k-1, axis=1)[:, :k] sorted_indices = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1) final_indices = indices[np.arange(len(xq))[:, None], sorted_indices] final_distances = distances[np.arange(len(xq))[:, None], final_indices] return final_distances, final_indices ``` **Note**: This baseline achieves perfect recall (100%) but is too slow for large datasets. Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs. Debugging Tips -------------- - **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration - **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays - **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)` - **Profile latency**: Measure batch vs single query performance separately - **Validate before submit**: Run full 1M dataset locally if possible
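With the relaxed 7.7 ms budget, an inverted-file index scanned with a generous `nprobe` is another plausible route. The FAISS IVF-Flat wrapper below is a hedged sketch; `nlist=1024` and `nprobe=64` are illustrative values that would need tuning on the actual evaluation hardware.

```python
import numpy as np
import faiss

class IVFFlatIndex:
    """FAISS IVF-Flat wrapper tuned toward recall (hedged, untuned parameters)."""

    def __init__(self, dim: int, nlist: int = 1024, nprobe: int = 64):
        quantizer = faiss.IndexFlatL2(dim)
        self.index = faiss.IndexIVFFlat(quantizer, dim, nlist, faiss.METRIC_L2)
        self.index.nprobe = nprobe  # number of clusters scanned per query

    def add(self, xb: np.ndarray) -> None:
        xb = np.ascontiguousarray(xb, dtype=np.float32)
        if not self.index.is_trained:
            self.index.train(xb)    # k-means clustering of the base vectors
        self.index.add(xb)

    def search(self, xq: np.ndarray, k: int):
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```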
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/low_latency
research
VDB Design Problem - Low Latency Tier ====================================== Problem Setting --------------- Design a Vector Database index optimized for **recall** subject to a **strict latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1. **Optimization Goal**: Maximize recall@1 within latency constraint $$ \text{score} = \begin{cases} 0 & \text{if } t_{\text{query}} > t_{\text{max}} \\ 100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\ 100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}} \end{cases} $$ Where: - $r$: Your recall@1 - $t_{\text{query}}$: Your average query latency (ms) - $r_{\text{baseline}} = 0.9914$ (baseline recall) - $r_{\text{min}} = 0.7931$ (minimum acceptable recall, 80% of baseline) - $t_{\text{max}} = 2.31\text{ms}$ (maximum allowed latency, 60% of baseline 3.85ms) **Key Insight**: This tier has a very strict latency constraint (60% of baseline), requiring aggressive approximation while maintaining reasonable recall. Baseline Performance -------------------- - Recall@1: **0.9914** (99.14%) - Avg query time: **3.85ms** - Baseline score: **100** (recall equals baseline within latency constraint) Scoring Examples ---------------- Assuming all solutions meet latency constraint ($t \leq 2.31\text{ms}$): | Recall@1 | Latency | Score Calculation | Score | |----------|---------|-------------------|-------| | 0.9914 | 2.00ms | $r = r_{\text{baseline}}$ → max score | **100** | | 0.9500 | 2.00ms | $\frac{0.95 - 0.7931}{0.9914 - 0.7931} = 0.791$ | **79.1** | | 0.9000 | 1.50ms | $\frac{0.90 - 0.7931}{0.9914 - 0.7931} = 0.539$ | **53.9** | | 0.8500 | 1.00ms | $\frac{0.85 - 0.7931}{0.9914 - 0.7931} = 0.287$ | **28.7** | | 0.7931 | 2.00ms | $r = r_{\text{min}}$ → minimum score | **0** | | 0.9500 | **2.50ms** | $t > t_{\text{max}}$ → latency gate fails | **0** | **Note**: The strict latency constraint (2.31ms vs 5.775ms in balanced) requires aggressive approximation, typically resulting in lower recall. API Specification ----------------- Implement a class with the following interface: ```python import numpy as np from typing import Tuple class YourIndexClass: def __init__(self, dim: int, **kwargs): """ Initialize the index for vectors of dimension `dim`. Args: dim: Vector dimensionality (e.g., 128 for SIFT1M) **kwargs: Optional parameters (e.g., M, ef_construction for HNSW) Example: index = YourIndexClass(dim=128, M=16, ef_search=80) """ pass def add(self, xb: np.ndarray) -> None: """ Add vectors to the index. Args: xb: Base vectors, shape (N, dim), dtype float32 Notes: - Can be called multiple times (cumulative) - Must handle large N (e.g., 1,000,000 vectors) Example: index.add(xb) # xb.shape = (1000000, 128) """ pass def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]: """ Search for k nearest neighbors of query vectors. 
Args: xq: Query vectors, shape (nq, dim), dtype float32 k: Number of nearest neighbors to return Returns: (distances, indices): - distances: shape (nq, k), dtype float32, L2 distances - indices: shape (nq, k), dtype int64, indices into base vectors Notes: - Must return exactly k neighbors per query - Indices should refer to positions in the vectors passed to add() - Lower distance = more similar Example: D, I = index.search(xq, k=1) # xq.shape = (10000, 128) # D.shape = (10000, 1), I.shape = (10000, 1) """ pass ``` **Implementation Requirements**: - Class can have any name (evaluator auto-discovers classes with `add` and `search` methods) - Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions - Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)` - Distances should be L2 (Euclidean) or L2-squared - No need to handle dataset loading - evaluator provides numpy arrays Evaluation Process ------------------ The evaluator follows these steps: ### 1. Load Dataset ```python from faiss.contrib.datasets import DatasetSIFT1M ds = DatasetSIFT1M() xb = ds.get_database() # (1000000, 128) float32 xq = ds.get_queries() # (10000, 128) float32 gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices ``` ### 2. Build Index ```python from solution import YourIndexClass # Auto-discovered d = xb.shape[1] # 128 for SIFT1M index = YourIndexClass(d) # Pass dimension as first argument index.add(xb) # Add all 1M base vectors ``` ### 3. Measure Performance (Batch Queries) ```python import time t0 = time.time() D, I = index.search(xq, k=1) # Search all 10K queries at once t1 = time.time() # Calculate metrics recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq) avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq) ``` **Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache and vectorization, typically faster than single queries. ### 4. Calculate Score ```python if avg_query_time_ms > 2.31: score = 0.0 elif recall_at_1 >= 0.9914: score = 100.0 else: recall_range = 0.9914 - 0.7931 recall_proportion = (recall_at_1 - 0.7931) / recall_range score = max(0.0, min(100.0, 100.0 * recall_proportion)) ``` Dataset Details --------------- - **Name**: SIFT1M - **Base vectors**: 1,000,000 vectors of dimension 128 - **Query vectors**: 10,000 vectors - **Ground truth**: Precomputed nearest neighbors (k=1) - **Metric**: L2 (Euclidean distance) - **Vector type**: float32 Runtime Platform ---------------- - **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure) - **Compute**: CPU-only instances (no GPU required) - **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4 Constraints ----------- - **Timeout**: 1 hour for entire evaluation (index construction + queries) - **Memory**: Use reasonable memory (index should fit in RAM) - **Latency constraint**: avg_query_time_ms ≤ 2.31ms - **Recall range**: 0.7931 ≤ recall@1 ≤ 1.0 Strategy Tips ------------- 1. **Aggressive approximation**: Use very low search budgets (IVF nprobe=2-5, HNSW efSearch=50-100) 2. **Batch optimization is key**: Your `search` should handle batch queries efficiently 3. **Accept recall drops**: 80-90% recall is acceptable if latency is met 4. 
**Leave safety margin**: Target 1.5-2.0ms to avoid edge cases exceeding 2.31ms Example: Simple Baseline ------------------------- ```python import numpy as np class SimpleIndex: def __init__(self, dim: int, **kwargs): self.dim = dim self.xb = None def add(self, xb: np.ndarray) -> None: if self.xb is None: self.xb = xb.copy() else: self.xb = np.vstack([self.xb, xb]) def search(self, xq: np.ndarray, k: int) -> tuple: # Compute all pairwise L2 distances # xq: (nq, dim), xb: (N, dim) # distances: (nq, N) distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2)) # Get k nearest neighbors indices = np.argpartition(distances, k-1, axis=1)[:, :k] sorted_indices = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1) final_indices = indices[np.arange(len(xq))[:, None], sorted_indices] final_distances = distances[np.arange(len(xq))[:, None], final_indices] return final_distances, final_indices ``` **Note**: This baseline achieves perfect recall (100%) but is too slow for large datasets. Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs. Debugging Tips -------------- - **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration - **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays - **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)` - **Profile latency**: Measure batch vs single query performance separately - **Validate before submit**: Run full 1M dataset locally if possible
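Because the 2.31 ms gate leaves little headroom, it helps to sweep the main query-time knob locally and keep only settings that stay under budget with some margin. The harness below is a generic, hedged helper: `apply_setting` is a caller-supplied function (for example, one that sets `nprobe` on an IVF index or `efSearch` on an HNSW index), so nothing here is specific to one index type.

```python
import time
import numpy as np

def sweep_query_knob(index, xq, gt, k, settings, budget_ms, apply_setting):
    """Try each query-time setting, report recall@1 and batch latency per query.

    `apply_setting(index, s)` mutates the index in place; which knob it changes
    depends on the index type, so this is a generic harness, not a recipe.
    """
    results = []
    for s in settings:
        apply_setting(index, s)
        t0 = time.time()
        _, I = index.search(xq, k)
        dt_ms = (time.time() - t0) * 1000.0 / len(xq)
        recall = float((I[:, :1] == gt[:, :1]).sum()) / len(xq)
        results.append((s, recall, dt_ms))
        print(f"setting={s} recall@1={recall:.4f} latency={dt_ms:.3f} ms")
    # Keep only settings under budget, then take the best recall among them.
    ok = [r for r in results if r[2] <= budget_ms]
    return max(ok, key=lambda r: r[1]) if ok else None

# Hypothetical usage with an IVF-style wrapper that exposes `index.nprobe`:
# best = sweep_query_knob(ivf, xq, gt, 1, [2, 4, 8, 16], 2.0,
#                         lambda ix, s: setattr(ix.index, "nprobe", s))
```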
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/recall80_latency
research
VDB Design Problem - Recall80 Latency Tier =========================================== Problem Setting --------------- Design a Vector Database index optimized for **latency** subject to a **recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero points, while solutions meeting the constraint are scored purely by latency. **Optimization Goal**: Minimize latency within recall constraint $$ \text{score} = \begin{cases} 0 & \text{if } r < r_{\text{gate}} \\ 100 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \leq t_{\text{min}} \\ 100 \cdot \frac{t_{\text{max}} - t_{\text{query}}}{t_{\text{max}} - t_{\text{min}}} & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{min}} < t_{\text{query}} < t_{\text{max}} \\ 0 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \geq t_{\text{max}} \end{cases} $$ Where: - $r$: Your recall@1 - $t_{\text{query}}$: Your average query latency (ms) - $r_{\text{gate}} = 0.80$ (minimum required recall) - $t_{\text{min}} = 0.0\text{ms}$ (best possible latency) - $t_{\text{max}} = 0.6\text{ms}$ (maximum allowed latency) **Key Insight**: Unlike other tiers, this tier gates on recall and scores on latency. You MUST achieve ≥80% recall, then faster is better. Baseline Performance -------------------- - Recall@1: **0.9914** (99.14%) - Avg query time: **3.85ms** Scoring Examples ---------------- All examples assume recall constraint is met ($r \geq 0.80$): | Recall@1 | Latency | Score Calculation | Score | |----------|---------|-------------------|-------| | 0.85 | 0.00ms | $t \leq t_{\text{min}}$ → max score | **100** | | 0.85 | 0.30ms | $\frac{0.6 - 0.3}{0.6 - 0.0} = 0.50$ | **50** | | 0.82 | 0.50ms | $\frac{0.6 - 0.5}{0.6 - 0.0} = 0.167$ | **16.7** | | 0.90 | 0.10ms | $\frac{0.6 - 0.1}{0.6 - 0.0} = 0.833$ | **83.3** | | **0.75** | 0.20ms | $r < r_{\text{gate}}$ → recall gate fails | **0** | | 0.95 | **0.70ms** | $t \geq t_{\text{max}}$ → latency too high | **0** | **Note**: This is the most aggressive latency requirement (0.6ms max). You must use extreme approximation while maintaining 80% recall. API Specification ----------------- Implement a class with the following interface: ```python import numpy as np from typing import Tuple class YourIndexClass: def __init__(self, dim: int, **kwargs): """ Initialize the index for vectors of dimension `dim`. Args: dim: Vector dimensionality (e.g., 128 for SIFT1M) **kwargs: Optional parameters (e.g., M, ef_construction for HNSW) Example: index = YourIndexClass(dim=128, nlist=256, nprobe=2) """ pass def add(self, xb: np.ndarray) -> None: """ Add vectors to the index. Args: xb: Base vectors, shape (N, dim), dtype float32 Notes: - Can be called multiple times (cumulative) - Must handle large N (e.g., 1,000,000 vectors) Example: index.add(xb) # xb.shape = (1000000, 128) """ pass def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]: """ Search for k nearest neighbors of query vectors. 
Args: xq: Query vectors, shape (nq, dim), dtype float32 k: Number of nearest neighbors to return Returns: (distances, indices): - distances: shape (nq, k), dtype float32, L2 distances - indices: shape (nq, k), dtype int64, indices into base vectors Notes: - Must return exactly k neighbors per query - Indices should refer to positions in the vectors passed to add() - Lower distance = more similar Example: D, I = index.search(xq, k=1) # xq.shape = (10000, 128) # D.shape = (10000, 1), I.shape = (10000, 1) """ pass ``` **Implementation Requirements**: - Class can have any name (evaluator auto-discovers classes with `add` and `search` methods) - Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions - Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)` - Distances should be L2 (Euclidean) or L2-squared - No need to handle dataset loading - evaluator provides numpy arrays Evaluation Process ------------------ The evaluator follows these steps: ### 1. Load Dataset ```python from faiss.contrib.datasets import DatasetSIFT1M ds = DatasetSIFT1M() xb = ds.get_database() # (1000000, 128) float32 xq = ds.get_queries() # (10000, 128) float32 gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices ``` ### 2. Build Index ```python from solution import YourIndexClass # Auto-discovered d = xb.shape[1] # 128 for SIFT1M index = YourIndexClass(d) # Pass dimension as first argument index.add(xb) # Add all 1M base vectors ``` ### 3. Measure Performance (Batch Queries) ```python import time t0 = time.time() D, I = index.search(xq, k=1) # Search all 10K queries at once t1 = time.time() # Calculate metrics recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq) avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq) ``` **Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache and vectorization, typically faster than single queries. ### 4. Calculate Score ```python if recall_at_1 < 0.80: score = 0.0 elif avg_query_time_ms <= 0.0: score = 100.0 elif avg_query_time_ms >= 0.6: score = 0.0 else: proportion = (avg_query_time_ms - 0.0) / (0.6 - 0.0) score = 100.0 * (1.0 - proportion) ``` Dataset Details --------------- - **Name**: SIFT1M - **Base vectors**: 1,000,000 vectors of dimension 128 - **Query vectors**: 10,000 vectors - **Ground truth**: Precomputed nearest neighbors (k=1) - **Metric**: L2 (Euclidean distance) - **Vector type**: float32 Runtime Platform ---------------- - **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure) - **Compute**: CPU-only instances (no GPU required) - **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4 Constraints ----------- - **Timeout**: 1 hour for entire evaluation (index construction + queries) - **Memory**: Use reasonable memory (index should fit in RAM) - **Recall constraint**: recall@1 ≥ 0.80 - **Latency range**: 0.0ms ≤ avg_query_time_ms ≤ 0.6ms Strategy Tips ------------- 1. **Meet recall gate first**: Ensure ≥80% recall, otherwise score = 0 2. **Extreme approximation**: Use minimal search budget (IVF nprobe=1-3) 3. **Batch optimization critical**: 0.6ms is extremely tight, every microsecond counts 4. 
**Trade recall for speed**: 80-85% recall with ultra-low latency is ideal Example: Simple Baseline ------------------------- ```python import numpy as np class SimpleIndex: def __init__(self, dim: int, **kwargs): self.dim = dim self.xb = None def add(self, xb: np.ndarray) -> None: if self.xb is None: self.xb = xb.copy() else: self.xb = np.vstack([self.xb, xb]) def search(self, xq: np.ndarray, k: int) -> tuple: # Compute all pairwise L2 distances # xq: (nq, dim), xb: (N, dim) # distances: (nq, N) distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2)) # Get k nearest neighbors indices = np.argpartition(distances, k-1, axis=1)[:, :k] sorted_indices = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1) final_indices = indices[np.arange(len(xq))[:, None], sorted_indices] final_distances = distances[np.arange(len(xq))[:, None], final_indices] return final_distances, final_indices ``` **Note**: This baseline achieves perfect recall (100%) but is too slow for large datasets. Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs. Debugging Tips -------------- - **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration - **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays - **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)` - **Profile latency**: Measure batch vs single query performance separately - **Validate before submit**: Run full 1M dataset locally if possible
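The tips above call for extreme approximation; one hedged option is IVF with 8-bit scalar quantization, which shrinks the stored codes and speeds up cluster scans at some recall cost. All parameters below (`nlist=4096`, `nprobe=3`) are illustrative guesses, and whether this clears both the 80% recall gate and the 0.6 ms budget must be verified locally.

```python
import numpy as np
import faiss

class IVFSQIndex:
    """IVF + 8-bit scalar quantization: smaller codes, faster scans (hedged sketch)."""

    def __init__(self, dim: int, nlist: int = 4096, nprobe: int = 3):
        quantizer = faiss.IndexFlatL2(dim)
        self.index = faiss.IndexIVFScalarQuantizer(
            quantizer, dim, nlist, faiss.ScalarQuantizer.QT_8bit, faiss.METRIC_L2
        )
        self.index.nprobe = nprobe  # only a few clusters scanned per query

    def add(self, xb: np.ndarray) -> None:
        xb = np.ascontiguousarray(xb, dtype=np.float32)
        if not self.index.is_trained:
            self.index.train(xb)
        self.index.add(xb)

    def search(self, xq: np.ndarray, k: int):
        return self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
```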
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vdb_pareto/recall95_latency
research
VDB Design Problem - Recall95 Latency Tier =========================================== Problem Setting --------------- Design a Vector Database index optimized for **latency** subject to a **high recall constraint**. This tier uses recall-gated scoring: solutions failing to meet the recall threshold receive zero points, while solutions meeting the constraint are scored purely by latency. **Optimization Goal**: Minimize latency within recall constraint $$ \text{score} = \begin{cases} 0 & \text{if } r < r_{\text{gate}} \\ 100 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \leq t_{\text{min}} \\ 100 \cdot \frac{t_{\text{max}} - t_{\text{query}}}{t_{\text{max}} - t_{\text{min}}} & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{min}} < t_{\text{query}} < t_{\text{max}} \\ 0 & \text{if } r \geq r_{\text{gate}} \text{ and } t_{\text{query}} \geq t_{\text{max}} \end{cases} $$ Where: - $r$: Your recall@1 - $t_{\text{query}}$: Your average query latency (ms) - $r_{\text{gate}} = 0.95$ (minimum required recall) - $t_{\text{min}} = 0.0\text{ms}$ (best possible latency) - $t_{\text{max}} = 7.7\text{ms}$ (maximum allowed latency) **Key Insight**: This tier requires high recall (95%), but provides generous latency budget (7.7ms). Focus on recall first, then optimize latency. Baseline Performance -------------------- - Recall@1: **0.9914** (99.14%) - Avg query time: **3.85ms** Scoring Examples ---------------- All examples assume recall constraint is met ($r \geq 0.95$): | Recall@1 | Latency | Score Calculation | Score | |----------|---------|-------------------|-------| | 0.96 | 0.00ms | $t \leq t_{\text{min}}$ → max score | **100** | | 0.96 | 3.85ms | $\frac{7.7 - 3.85}{7.7 - 0.0} = 0.50$ | **50.0** | | 0.97 | 5.00ms | $\frac{7.7 - 5.0}{7.7 - 0.0} = 0.351$ | **35.1** | | 0.98 | 2.00ms | $\frac{7.7 - 2.0}{7.7 - 0.0} = 0.740$ | **74.0** | | **0.94** | 2.00ms | $r < r_{\text{gate}}$ → recall gate fails | **0** | | 0.96 | **8.00ms** | $t \geq t_{\text{max}}$ → latency too high | **0** | **Note**: The 95% recall requirement is strict, but the 7.7ms latency budget is generous, allowing thorough search strategies. API Specification ----------------- Implement a class with the following interface: ```python import numpy as np from typing import Tuple class YourIndexClass: def __init__(self, dim: int, **kwargs): """ Initialize the index for vectors of dimension `dim`. Args: dim: Vector dimensionality (e.g., 128 for SIFT1M) **kwargs: Optional parameters (e.g., M, ef_construction for HNSW) Example: index = YourIndexClass(dim=128, M=64, ef_search=400) """ pass def add(self, xb: np.ndarray) -> None: """ Add vectors to the index. Args: xb: Base vectors, shape (N, dim), dtype float32 Notes: - Can be called multiple times (cumulative) - Must handle large N (e.g., 1,000,000 vectors) Example: index.add(xb) # xb.shape = (1000000, 128) """ pass def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]: """ Search for k nearest neighbors of query vectors. 
Args: xq: Query vectors, shape (nq, dim), dtype float32 k: Number of nearest neighbors to return Returns: (distances, indices): - distances: shape (nq, k), dtype float32, L2 distances - indices: shape (nq, k), dtype int64, indices into base vectors Notes: - Must return exactly k neighbors per query - Indices should refer to positions in the vectors passed to add() - Lower distance = more similar Example: D, I = index.search(xq, k=1) # xq.shape = (10000, 128) # D.shape = (10000, 1), I.shape = (10000, 1) """ pass ``` **Implementation Requirements**: - Class can have any name (evaluator auto-discovers classes with `add` and `search` methods) - Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions - Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)` - Distances should be L2 (Euclidean) or L2-squared - No need to handle dataset loading - evaluator provides numpy arrays Evaluation Process ------------------ The evaluator follows these steps: ### 1. Load Dataset ```python from faiss.contrib.datasets import DatasetSIFT1M ds = DatasetSIFT1M() xb = ds.get_database() # (1000000, 128) float32 xq = ds.get_queries() # (10000, 128) float32 gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices ``` ### 2. Build Index ```python from solution import YourIndexClass # Auto-discovered d = xb.shape[1] # 128 for SIFT1M index = YourIndexClass(d) # Pass dimension as first argument index.add(xb) # Add all 1M base vectors ``` ### 3. Measure Performance (Batch Queries) ```python import time t0 = time.time() D, I = index.search(xq, k=1) # Search all 10K queries at once t1 = time.time() # Calculate metrics recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq) avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq) ``` **Important**: `avg_query_time_ms` from **batch queries** is used for scoring. Batch queries benefit from CPU cache and vectorization, typically faster than single queries. ### 4. Calculate Score ```python if recall_at_1 < 0.95: score = 0.0 elif avg_query_time_ms <= 0.0: score = 100.0 elif avg_query_time_ms >= 7.7: score = 0.0 else: proportion = (avg_query_time_ms - 0.0) / (7.7 - 0.0) score = 100.0 * (1.0 - proportion) ``` Dataset Details --------------- - **Name**: SIFT1M - **Base vectors**: 1,000,000 vectors of dimension 128 - **Query vectors**: 10,000 vectors - **Ground truth**: Precomputed nearest neighbors (k=1) - **Metric**: L2 (Euclidean distance) - **Vector type**: float32 Runtime Platform ---------------- - **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure) - **Compute**: CPU-only instances (no GPU required) - **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4 Constraints ----------- - **Timeout**: 1 hour for entire evaluation (index construction + queries) - **Memory**: Use reasonable memory (index should fit in RAM) - **Recall constraint**: recall@1 ≥ 0.95 - **Latency range**: 0.0ms ≤ avg_query_time_ms ≤ 7.7ms Strategy Tips ------------- 1. **Meet recall gate first**: Ensure ≥95% recall, otherwise score = 0 2. **Use moderate approximation**: Higher recall requirement means less aggressive approximation 3. **Batch optimization is key**: Your `search` should handle batch queries efficiently 4. 
**Balance recall and latency**: Aim for 95-99% recall with 3-5ms latency Example: Simple Baseline ------------------------- ```python import numpy as np class SimpleIndex: def __init__(self, dim: int, **kwargs): self.dim = dim self.xb = None def add(self, xb: np.ndarray) -> None: if self.xb is None: self.xb = xb.copy() else: self.xb = np.vstack([self.xb, xb]) def search(self, xq: np.ndarray, k: int) -> tuple: # Compute all pairwise L2 distances # xq: (nq, dim), xb: (N, dim) # distances: (nq, N) distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2)) # Get k nearest neighbors indices = np.argpartition(distances, k-1, axis=1)[:, :k] sorted_indices = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1) final_indices = indices[np.arange(len(xq))[:, None], sorted_indices] final_distances = distances[np.arange(len(xq))[:, None], final_indices] return final_distances, final_indices ``` **Note**: This baseline achieves perfect recall (100%) but is too slow for large datasets. Use approximate methods like HNSW, IVF, or LSH for better speed-recall tradeoffs. Debugging Tips -------------- - **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration - **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays - **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)` - **Profile latency**: Measure batch vs single query performance separately - **Validate before submit**: Run full 1M dataset locally if possible
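Example: HNSW Index Sketch
--------------------------

The strategy tips above point toward approximate methods. The sketch below uses FAISS's HNSW index (FAISS-CPU is listed in the runtime environment); the parameter values `M=32`, `ef_construction=200`, and `ef_search=96` are illustrative assumptions rather than tuned settings, and would need to be validated against the 95% recall gate and the 7.7ms latency budget on the full SIFT1M dataset.

```python
import numpy as np
import faiss
from typing import Tuple

class HNSWIndex:
    """Sketch of an HNSW-backed index; parameter defaults are assumptions, not tuned values."""

    def __init__(self, dim: int, M: int = 32, ef_construction: int = 200,
                 ef_search: int = 96, **kwargs):
        self.index = faiss.IndexHNSWFlat(dim, M)           # L2 metric by default
        self.index.hnsw.efConstruction = ef_construction   # build-time beam width
        self.index.hnsw.efSearch = ef_search               # query-time beam width (recall/latency knob)

    def add(self, xb: np.ndarray) -> None:
        self.index.add(np.ascontiguousarray(xb, dtype=np.float32))

    def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
        D, I = self.index.search(np.ascontiguousarray(xq, dtype=np.float32), k)
        return D, I.astype(np.int64)
```

Raising `efSearch` trades latency for recall; whether a given setting clears the gate within the budget has to be verified empirically, as suggested in the debugging tips above.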
{ "dependencies": { "uv_project": "resources" }, "datasets": [ { "type": "local_tar", "path": "resources/sift.tar.gz", "target": "data/sift1M", "expected_glob": "*.fvecs" } ], "runtime": { "timeout_seconds": 3600 }, "tag": "db" }
vector_addition/2_20
research
Vector Addition Problem - Medium Vectors (2^20) ================================================ Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with medium vectors (1,048,576 elements). This problem focuses on implementing efficient element-wise addition for typical workloads. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of vector data - **Block sizing**: Optimal block sizes for GPU execution - **Memory bandwidth**: Maximizing throughput for simple arithmetic operations - **Performance benchmarking**: Achieving speedup over PyTorch baseline This variant tests performance on medium vectors (2^20 = 1,048,576 elements = 4 MB per vector). Target ------ - **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better) - **Secondary**: Ensure correctness - **Tertiary**: Minimize kernel launch overhead API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: """ Element-wise addition of two vectors. Args: x: Input tensor of shape (1048576,) y: Input tensor of shape (1048576,) Returns: Output tensor of shape (1048576,) with x + y """ pass ``` API Usage Notes --------------- - The evaluator looks for an `add` function in the module namespace - Function must handle vector size of exactly 1,048,576 elements - Must use Triton JIT compilation for kernel definition - Should optimize for memory bandwidth - Input tensors are guaranteed to be contiguous and same size Scoring (0-100) --------------- Performance is measured against CPU baseline and PyTorch GPU baseline: ``` target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0) score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100 Where: - custom_bandwidth = your solution's bandwidth - cpu_bandwidth = naive CPU baseline bandwidth - pytorch_bandwidth = PyTorch GPU baseline bandwidth - target = 2x PyTorch performance vs CPU (normalized to custom vs CPU) Score is clamped to [0, 100] range ``` - 0 points = CPU baseline performance (custom/cpu = 1x) - 50 points = Halfway between CPU baseline and 2x PyTorch performance - 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu) Evaluation Details ------------------ - Tested on vector size: 2^20 = 1,048,576 elements - Performance measured in GB/s (bandwidth) - Correctness verified with tolerance: rtol=1e-5, atol=1e-8 - Performance measured using median execution time across 5 samples - Requires CUDA backend and GPU support
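Example: Minimal Triton Kernel (sketch)
---------------------------------------

For orientation, a minimal sketch of the required `add` entry point backed by a Triton kernel is shown below. The `BLOCK_SIZE` of 1024 is an assumed starting point, not a tuned value; competitive submissions would typically sweep or autotune it.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                   # guard the final partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```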
dependencies: uv_project: resources tag: hpc runtime: environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)" docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true
vector_addition/2_24
research
Vector Addition Problem - Large Vectors (2^24) =============================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with large vectors (16,777,216 elements). This problem focuses on implementing efficient element-wise addition for high-throughput workloads. The challenge involves optimizing: - **Memory bandwidth**: Maximizing throughput for large vectors - **Memory access patterns**: Efficient loading and storing of vector data - **Block sizing**: Optimal block sizes for large vectors - **Performance benchmarking**: Achieving speedup over PyTorch baseline This variant tests performance on large vectors (2^24 = 16,777,216 elements = 64 MB per vector). Target ------ - **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better) - **Secondary**: Minimize kernel launch overhead - **Tertiary**: Ensure correctness API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: """ Element-wise addition of two vectors. Args: x: Input tensor of shape (16777216,) y: Input tensor of shape (16777216,) Returns: Output tensor of shape (16777216,) with x + y """ pass ``` API Usage Notes --------------- - The evaluator looks for an `add` function in the module namespace - Function must handle vector size of exactly 16,777,216 elements - Must use Triton JIT compilation for kernel definition - Should optimize for sustained memory bandwidth on large vectors - Input tensors are guaranteed to be contiguous and same size Scoring (0-100) --------------- Performance is measured against CPU baseline and PyTorch GPU baseline: ``` target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0) score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100 Where: - custom_bandwidth = your solution's bandwidth - cpu_bandwidth = naive CPU baseline bandwidth - pytorch_bandwidth = PyTorch GPU baseline bandwidth - target = 2x PyTorch performance vs CPU (normalized to custom vs CPU) Score is clamped to [0, 100] range ``` - 0 points = CPU baseline performance (custom/cpu = 1x) - 50 points = Halfway between CPU baseline and 2x PyTorch performance - 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu) Evaluation Details ------------------ - Tested on vector size: 2^24 = 16,777,216 elements - Performance measured in GB/s (bandwidth) - Correctness verified with tolerance: rtol=1e-5, atol=1e-8 - Performance measured using median execution time across 5 samples - Requires CUDA backend and GPU support
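Example: Local Bandwidth Check (sketch)
---------------------------------------

Since scoring is bandwidth-based, it helps to measure achieved GB/s locally before submitting. The sketch below uses `triton.testing.do_bench`; the factor of 3 counts the two input reads plus the output write, and `add_fn` stands for whatever implementation is being tested.

```python
import torch
import triton

def measure_bandwidth_gbps(add_fn, n: int = 2 ** 24) -> float:
    x = torch.rand(n, device="cuda", dtype=torch.float32)
    y = torch.rand(n, device="cuda", dtype=torch.float32)
    ms = triton.testing.do_bench(lambda: add_fn(x, y))  # time per call in milliseconds
    bytes_moved = 3 * n * x.element_size()              # read x, read y, write output
    return bytes_moved / (ms * 1e-3) / 1e9
```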
dependencies: uv_project: resources datasets: [] tag: hpc runtime: docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"
vector_addition/2_28
research
Vector Addition Problem - Very Large Vectors (2^28) ============================================== Problem Setting --------------- Design and optimize high-performance Triton kernels for vector addition on GPU with very large vectors (268,435,456 elements). This problem focuses on implementing efficient element-wise addition for maximum throughput scenarios. The challenge involves optimizing: - **Memory access patterns**: Efficient loading and storing of large vector data - **Block sizing**: Optimal block sizes for large GPU workloads - **Memory bandwidth**: Maximizing throughput at scale - **Performance benchmarking**: Achieving speedup over PyTorch baseline This variant tests performance on very large vectors (2^28 = 268,435,456 elements = 1 GB per vector). Requires ~3 GB GPU memory total. Target ------ - **Primary**: Maximize bandwidth (GB/s) over PyTorch baseline (higher is better) - **Secondary**: Ensure correctness on large vectors - **Tertiary**: Minimize memory overhead API Specification ----------------- Implement a `Solution` class that returns a Triton kernel implementation: ```python class Solution: def solve(self, spec_path: str = None) -> dict: """ Returns a dict with either: - {"code": "python_code_string"} - {"program_path": "path/to/kernel.py"} """ # Your implementation pass ``` Your kernel implementation must provide: ```python import torch import triton import triton.language as tl def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: """ Element-wise addition of two vectors. Args: x: Input tensor of shape (268435456,) y: Input tensor of shape (268435456,) Returns: Output tensor of shape (268435456,) with x + y """ pass ``` API Usage Notes --------------- - The evaluator looks for an `add` function in the module namespace - Function must handle vector size of exactly 268,435,456 elements - Must use Triton JIT compilation for kernel definition - Should optimize for maximum memory bandwidth at scale - Input tensors are guaranteed to be contiguous and same size - May cause OOM on GPUs with less than 3GB memory Scoring (0-100) --------------- Performance is measured against CPU baseline and PyTorch GPU baseline: ``` target = max(2.0 * (pytorch_bandwidth / cpu_bandwidth), 1.0) score = ((custom_bandwidth / cpu_bandwidth - 1.0) / (target - 1.0)) * 100 Where: - custom_bandwidth = your solution's bandwidth - cpu_bandwidth = naive CPU baseline bandwidth - pytorch_bandwidth = PyTorch GPU baseline bandwidth - target = 2x PyTorch performance vs CPU (normalized to custom vs CPU) Score is clamped to [0, 100] range ``` - 0 points = CPU baseline performance (custom/cpu = 1x) - 50 points = Halfway between CPU baseline and 2x PyTorch performance - 100 points = 2x PyTorch GPU performance vs CPU (custom/cpu = 2 * pytorch/cpu) Evaluation Details ------------------ - Tested on vector size: 2^28 = 268,435,456 elements - Performance measured in GB/s (bandwidth) - Correctness verified with tolerance: rtol=1e-5, atol=1e-8 - Performance measured using median execution time across 5 samples - Requires CUDA backend and GPU support - Requires sufficient GPU memory (may OOM on smaller GPUs)
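Example: Autotuned Kernel (sketch)
----------------------------------

At this size the kernel is purely memory-bound, so block size and warp count are the main levers. The sketch below shows one way to let Triton's autotuner pick them; the listed configs are illustrative assumptions, not a claim about which setting wins on the evaluation GPU.

```python
import torch
import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({"BLOCK_SIZE": 1024}, num_warps=4),
        triton.Config({"BLOCK_SIZE": 2048}, num_warps=8),
        triton.Config({"BLOCK_SIZE": 4096}, num_warps=8),
    ],
    key=["n_elements"],
)
@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements)  # BLOCK_SIZE supplied by the autotuner
    return out
```

Autotuning benchmarks each config on the first call for a given `n_elements`; because the evaluated size is fixed, that overhead is incurred once, though whether it is excluded from timing depends on the evaluator's warmup behavior.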
dependencies: uv_project: resources datasets: [] tag: hpc runtime: docker: image: andylizf/triton-tlx:tlx-nv-cu122 gpu: true environment: "Triton 3.2.0 with CUDA 12.2 (triton-tlx image)"