
Daily Papers

by AK and the research community

Dec 12

SuffixDecoding: Extreme Speculative Decoding for Emerging AI Applications

Speculative decoding is widely adopted to reduce latency in large language model (LLM) inference by leveraging smaller draft models capable of handling diverse user tasks. However, emerging AI applications, such as LLM-based agents, present unique workload characteristics: instead of diverse independent requests, agentic frameworks typically submit repetitive inference requests, such as multi-agent pipelines performing similar subtasks or self-refinement loops iteratively enhancing outputs. These workloads result in long and highly predictable sequences, which current speculative decoding methods do not effectively exploit. To address this gap, we introduce SuffixDecoding, a novel method that utilizes efficient suffix trees to cache long token sequences from prompts and previous outputs. By adaptively speculating more tokens when acceptance likelihood is high and fewer when it is low, SuffixDecoding effectively exploits opportunities for longer speculations while conserving computation when those opportunities are limited. Evaluations on agentic benchmarks, including SWE-Bench and Text-to-SQL, demonstrate that SuffixDecoding achieves speedups of up to 5.3×, outperforming state-of-the-art methods: 2.8× faster than model-based approaches like EAGLE-2/3 and 1.9× faster than model-free approaches such as Token Recycling. SuffixDecoding is open-sourced at https://github.com/snowflakedb/ArcticInference.

  • 4 authors · Nov 7, 2024
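The abstract describes caching token sequences from prompts and prior outputs, then adaptively speculating more tokens when acceptance is likely. A minimal Python sketch of that idea, with hypothetical class and parameter names: it uses a flat pattern table rather than the paper's suffix tree, and empirical next-token frequency as a stand-in for acceptance likelihood, so it illustrates the mechanism rather than the ArcticInference implementation.

```python
from collections import defaultdict

class SuffixSpeculator:
    """Sketch of suffix-based draft-token speculation (hypothetical API)."""

    def __init__(self, max_pattern_len=8):
        self.max_pattern_len = max_pattern_len
        # Map each observed token pattern to counts of the tokens that follow it.
        self.continuations = defaultdict(lambda: defaultdict(int))

    def observe(self, tokens):
        """Index every short suffix of a prompt or previously generated output."""
        for i in range(len(tokens)):
            for plen in range(1, self.max_pattern_len + 1):
                if i + plen >= len(tokens):
                    break
                pattern = tuple(tokens[i:i + plen])
                self.continuations[pattern][tokens[i + plen]] += 1

    def speculate(self, context, max_tokens=16, min_confidence=0.5):
        """Greedily extend the context; stop as soon as the empirical
        next-token confidence drops below min_confidence, so the
        speculation length adapts to how predictable the sequence is."""
        draft = []
        ctx = list(context)
        for _ in range(max_tokens):
            best = None
            # Prefer the longest suffix of the context that we have seen before.
            for plen in range(min(self.max_pattern_len, len(ctx)), 0, -1):
                counts = self.continuations.get(tuple(ctx[-plen:]))
                if counts:
                    total = sum(counts.values())
                    tok, cnt = max(counts.items(), key=lambda kv: kv[1])
                    if cnt / total >= min_confidence:
                        best = tok
                    break
            if best is None:
                break
            draft.append(best)
            ctx.append(best)
        return draft
```

On a repetitive agentic workload, previously seen sequences yield long high-confidence drafts, while unseen contexts produce short or empty drafts, which is the adaptive behavior the abstract describes.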

FLARES IX: The Physical Mechanisms Driving Compact Galaxy Formation and Evolution

In the FLARES (First Light And Reionisation Epoch Simulations) suite of hydrodynamical simulations, we find the high redshift (z>5) intrinsic size-luminosity relation is, surprisingly, negatively sloped. However, after including the effects of dust attenuation we find a positively sloped UV observed size-luminosity relation in good agreement with other simulated and observational studies. In this work, we extend this analysis to probe the underlying physical mechanisms driving the formation and evolution of the compact galaxies driving the negative size-mass/size-luminosity relation. We find the majority of compact galaxies (R_{1/2, star} < 1 pkpc), which drive the negative slope of the size-mass relation, have transitioned from extended to compact sizes via efficient centralised cooling, resulting in high specific star formation rates in their cores. These compact stellar systems are enshrouded by non-star forming gas distributions as much as 100× larger than their stellar counterparts. By comparing with galaxies from the EAGLE simulation suite, we find that these extended gas distributions 'turn on' and begin to form stars between z=5 and z=0, leading to increasing sizes, and thus the evolution of the size-mass relation from a negative to a positive slope. This explicitly demonstrates the process of inside-out galaxy formation in which compact bulges form earlier than the surrounding discs.

  • 9 authors · Jan 12, 2023

SpecVLM: Fast Speculative Decoding in Vision-Language Models

Speculative decoding is a powerful way to accelerate autoregressive large language models (LLMs), but directly porting it to vision-language models (VLMs) faces unique systems constraints: the prefill stage is dominated by visual tokens whose count scales with image resolution and video length, inflating both compute and memory, especially the key-value (KV) cache. We study speculative decoding for VLMs and introduce SpecVLM, a practical system that (1) establishes a strong EAGLE-2-style baseline, EagleVLM, delivering 1.5–2.3× end-to-end speedups over full autoregressive inference, and (2) further accelerates VLM inference with an elastic visual compressor that adaptively selects among pruning, pooling, convolution, and resampler primitives to balance FLOPs/parameters and accuracy per input. To avoid costly offline distillation corpora, we propose an online-logit distillation protocol that trains the draft model with on-the-fly teacher logits and penultimate features using a combined cross-entropy and Smooth L1 objective, eliminating storage and preprocessing while remaining compute-efficient. This protocol reveals a training-time scaling effect: longer online training monotonically increases the draft model's average accepted length, improving speculative efficiency. Empirically, SpecVLM achieves additional acceleration, culminating in 2.5–2.9× end-to-end speedups within 5 epochs across LLaVA and MMMU, consistently across resolutions and task difficulties, while preserving the target model's output distribution (lossless decoding). Our code is available at https://github.com/haiduo/SpecVLM.

  • 7 authors · Sep 15
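The SpecVLM abstract describes an online-logit distillation objective that combines cross-entropy on teacher logits with Smooth L1 on penultimate features. A minimal plain-Python sketch of such a combined loss, where the `alpha` weighting and per-element mean reduction are assumptions for illustration, not the paper's exact formulation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(teacher_logits, draft_logits):
    """Soft cross-entropy between teacher and draft token distributions."""
    p = softmax(teacher_logits)
    q = softmax(draft_logits)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

def smooth_l1(teacher_feats, draft_feats, beta=1.0):
    """Huber-style Smooth L1 over penultimate features: quadratic for
    small errors (|diff| < beta), linear for large ones."""
    total = 0.0
    for t, d in zip(teacher_feats, draft_feats):
        diff = abs(t - d)
        total += 0.5 * diff * diff / beta if diff < beta else diff - 0.5 * beta
    return total / len(teacher_feats)

def online_distillation_loss(teacher_logits, draft_logits,
                             teacher_feats, draft_feats, alpha=1.0):
    """Combined objective on on-the-fly teacher signals; no offline corpus
    is needed because both terms use the teacher's current outputs."""
    return cross_entropy(teacher_logits, draft_logits) + \
        alpha * smooth_l1(teacher_feats, draft_feats)
```

Both terms are computed from the teacher's live forward pass, which is what lets the protocol skip storing and preprocessing a distillation corpus.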