Video by PyTorch via YouTube

Why WideEP Inference Needs Data-Parallel-Aware Scheduling – Maroon Ayoub, IBM; Tyler Michael Smith, Red Hat
WideEP (wide expert parallelism) fails not because experts are expensive, but because routing ignores where state already lives. In PyTorch LLM serving with vLLM, WideEP fans tokens out across many experts while KV caches accumulate unevenly across data-parallel replicas. When routing is unaware of KV placement and per-replica load, requests land on replicas that cannot reuse cache or make progress efficiently, and latency spikes as expert fan-out grows.
The fix is not reshaping expert parallelism, but making routing data-parallel aware using signals vLLM already exposes. In this talk, we show how llm-d extends its router to leverage KV-cache locality and load awareness when routing WideEP flows. Rather than treating replicas as interchangeable, the router prefers replicas with warm KV state and available capacity, aligning routing decisions with vLLM’s execution reality and reducing cache fragmentation.
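The preference described above — warm KV state weighed against available capacity — can be sketched as a simple replica-scoring rule. This is an illustrative toy, not llm-d's actual router API; the `Replica` fields, weights, and function names are all assumptions standing in for the KV-locality and load signals vLLM exposes.

```python
# Hypothetical sketch of KV- and load-aware replica selection.
# All names here are illustrative, not llm-d's real interfaces.
from dataclasses import dataclass


@dataclass
class Replica:
    name: str
    warm_prefix_blocks: int  # KV-cache blocks already matching the request prefix
    queue_depth: int         # in-flight requests, a proxy for load


def score(replica: Replica, kv_weight: float = 1.0, load_weight: float = 0.5) -> float:
    # More warm-cache overlap raises the score; a deeper queue lowers it.
    return kv_weight * replica.warm_prefix_blocks - load_weight * replica.queue_depth


def pick_replica(replicas: list[Replica]) -> Replica:
    # Treat replicas as non-interchangeable: route to the best-scoring one.
    return max(replicas, key=score)


replicas = [
    Replica("dp0", warm_prefix_blocks=12, queue_depth=8),  # warm but overloaded
    Replica("dp1", warm_prefix_blocks=0, queue_depth=1),   # idle but cold
    Replica("dp2", warm_prefix_blocks=12, queue_depth=2),  # warm and lightly loaded
]
print(pick_replica(replicas).name)  # dp2
```

A KV-oblivious round-robin router would spread these requests evenly, forcing cold replicas to recompute prefixes; scoring both signals together sends the request where cache reuse and capacity coexist.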
This session walks through how KV-aware, data-parallel routing changes WideEP inference in practice: which signals matter, how routing behavior evolves, and where the gains come from. Attendees leave with a clear mental model for when KV- and load-aware routing unlocks higher throughput.