
Pinned

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 69.8k stars · 13.3k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.7k stars · 387 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 368 stars · 139 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 225 stars · 37 forks

  5. semantic-router (Public)

    System Level Intelligent Router for Mixture-of-Models at Cloud, Data Center and Edge

    Go · 3.2k stars · 529 forks
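
To make the vllm entry above concrete, here is a minimal offline-inference sketch using vLLM's Python API; the model name, prompts, and sampling settings are illustrative placeholders rather than recommendations.

    from vllm import LLM, SamplingParams

    # Placeholder model: any Hugging Face causal LM supported by vLLM works here.
    llm = LLM(model="facebook/opt-125m")

    # Placeholder sampling settings.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    prompts = [
        "The capital of France is",
        "vLLM is an inference engine that",
    ]

    # generate() batches the prompts through the engine and returns one
    # RequestOutput per prompt, each holding the generated completions.
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)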

Repositories

Showing 10 of 33 repositories
  • vllm-omni (Public)

    A framework for efficient model inference with omni-modality models

    Python · 2,660 stars · Apache-2.0 · 399 forks · 228 open issues (43 need help) · 144 open PRs · Updated Feb 8, 2026
  • vllm-daily (Public)

    vLLM Daily Summarization of Merged PRs

    39 stars · 3 forks · 0 open issues · 0 open PRs · Updated Feb 8, 2026
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (see the serving sketch after this list)

    Python · 69,794 stars · Apache-2.0 · 13,287 forks · 1,727 open issues (45 need help) · 1,583 open PRs · Updated Feb 8, 2026
  • tpu-inference (Public)

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 229 stars · Apache-2.0 · 96 forks · 32 open issues (1 needs help) · 122 open PRs · Updated Feb 8, 2026
  • router (Public)

    A high-performance, lightweight router for large-scale vLLM deployments

    Rust · 108 stars · Apache-2.0 · 36 forks · 6 open issues · 9 open PRs · Updated Feb 8, 2026
  • semantic-router (Public)

    System Level Intelligent Router for Mixture-of-Models at Cloud, Data Center and Edge

    Go · 3,168 stars · Apache-2.0 · 529 forks · 114 open issues (24 need help) · 56 open PRs · Updated Feb 8, 2026
  • flash-attention (Public, forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 112 stars · BSD-3-Clause · 2,372 forks · 0 open issues · 19 open PRs · Updated Feb 8, 2026
  • guidellm (Public)

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 843 stars · Apache-2.0 · 120 forks · 61 open issues (5 need help) · 19 open PRs · Updated Feb 8, 2026
  • ci-infra (Public)

    This repo hosts code for vLLM CI and performance benchmark infrastructure.

    HCL · 29 stars · Apache-2.0 · 57 forks · 0 open issues · 27 open PRs · Updated Feb 7, 2026
  • vllm-ascend (Public)

    Community-maintained hardware plugin for vLLM on Ascend

    C++ · 1,651 stars · Apache-2.0 · 811 forks · 982 open issues (8 need help) · 192 open PRs · Updated Feb 6, 2026
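
Several of the repositories above (vllm, router, guidellm) center on vLLM's OpenAI-compatible HTTP server. Below is a minimal client-side sketch, assuming a server is already running locally on the default port 8000 (for example, started with "vllm serve Qwen/Qwen2.5-1.5B-Instruct"); the model name, port, and prompt are placeholders.

    from openai import OpenAI

    # Point the standard OpenAI Python client at the local vLLM server's
    # OpenAI-compatible endpoint; the key is a dummy value, since vLLM does
    # not check it unless the server was started with an API key.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    completion = client.chat.completions.create(
        # Placeholder: must match the model name the server was launched with.
        model="Qwen/Qwen2.5-1.5B-Instruct",
        messages=[{"role": "user", "content": "Summarize what vLLM does in one sentence."}],
        max_tokens=64,
    )

    print(completion.choices[0].message.content)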