
NVIDIA Corporation

Pinned

  1. cuopt (Public)

    GPU-accelerated decision optimization

    CUDA · 794 stars · 152 forks

  2. cuopt-examples (Public)

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 429 stars · 74 forks

  3. open-gpu-kernel-modules (Public)

    NVIDIA Linux open GPU kernel module source

    C · 16.8k stars · 1.6k forks

  4. aistore (Public)

    AIStore: scalable storage for AI applications

    Go · 1.8k stars · 244 forks

  5. nvidia-container-toolkit (Public)

    Build and run containers leveraging NVIDIA GPUs

    Go · 4.2k stars · 502 forks

  6. GenerativeAIExamples (Public)

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.9k stars · 1k forks
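The nvidia-container-toolkit item above can be exercised with the toolkit's standard smoke test. This is a sketch assuming Docker and the NVIDIA Container Toolkit are already installed on a GPU host; the CUDA image tag is an illustrative choice, not prescribed by the repository.

```shell
# One-time setup: register the NVIDIA runtime with Docker
# (a documented toolkit step), then restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: expose all GPUs to a CUDA base image and query them.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the setup is correct, `nvidia-smi` inside the container lists the host's GPUs.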

Repositories

Showing 10 of 708 repositories
  • OpenShell (Public)

    OpenShell is the safe, private runtime for autonomous AI agents.

    Rust · 4,183 stars · Apache-2.0 · 423 forks · 51 issues · 21 PRs · Updated Apr 1, 2026
  • ncx-infra-controller-core (Public)

    NCX Infra Controller: hardware lifecycle management and multitenant networking

    Rust · 111 stars · Apache-2.0 · 71 forks · 122 issues (4 need help) · 50 PRs · Updated Apr 1, 2026
  • cuda-quantum (Public)

    C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows

    C++ · 975 stars · Apache-2.0 · 354 forks · 433 issues (16 need help) · 110 PRs · Updated Apr 1, 2026
  • dgx-spark-playbooks (Public)

    Collection of step-by-step playbooks for setting up AI/ML workloads on NVIDIA DGX Spark devices with Blackwell architecture.

    Jupyter Notebook · 649 stars · Apache-2.0 · 167 forks · 26 issues · 17 PRs · Updated Apr 1, 2026
  • TensorRT-LLM (Public)

    TensorRT LLM provides an easy-to-use Python API for defining Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for building Python and C++ runtimes that orchestrate inference execution performantly.

    Python · 13,235 stars · 2,235 forks · 565 issues · 665 PRs · Updated Apr 1, 2026
  • NeMo-Agent-Toolkit (Public)

    The NVIDIA NeMo Agent toolkit is an open-source library for efficiently connecting and optimizing teams of AI agents.

    Python · 2,128 stars · Apache-2.0 · 592 forks · 22 issues · 20 PRs · Updated Apr 1, 2026
  • doca-platform (Public)

    DOCA Platform manages provisioning and service orchestration for BlueField DPUs.

    Go · 82 stars · Apache-2.0 · 21 forks · 0 issues · 1 PR · Updated Apr 1, 2026
  • nv-rms-client (Public)

    Rust client crate for the NVIDIA Rack Management Service

    Rust · 4 stars · Apache-2.0 · 4 forks · 0 issues · 3 PRs · Updated Apr 1, 2026
  • NemoClaw (Public)

    Run OpenClaw more securely inside NVIDIA OpenShell with managed inference

    JavaScript · 17,996 stars · Apache-2.0 · 2,100 forks · 250 issues (1 needs help) · 212 PRs · Updated Apr 1, 2026
  • NVSentinel (Public)

    NVSentinel is a cross-platform fault-remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated computing environments.

    Go · 242 stars · Apache-2.0 · 59 forks · 35 issues · 23 PRs · Updated Apr 1, 2026
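The TensorRT-LLM entry above describes a Python API for defining LLMs. A minimal sketch of that high-level `LLM` API follows, assuming a working `tensorrt_llm` installation on an NVIDIA GPU; the checkpoint name and sampling settings are illustrative, and the import is deferred so the sketch reads without the library present.

```python
def generate_greeting(model_id: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> str:
    """Build an engine from a Hugging Face checkpoint and run one prompt.

    Requires `pip install tensorrt-llm` and an NVIDIA GPU; the model id
    here is an illustrative choice, not mandated by the repository.
    """
    # Deferred import: tensorrt_llm is heavy and GPU-only.
    from tensorrt_llm import LLM, SamplingParams

    llm = LLM(model=model_id)               # compiles/loads a TensorRT engine
    params = SamplingParams(max_tokens=32)  # cap the generated length
    outputs = llm.generate(["Hello, my name is"], params)
    return outputs[0].outputs[0].text       # text of the first completion

# Example usage on a GPU machine (not run here):
#   print(generate_greeting())
```

The `LLM`/`SamplingParams` pattern mirrors the high-level workflow the repository description advertises: define the model in Python, let the library handle engine building and optimized execution.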
