NVIDIA NeMo Framework

NeMo Framework is NVIDIA's GPU-accelerated, end-to-end training framework for large language models (LLMs), multimodal models, and speech models. It enables seamless scaling of training workloads, both pretraining and post-training, from a single GPU to thousand-node clusters for 🤗 Hugging Face, Megatron, and PyTorch models.

This GitHub organization hosts repositories for NeMo's core components and integrations.

Documentation

To learn more about NVIDIA NeMo Framework and all of its component libraries, please refer to the NeMo Framework User Guide, which includes a quick-start guide, tutorials, model-specific recipes, best-practice guides, and performance benchmarks.

License

Apache 2.0 licensed with third-party attributions documented in each repository.

Popular repositories

  1. Curator

     Scalable data pre-processing and curation toolkit for LLMs (see the usage sketch after this list).

  2. RL

     Scalable toolkit for efficient model reinforcement learning.

  3. Run

     A tool to configure, launch, and manage your machine learning experiments (see the launch sketch after this list).

  4. Automodel

     Fine-tune any Hugging Face LLM or VLM on day 0 using PyTorch-native features for GPU-accelerated distributed training with superior performance and memory efficiency.

  5. Megatron-Bridge

     Training library for Megatron-based models.

  6. Export-Deploy

     A library for exporting NeMo and Hugging Face models to optimized inference backends and deploying them for efficient querying.
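The Curator entry above references this sketch: a minimal document-filtering pipeline in the style of NeMo Curator's documented API. The module paths and class names used here (DocumentDataset, ScoreFilter, WordCountFilter) are taken from the project's docs but should be treated as assumptions, since they may differ across versions.

```python
# A hedged sketch of a Curator-style filtering pipeline; class and module
# names follow NeMo Curator's documented patterns but may vary by version.
from nemo_curator import ScoreFilter
from nemo_curator.datasets import DocumentDataset
from nemo_curator.filters import WordCountFilter

# Load a JSONL corpus where each record has a "text" field.
dataset = DocumentDataset.read_json("raw_corpus.jsonl")

# Score each document by word count and keep those with at least 80 words.
filter_step = ScoreFilter(
    WordCountFilter(min_words=80),
    text_field="text",
    score_field="word_count",
)
filtered = filter_step(dataset)

# Write the curated subset back out as JSONL.
filtered.to_json("curated_corpus/")
```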
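Likewise, the Run entry above points to this sketch: configuring and launching an experiment with NeMo-Run. The run.Partial / run.LocalExecutor / run.run pattern follows the project's quick-start examples, but the exact signatures are assumptions and may vary between releases.

```python
# A hedged sketch of launching an experiment with NeMo-Run; the API shown
# follows the project's quick start and may differ in newer releases.
import nemo_run as run


def train(lr: float = 3e-4, steps: int = 10) -> None:
    # Placeholder training loop; a real script would build and fit a model.
    for _ in range(steps):
        pass
    print(f"finished {steps} steps at lr={lr}")


if __name__ == "__main__":
    # Wrap the function into a configurable task, then launch it locally.
    # Swapping LocalExecutor for a cluster executor (e.g. Slurm) is how the
    # same experiment scales from a workstation to a multi-node cluster.
    task = run.Partial(train, lr=1e-4, steps=100)
    run.run(task, executor=run.LocalExecutor())
```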
