Synthetic Data Generation (SDG)


Python library for Synthetic Data Generation

Introduction

Synthetic Data Generation (SDG) is a process that creates an artificially generated dataset that mimics real data based on provided examples. SDG uses a YAML file containing question-and-answer pairs as input data.

Installing the SDG library

Clone the library and navigate to the repo:

git clone https://github.com/instructlab/sdg
cd sdg

Install the library:

pip install .

Using the library

You can import the following items from the SDG library in your Python code:

 from instructlab.sdg.generate_data import generate_data
 from instructlab.sdg.utils import GenerateException
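
As a minimal sketch of how these two imports are typically combined, assuming GenerateException is the error type raised on generation failures. The keyword arguments passed to generate_data below, and the helper function itself, are illustrative placeholders rather than the library's documented signature; check the generate_data docstring for the real parameters.

```python
# Hedged sketch: the import guard lets this run even where the library
# is not installed; the generate_data arguments are assumptions.
try:
    from instructlab.sdg.generate_data import generate_data
    from instructlab.sdg.utils import GenerateException
    SDG_AVAILABLE = True
except ImportError:  # instructlab-sdg not installed in this environment
    SDG_AVAILABLE = False

def run_generation(output_dir: str, taxonomy_path: str) -> bool:
    """Call generate_data, translating a GenerateException into False.

    Both parameter names here are hypothetical placeholders; consult
    the library's docstring for the actual keyword arguments.
    """
    if not SDG_AVAILABLE:
        return False
    try:
        # Hypothetical call shape, not the documented signature:
        generate_data(output_dir=output_dir, taxonomy=taxonomy_path)
        return True
    except GenerateException as exc:
        print(f"generation failed: {exc}")
        return False
```

Wrapping the call this way keeps generation failures (bad taxonomy input, unreachable teacher model, and so on) distinct from programming errors, which propagate normally.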

Pipelines

A pipeline is a series of steps to execute in order to generate data.

There are three default pipelines shipped in SDG: simple, full, and eval. Each pipeline has its own hardware requirements.

Simple Pipeline

The simple pipeline is designed to be used with quantized Merlinite as the teacher model. It enables basic data generation on low-end consumer-grade hardware, such as laptops and desktops with small or no discrete GPUs.

Full Pipeline

The full pipeline is designed to be used with Mixtral-8x7B-Instruct-v0.1 as the teacher model, but it has also been successfully tested with smaller models such as Mistral-7B-Instruct-v0.2, and even with some quantized versions of the two teacher models. This is the preferred data generation pipeline on higher-end consumer-grade hardware and on all enterprise hardware.

Eval Pipeline

The eval pipeline is used to generate MMLU benchmark data that can be used to later evaluate a trained model on your knowledge dataset. It does not generate data for use during model training.

Pipeline architecture

All the pipelines are written in a YAML format and must adhere to a specific schema.

The pipelines that generate data for model training (the simple and full pipelines) each consist of three pipeline configs: one for knowledge, one for grounded skills, and one for freeform skills. These are expected to exist in files named knowledge.yaml, grounded_skills.yaml, and freeform_skills.yaml, respectively. For background on the differences between knowledge, grounded skills, and freeform skills, refer to the InstructLab Taxonomy repository.
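
As a hypothetical illustration of the general shape of one of these configs, the block names, types, and fields below are assumptions rather than the library's documented schema; consult the pipeline YAML files shipped with SDG for the real structure it validates against.

```yaml
# Hypothetical sketch of a pipeline config such as freeform_skills.yaml.
# Field names here are illustrative assumptions, not the actual schema.
version: "1.0"
blocks:
  - name: gen_freeform_questions   # first step: generate candidate questions
    type: LLMBlock                 # block type is an assumption
    config:
      output_cols:
        - question
  - name: gen_responses            # second step: answer the questions above
    type: LLMBlock
    config:
      output_cols:
        - response
```

The key idea the schema captures is an ordered list of blocks, where each block's output columns feed the blocks that follow it.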

Repository structure

|-- src/instructlab/ (1)
|-- docs/ (2)
|-- scripts/ (3)
|-- tests/ (4)
  1. Contains the SDG code that interacts with InstructLab.
  2. Contains documentation on various SDG methodologies.
3. Contains utility scripts that are not part of any supported API.
  4. Contains all the tests for the SDG repository.
