Filtered Direct Preference Optimization

tl;dr

This repository introduces Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding preference samples whose quality is lower than that of responses generated by the learning model.
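The conceptual sketch below illustrates this filtering criterion only; it is not the repository's implementation, and the names (preference_dataset, policy, reward_model, generate, score) are illustrative assumptions, not the actual API.

def filter_preference_data(preference_dataset, policy, reward_model):
    """Keep only examples whose chosen response beats the learning model's own sample."""
    kept = []
    for example in preference_dataset:
        # Sample a response from the current learning model for the same prompt.
        generated = policy.generate(example["prompt"])
        chosen_score = reward_model.score(example["prompt"], example["chosen"])
        generated_score = reward_model.score(example["prompt"], generated)
        # Discard the example if its chosen response is of lower quality than
        # the model's own generation, as judged by the reward model.
        if chosen_score >= generated_score:
            kept.append(example)
    return kept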

Prerequisites

  • Poetry (used for installation below)
  • direnv (used to load the .env configuration)
  • A Hugging Face account and API token

Get Started

To set up your local environment, start by copying the example environment file:

cp .env.example .env

Next, you need to edit the .env file to include your Hugging Face API token. Replace the placeholder value with your actual token:

HF_HUB_TOKEN="your_hugging_face_token_here"

If you do not already have a Hugging Face account or API token, create an account on Hugging Face and generate an API token from your account settings.

Once your .env file is set up, apply the configuration to your environment using direnv:

direnv allow .
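
Optionally, you can confirm that direnv exported the token into your shell with a one-line Python check (the variable name matches the .env entry above):

python -c "import os; print('HF_HUB_TOKEN is set' if os.getenv('HF_HUB_TOKEN') else 'HF_HUB_TOKEN is not set')"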

Installation

poetry install

Obtain Access to Datasets and Models

To use the datasets and models listed below, you must apply for access privileges on their respective Hugging Face repository pages. Follow the links provided and, on each page, click the “Apply” button to submit your access request. This process ensures compliance with the data usage policies and intellectual property rights associated with each resource. A short snippet for verifying that access has been granted follows the list below.

  • Dataset - Follow this link to apply for access to the dataset.
  • Model - Follow this link to apply for access to the model.
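
Once access has been granted, you can optionally verify that your token can see the gated repositories. This is a sketch using the huggingface_hub client; the repository IDs are placeholders to be replaced with the dataset and model linked above.

import os
from huggingface_hub import HfApi

# Assumes HF_HUB_TOKEN was loaded into the environment by direnv (see above).
api = HfApi(token=os.environ["HF_HUB_TOKEN"])
print(api.whoami()["name"])  # confirms the token is valid

# Replace the placeholders with the gated repositories linked above.
api.dataset_info("ORGANIZATION/DATASET_NAME")  # raises an error if access has not been granted
api.model_info("ORGANIZATION/MODEL_NAME")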

Usage

Test training

Execution time is about an hour in the notebook.

bash scripts/test.sh 

Train 160m model

Execution time is several hours on an A100 80G.

# $seed in {1, 2, 3}
seed=1
bash scripts/160m/fdpo_mix.sh ${seed}

Train 1.4b model

Execution time is about a day on an A100 80G.

# $seed in {1, 2, 3}
seed=1
bash scripts/1.4b/fdpo_mix.sh ${seed}

Checking Experimental Results

Verification of experiment logs and creation of reports follow the standard conventions of the Hugging Face Transformers library.

Also, a notebook for reproducing Figure 6 in our paper is provided in the notebook directory.
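
For example, assuming the training scripts log through the Transformers Trainer, the recorded metrics can be read from the trainer_state.json file in a run's output directory (the path below is a placeholder):

import json

# Placeholder path; point this at the output directory produced by a training run.
with open("outputs/your_run/trainer_state.json") as f:
    state = json.load(f)

# log_history is the standard Transformers Trainer log: one dict per logging step.
for record in state["log_history"]:
    print(record)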

Reference

Morimura, T., Sakamoto, M., Jinnai, Y., Abe, K., and Ariu, K., Filtered Direct Preference Optimization. EMNLP, 2024.

Bibtex:

@inproceedings{morimura-etal-2024-filtered,
    title = "Filtered Direct Preference Optimization",
    author = "Morimura, Tetsuro  and
      Sakamoto, Mitsuki  and
      Jinnai, Yuu  and
      Abe, Kenshi  and
      Ariu, Kaito",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.1266",
    pages = "22729--22770",
}
