Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models

arXiv preprint: 2305.10120 · License: MIT · Venue: NeurIPS 2023


Figure 1: Qualitative results of our method, Selective Amnesia (SA). SA can be applied to a variety of models, from forgetting textual prompts such as specific celebrities or nudity in text-to-image models to discrete classes in VAEs and diffusion models (DDPM).

This is the official code repository for the NeurIPS 2023 Spotlight paper Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models.

The code is split into three subfolders, one each for VAE, DDPM and Stable Diffusion experiments. Detailed instructions are included in the respective subfolders.
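At a high level, Selective Amnesia steers the concept to be forgotten toward a surrogate distribution while an Elastic Weight Consolidation (EWC) penalty and generative replay preserve everything else. The snippet below is only a rough sketch of that three-term objective for orientation; the function and argument names (`selective_amnesia_loss`, `fisher_diag`, etc.) are hypothetical and are not part of this repository's API. The actual training code lives in the subfolders above.

```python
# A minimal, illustrative sketch (not this repository's API) of a Selective
# Amnesia-style objective: fit the concept to be forgotten to a surrogate
# distribution, while an EWC penalty and generative replay preserve the
# concepts we want to keep. All names below are placeholders.
import torch


def selective_amnesia_loss(nll_forget_surrogate: torch.Tensor,
                           nll_replay: torch.Tensor,
                           params: list[torch.Tensor],
                           params_star: list[torch.Tensor],
                           fisher_diag: list[torch.Tensor],
                           lam: float = 1.0) -> torch.Tensor:
    # nll_forget_surrogate: NLL of surrogate samples under the concept to forget.
    # nll_replay:           NLL of replayed samples under the remembered concepts.
    # params / params_star: current and original (frozen) model parameters.
    # fisher_diag:          diagonal Fisher information estimated at params_star.
    ewc_penalty = sum(
        (f * (p - p0) ** 2).sum()
        for p, p0, f in zip(params, params_star, fisher_diag)
    )
    return nll_forget_surrogate + lam * ewc_penalty + nll_replay
```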

Contact

If you have any questions regarding the code or the paper, please email Alvin.

BibTeX

If you find this repository or the ideas presented in our paper useful, please consider citing:

@article{heng2023selective,
  title={Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models},
  author={Heng, Alvin and Soh, Harold},
  journal={arXiv preprint arXiv:2305.10120},
  year={2023}
}