- Safety Alignment Backfires: Preventing the Re-emergence of Suppressed Concepts in Fine-tuned Text-to-Image Diffusion Models [paper]
Sanghyun Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
arXiv preprint.
- Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion [paper] [code]
Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
ECCV 2024 (Acceptance rate: 27.9%)
- Slot-Mixup with Subsampling: A Simple Regularization for WSI Classification [arXiv]
Seongho Keum, Sanghyun Kim, Soojeong Lee, Juho Lee
arXiv preprint.
- Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models [paper] [code]
Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
ICML 2023 Workshop on Challenges of Deploying Generative AI
- Modeling Uplift from Observational Time-Series in Continual Scenarios [paper] [code] [PMLR]
Sanghyun Kim, Jungwon Choi, NamHee Kim, Jaesung Ryu, Juho Lee
AAAI 2023 Bridge on Continual Causality (oral presentation), 2023
- A Simple Yet Powerful Deep Active Learning with Snapshot Ensembles [paper] [code]
Seohyeon Jung*, Sanghyun Kim*, Juho Lee
(*: Equal Contribution)
ICLR 2023 (Acceptance rate: 24.3%)