CLIP-ICM

This repository provides the PyTorch implementation of the ICML 2025 paper Learning Invariant Causal Mechanism from Vision-Language Models.



Figure 1. Overview of CLIP-ICM.

Quick Start

# create env
conda create -n clip-icm python=3.9 -y
conda activate clip-icm

# install deps
pip install -r requirements.txt          
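
main.py is the project's entry point. The invocation below is only a hypothetical sketch: the dataset, data-directory, and backbone flags are assumptions rather than the confirmed interface, so check the argument parser in main.py for the actual options.

# hypothetical example -- adjust the flags to whatever main.py actually accepts
python main.py --dataset PACS --data_dir /path/to/domainbed_data --backbone ViT-B/16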

Directory Layout

├── CLIP/               # CLIP model implementation and related files
├── DomainBed/          # Domain generalization benchmark
├── clip_icm.py         # CLIP ICM-related functionality
├── converter_domainbed.py # DomainBed data conversion utilities
├── engine.py           # Training engine
├── imagenet_stubs.py   # ImageNet stubs for testing
├── main.py             # Main entry point for the project
├── README.md           # Project-level README
├── requirements.txt    # Python dependencies for the project
├── utils.py            # Utility functions

Citation

If you find our work and code useful, please consider citing our paper and starring our repository (🥰🎉 Thanks!):

@inproceedings{songLearningInvariantCausal2025,
  title     = {Learning {Invariant Causal Mechanism} from {Vision-Language Models}},
  author    = {Song, Zeen and Zhao, Siyu and Zhang, Xingyu and Li, Jiangmeng and Zheng, Changwen and Qiang, Wenwen},
  booktitle = {Forty-Second International Conference on Machine Learning},
  year      = {2025}
}
