Quan Chen, Xiong Yang, Rongfeng Lu, Qianyu Zhang, Yu Liu, Xiaofei Zhou, Bolun Zheng*
Hangzhou Dianzi University, Tsinghua University, Jiaxing University
- Part I: WXSOD Dataset
- Part II: Benchmark Results
- Part III: Train and Test
- Part IV: Pre-trained Checkpoints
I am actively seeking academic collaboration. If you’re interested in collaborating or would like to connect, feel free to reach out 😊.
- Email: [email protected]
- WeChat: cq1045333951
- 🌟 Dataset Highlights
- 💾 Dataset Access
- 📁 Dataset Structure
- 🔥 Benchmark Results
- 🛠️ Requirements
- 🚀 Train and Test
- 🤗 Pre-trained Checkpoints
- 🎫 License
- 🙏 Acknowledgments
- 📌 Citation
WXSOD is a large-scale dataset (14,945 RGB images) for salient object detection under extreme weather conditions. Distinguishing itself from existing RGB-SOD benchmarks, it provides images with diverse degradation patterns together with pixel-wise annotations. The dataset contains:
- A synthetic training set of 12,891 images, covering 8 types of weather noise plus a small number of clean images
- A composite test set of 1,500 images, covering 8 types of weather noise plus a small number of clean images
- A real test set of 554 images, covering 5 types of weather noise
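Filenames in WXSOD encode the weather condition as a suffix (e.g. `0001_light.jpg`, `0004_clean.jpg`, `0001_dark.jpg`). A minimal sketch for recovering that label and tallying a split's composition, assuming the `<id>_<weather>.jpg` naming shown in the dataset structure (the helper names are ours, not part of the repo):

```python
from collections import Counter
from pathlib import Path

def weather_type(filename: str) -> str:
    """Extract the weather label from a WXSOD filename (assumed '<id>_<weather>.jpg')."""
    return Path(filename).stem.split("_")[-1]

def weather_histogram(filenames):
    """Count images per weather condition, e.g. to check a split's composition."""
    return Counter(weather_type(f) for f in filenames)
```

For example, `weather_histogram(["0001_light.jpg", "0001_dark.jpg", "0002_dark.jpg"])` reports two `dark` images and one `light` image.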
The WXSOD dataset is released in two ways:

- BaiduDisk
- Google Drive
WXSOD_data
├── train_sys/
│   ├── input/
│   │   ├── 0001_light.jpg
│   │   └── ...
│   └── gt/
│       ├── 0001_light.jpg
│       └── ...
├── test_sys/
│   ├── input/
│   │   ├── 0004_clean.jpg
│   │   └── ...
│   └── gt/
│       ├── 0004_clean.jpg
│       └── ...
└── test_real/
    ├── input/
    │   ├── 0001_dark.jpg
    │   └── ...
    └── gt/
        ├── 0001_dark.jpg
        └── ...
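In this layout, every image under `input/` has a same-named ground-truth mask under `gt/`. A minimal sketch for collecting the (image, mask) pairs of one split, e.g. `WXSOD_data/train_sys` (the function name is ours; it is not part of the repo's code):

```python
from pathlib import Path

def list_pairs(split_dir):
    """Return (input_path, gt_path) pairs for one WXSOD split directory.

    Assumes every file under input/ has a same-named mask under gt/,
    as in the dataset structure above.
    """
    split_dir = Path(split_dir)
    pairs = []
    for img in sorted((split_dir / "input").glob("*.jpg")):
        gt = split_dir / "gt" / img.name
        if not gt.exists():
            raise FileNotFoundError(f"missing ground truth for {img.name}")
        pairs.append((img, gt))
    return pairs
```

Such a pair list is what a typical PyTorch `Dataset` would index into during training.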
The prediction results of 18 methods on the WXSOD benchmark are available at Google Drive and BaiduDisk.
Note that the quantitative results are computed from the predicted maps at their original resolution, while MACs are measured on a 384×384 input.
- torch == 2.1.0+cu121
- timm == 1.0.11
- imgaug == 0.4.0
- pysodmetrics == 1.4.2
- Train WFANet.
sh run.sh
- Generate saliency maps using the weights obtained during training (or the weights we provide).
sh runtest.sh
- Compute the quantitative metrics for WFANet's predicted saliency maps.
sh runEvaluation.sh
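The evaluation step scores predicted saliency maps against the ground-truth masks (the requirements list `pysodmetrics` for this). As an illustration of one such metric, here is a pure-Python sketch of MAE under the common SOD convention (prediction scaled to [0, 1], ground truth binarized at 128); this is our assumption about the protocol, not the repo's exact evaluation code:

```python
def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and its mask.

    pred, gt: 2-D lists of 8-bit grayscale values in [0, 255].
    Assumed convention: prediction normalized to [0, 1], ground truth
    binarized with threshold 128.
    """
    total, count = 0.0, 0
    for pred_row, gt_row in zip(pred, gt):
        for p, g in zip(pred_row, gt_row):
            total += abs(p / 255.0 - (1.0 if g > 128 else 0.0))
            count += 1
    return total / count
```

A perfect prediction gives `mae(...) == 0.0`; a fully inverted one gives a value near 1.0.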
The pre-trained weights for PVTV2-b and WFANet need to be downloaded manually; the pre-trained ResNet18 weights are downloaded automatically through timm. Remember to update the weight paths!
- The pre-trained backbone PVTV2-b is available at Google Drive and BaiduDisk.
- The pre-trained WFANet is available at Google Drive and BaiduDisk.
This project is licensed under the Apache 2.0 license.
The scenes used to synthesize the data come from:
If you find our repository useful for your research, please consider citing our paper:
@inproceedings{
}