MERTools

If our project helps you, please give us a star ⭐ on GitHub to support us. 🙏🙏


🚀 MER2023

Dataset

To download the dataset, please fill out the EULA on Hugging Face: https://huggingface.co/datasets/MERChallenge/MER2023. The agreement requires participants to use the dataset for academic research only and not to edit samples or upload them to the Internet.
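Once the EULA request is approved, the gated dataset can be fetched with the huggingface_hub library. The sketch below is not part of the official toolkit: the local directory is a placeholder, and it assumes you have either passed an access token or logged in beforehand with huggingface-cli login.

# Minimal download sketch (assumes an approved EULA and a Hugging Face access token)
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="MERChallenge/MER2023",  # the gated dataset named above
    repo_type="dataset",
    local_dir="./MER2023-data",      # hypothetical target directory
)
print(f"Dataset files saved under {local_path}")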

Info

The code is available in the directory ./MER2023, and the official website is http://merchallenge.cn/.

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning
Zheng Lian, Haiyang Sun, Licai Sun, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao

🗝️ MERBench

The code is available in the directory ./MERBench.

MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition
Zheng Lian, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, Jianhua Tao

🌎 MER2024

Dataset

To download the dataset, please fill out the EULA at https://drive.google.com/file/d/1cXNfKHyJzVXg_7kWSf_nVKtsxIZVa517/view?usp=sharing and send it to [email protected]. The agreement requires participants to use the dataset for academic research only and not to edit samples or upload them to the Internet.

Info

The code is available in the directory ./MER2024, and the official website is https://zeroqiaoba.github.io/MER2024-website/.

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition
Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, Shun Chen, Hao Gu, Jinming Zhao, Ziyang Ma, Xie Chen, Jiangyan Yi, Rui Liu, Kele Xu, Bin Liu, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao

👍 MER2025

Dataset

To download the dataset, please fill out the EULA on Hugging Face: https://huggingface.co/datasets/MERChallenge/MER2025. The agreement requires participants to use the dataset for academic research only and not to edit samples or upload them to the Internet. The download sketch shown above for MER2023 applies here as well, with repo_id swapped to MERChallenge/MER2025.

Info

The code is available in the directory ./MER2025, and the official website is https://zeroqiaoba.github.io/MER2025-website/.

🔒 License

This project is released under the Apache 2.0 license as found in the LICENSE file. The service is a research preview intended for non-commercial use ONLY. Please get in touch with us if you find any potential violations.

📑 Citation

If you find MERTools useful for your research and applications, please cite using this BibTeX:

# MER-Caption dataset, MER-Caption+ dataset, AffectGPT Framework
@article{lian2025affectgpt,
  title={AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models},
  author={Lian, Zheng and Chen, Haoyu and Chen, Lan and Sun, Haiyang and Sun, Licai and Ren, Yong and Cheng, Zebang and Liu, Bin and Liu, Rui and Peng, Xiaojiang and others},
  journal={arXiv preprint arXiv:2501.16566},
  year={2025}
}

# OV-MERD dataset
@article{lian2024open,
  title={Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Chen, Lan and Chen, Haoyu and Gu, Hao and Wen, Zhuofan and Chen, Shun and Zhang, Siyuan and Yao, Hailiang and others},
  journal={arXiv preprint arXiv:2410.01495},
  year={2024}
}

# EMER task
@article{lian2023explainable,
  title={Explainable Multimodal Emotion Recognition},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Gu, Hao and Wen, Zhuofan and Zhang, Siyuan and Chen, Shun and Xu, Mingyu and Xu, Ke and Chen, Kang and others},
  journal={arXiv preprint arXiv:2306.15401},
  year={2023}
}

# MER2023 Dataset
@inproceedings{lian2023mer,
  title={MER 2023: Multi-label learning, modality robustness, and semi-supervised learning},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Chen, Kang and Xu, Mingyu and Wang, Kexin and Xu, Ke and He, Yu and Li, Ying and Zhao, Jinming and others},
  booktitle={Proceedings of the 31st ACM international conference on multimedia},
  pages={9610--9614},
  year={2023}
}

# MER2024 Dataset
@inproceedings{lian2024mer,
  title={MER 2024: Semi-supervised learning, noise robustness, and open-vocabulary multimodal emotion recognition},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Wen, Zhuofan and Zhang, Siyuan and Chen, Shun and Gu, Hao and Zhao, Jinming and Ma, Ziyang and Chen, Xie and others},
  booktitle={Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing},
  pages={41--48},
  year={2024}
}

# MERBench
@article{lian2024merbench,
  title={MERBench: A unified evaluation benchmark for multimodal emotion recognition},
  author={Lian, Zheng and Sun, Licai and Ren, Yong and Gu, Hao and Sun, Haiyang and Chen, Lan and Liu, Bin and Tao, Jianhua},
  journal={arXiv preprint arXiv:2401.03429},
  year={2024}
}
