
CollectiveSFT

Official code repository for CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare.


🎉 We have achieved an outstanding score on the CMB leaderboard with CollectiveSFT.

Preprocessing

In the preprocess folder, you will find all the conversion scripts used to transform the datasets into the Alpaca format. Feel free to use them, but note that some datasets require an access application before they can be used.
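
The Alpaca format represents each sample as a JSON object with instruction, input, and output fields. For reference, a minimal conversion sketch is shown below; the source field names (question, answer) and file names are hypothetical, so adapt them to the dataset you are converting.

import json

# Convert a QA-style dataset into the Alpaca instruction format.
# NOTE: the source field names ("question", "answer") and the file names
# are hypothetical; adjust them to the dataset you are converting.
def to_alpaca(records):
    return [
        {
            "instruction": rec["question"],  # the task prompt
            "input": "",                     # optional extra context
            "output": rec["answer"],         # the expected response
        }
        for rec in records
    ]

with open("source_dataset.json", encoding="utf-8") as f:
    source = json.load(f)

with open("converted_alpaca.json", "w", encoding="utf-8") as f:
    json.dump(to_alpaca(source), f, ensure_ascii=False, indent=2)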

Train

Our training configuration is available in the train folder. You can train the model yourself with the LLaMA-Factory repository: install llamafactory-cli first, then run the command below to start training. If you run into issues during training, refer to the original repository for help. Remember to update dataset_info.json (see the sketch after the command) and make sure all required data files are in the data folder before launching training.

FORCE_TORCHRUN=1 llamafactory-cli train train/collectivesft.yaml
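
LLaMA-Factory reads its datasets from dataset_info.json, where each key names a dataset and maps to its file and format. A minimal entry, assuming the converted file is saved as data/collective_sft.json (a hypothetical name), might look like:

{
  "collective_sft": {
    "file_name": "collective_sft.json",
    "formatting": "alpaca"
  }
}

The dataset can then be referenced by the key collective_sft in the training YAML.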

Eval

You can use the CMB repository to generate answers; follow its setup instructions to configure the evaluation code. We also provide scripts in the eval folder that let you validate and score the results locally, which is faster than submitting them to the official website.
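
For a quick local accuracy check, a minimal scoring sketch for multiple-choice answers is shown below; the file layout and field names (id, answer) are assumptions rather than CMB's actual output schema, so adjust them to match the generated files.

import json
import re

# Score multiple-choice predictions against gold answers.
# NOTE: the file layout and field names ("id", "answer") are assumptions;
# adapt them to the actual files produced by the CMB evaluation code.
def extract_option(text):
    """Pull the first option letter (A-E) out of a model response."""
    match = re.search(r"[A-E]", text)
    return match.group(0) if match else None

with open("predictions.json", encoding="utf-8") as f:
    predictions = json.load(f)
with open("gold.json", encoding="utf-8") as f:
    gold = {item["id"]: item["answer"] for item in json.load(f)}

correct = sum(
    1 for item in predictions
    if extract_option(item["answer"]) == gold.get(item["id"])
)
print(f"Accuracy: {correct / len(predictions):.4f}")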

Citation

If you find our work helpful in your research, please cite the following paper:

@misc{zhu2024collectivesftscalinglargelanguage,
      title={CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare}, 
      author={Jingwei Zhu and Minghuan Tan and Min Yang and Ruixue Li and Hamid Alinejad-Rokny},
      year={2024},
      eprint={2407.19705},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.19705}, 
}
