RadioLLM: Introducing Large Language Model into Cognitive Radio via Hybrid Prompt and Token Reprogrammings


πŸ“š Introduction

RadioLLM is a novel framework that incorporates Hybrid Prompt and Token Reprogramming (HPTR) and a Frequency Attuned Fusion (FAF) module to adapt large language models (LLMs) to cognitive radio technology (CRT) tasks. HPTR integrates radio signal features with expert knowledge, while FAF improves the modeling of the high-frequency features critical for precise signal processing. Together, these components allow RadioLLM to handle diverse CRT tasks, bridging the gap between LLMs and traditional signal processing methods. Extensive empirical studies on multiple benchmark datasets show that RadioLLM outperforms current baselines.
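
To make the described data flow concrete, below is a minimal, hypothetical PyTorch sketch of how HPTR-style token reprogramming and an FAF-style frequency split could be wired around a frozen backbone. Every class name, dimension, and design choice here (the prototype bank, the soft-prompt length, the FFT cutoff) is an illustrative assumption, not the authors' released implementation; see the paper for the actual architecture.

# Hypothetical sketch of a RadioLLM-style forward pass. All names and
# hyperparameters are illustrative assumptions, not the official code.
import torch
import torch.nn as nn


class TokenReprogramming(nn.Module):
    # Cross-attends signal patch embeddings to a small bank of learned
    # text-prototype embeddings so the frozen LLM receives token-like inputs.
    def __init__(self, d_model: int, n_prototypes: int = 64):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        proto = self.prototypes.unsqueeze(0).expand(patches.size(0), -1, -1)
        out, _ = self.attn(patches, proto, proto)  # queries are signal patches
        return out


class FrequencyAttunedFusion(nn.Module):
    # Splits features into low-/high-frequency bands with an FFT mask and
    # fuses the two bands with a learned projection.
    def __init__(self, d_model: int, cutoff: float = 0.25):
        super().__init__()
        self.cutoff = cutoff
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft(x, dim=1)          # FFT over the token axis
        k = int(spec.size(1) * self.cutoff)      # cutoff bin
        low, high = spec.clone(), spec.clone()
        low[:, k:] = 0                           # keep only low frequencies
        high[:, :k] = 0                          # keep only high frequencies
        x_low = torch.fft.irfft(low, n=x.size(1), dim=1)
        x_high = torch.fft.irfft(high, n=x.size(1), dim=1)
        return self.fuse(torch.cat([x_low, x_high], dim=-1))


class RadioLLMSketch(nn.Module):
    def __init__(self, backbone: nn.Module, d_model: int, patch_len: int = 16):
        super().__init__()
        self.embed = nn.Linear(2 * patch_len, d_model)           # I/Q patch -> token
        self.reprogram = TokenReprogramming(d_model)
        self.faf = FrequencyAttunedFusion(d_model)
        self.prompt = nn.Parameter(torch.randn(1, 8, d_model))   # soft prompt
        self.backbone = backbone                                 # frozen LLM stand-in
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, iq_patches: torch.Tensor) -> torch.Tensor:
        tokens = self.faf(self.reprogram(self.embed(iq_patches)))
        prompt = self.prompt.expand(iq_patches.size(0), -1, -1)
        return self.backbone(torch.cat([prompt, tokens], dim=1))


# Toy usage with a single transformer layer standing in for the frozen LLM.
backbone = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
model = RadioLLMSketch(backbone, d_model=128)
x = torch.randn(2, 32, 32)      # 2 signals, 32 patches of 16 I/Q sample pairs
print(model(x).shape)           # torch.Size([2, 40, 128])

Freezing the backbone and training only the reprogramming, fusion, and prompt parameters mirrors the general reprogramming recipe the paper builds on; the exact losses and task heads are omitted here.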

πŸ”₯ NEWS

  • [2025-02-01] πŸ“ The preprint of the RadioLLM paper is available on arXiv. Check the paper page for more details.

πŸ“… TODO

  • Release the code for RadioLLM's classification network and the comparison models.

πŸ’» Requirements

The code is implemented in Python 3.8. The required packages are listed in requirements.txt. You can install them by running the following commands:

conda create --name radiollm python=3.8
conda activate radiollm
pip install -r requirements.txt
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/peft
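After installation, a quick import check (illustrative only; the printed versions will vary with your setup) confirms the environment is usable:

# Minimal sanity check for the environment created above.
import torch
import transformers
import peft

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers", transformers.__version__, "| peft", peft.__version__)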

πŸ“– Citation

Please cite the following paper if you use this work in your research:

@misc{chen2025radiollmintroducinglargelanguage,
      title={RadioLLM: Introducing Large Language Model into Cognitive Radio via Hybrid Prompt and Token Reprogrammings}, 
      author={Shuai Chen and Yong Zu and Zhixi Feng and Shuyuan Yang and Mengchang Li and Yue Ma and Jun Liu and Qiukai Pan and Xinlei Zhang and Changjun Sun},
      year={2025},
      eprint={2501.17888},
      archivePrefix={arXiv},
      primaryClass={eess.SP},
      url={https://arxiv.org/abs/2501.17888}, 
}
