Although large language models (LLMs) have significant potential to advance chemical discovery, current LLMs lack core chemical knowledge, produce unreliable reasoning trajectories, and exhibit suboptimal performance across diverse chemical tasks. To address these challenges, we propose Chem-R, a generalizable Chemical Reasoning model designed to emulate the deliberative processes of chemists. Chem-R is trained through a three-phase framework that progressively builds advanced reasoning capabilities: (1) Chemical Foundation Training, which establishes core chemical knowledge; (2) Chemical Reasoning Protocol Distillation, which instills structured, expert-like reasoning traces to guide systematic and reliable problem solving; and (3) Multi-task Group Relative Policy Optimization, which optimizes the model for balanced performance across diverse molecular- and reaction-level tasks. This structured pipeline enables Chem-R to achieve state-of-the-art performance on comprehensive benchmarks, surpassing leading large language models, including Gemini-2.5-Pro and DeepSeek-R1, by up to 46% on molecular tasks and 66% on reaction tasks. Chem-R also consistently outperforms existing chemical foundation models across both molecular- and reaction-level tasks. These results highlight Chem-R's robust generalization, interpretability, and potential as a foundation for next-generation AI-driven chemical discovery.
Chem-R is a general-purpose large language model that achieves expert-level chemical reasoning via a three-stage training framework, outperforming existing language and chemistry foundation models on molecular- and reaction-level tasks. This repository contains the Phase 3 (Multi-task GRPO) code of Chem-R.
- Efficient RL Training: Built on the EasyR1 framework, our training flow is optimized for rapid onboarding and straightforward extensibility.
- Chemistry-Focused Multi-task Learning: The core of this repository is its implementation of multi-task GRPO (Group Relative Policy Optimization), tailored specifically for a diverse set of chemistry tasks.
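A central ingredient of multi-task GRPO is a reward that adapts to the task at hand. The sketch below is illustrative only, not the repository's actual reward code: function and task names are assumptions, and it substitutes Python's standard-library `difflib` similarity for the Levenshtein-based scores the real dependencies suggest.

```python
# Hedged sketch: task-conditioned reward dispatch for multi-task GRPO.
# Task names and the routing rule are illustrative assumptions, not the
# repository's actual API.
from difflib import SequenceMatcher


def string_similarity(pred: str, ref: str) -> float:
    """Normalized similarity in [0, 1]; stands in for a Levenshtein-based score."""
    return SequenceMatcher(None, pred, ref).ratio()


def compute_reward(task: str, prediction: str, reference: str) -> float:
    """Route a rollout to a task-appropriate reward.

    Classification-style tasks (e.g. property or yield prediction) use exact
    match; generation-style tasks (e.g. name prediction, captioning) use a
    soft string similarity so partially correct outputs still earn signal.
    """
    if task in {"property_prediction", "yield_prediction", "reagent_selection"}:
        return 1.0 if prediction.strip() == reference.strip() else 0.0
    # Name prediction, molecule design/captioning, reaction tasks, etc.
    return string_similarity(prediction.strip(), reference.strip())
```

Splitting rewards this way lets a single policy be optimized across heterogeneous tasks without one task's reward scale dominating the group-relative advantage estimates.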
First, set up the environment by following the instructions in the EasyR1 repository. Then, install the additional dependencies required for this project:
```bash
pip install Levenshtein rouge-score nltk rdkit
```

To begin training, navigate to the EasyR1 directory and execute the example script:

```bash
cd Chem-R/EasyR1
bash examples/llama3.1_8b_chem_multi_task_grpo.sh
```

Our multi-task training incorporates the following datasets across several categories:
| Task category | Datasets |
|---|---|
| Name Prediction | PubChem920k |
| Property Prediction | BACE, BBBP, ClinTox, HIV, Tox21 |
| Molecule Design | ChEBI-20 |
| Molecule Captioning | ChEBI-20 |
| Text-based Open Molecule Generation | TOMG-Bench |
| Yield Prediction | Buchwald-Hartwig, Suzuki-Miyaura |
| Reagent Selection | Suzuki-Miyaura |
| Reaction Prediction | USPTO-Mixed |
| Retrosynthesis | USPTO-50k |
To support multi-task learning, we have extended the data format used in EasyR1 by adding a `task` field. This modification allows for task-specific identification and enables granular accuracy evaluation, facilitating detailed multi-task comparison and analysis.
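As a rough illustration of how a `task` field enables per-task evaluation, the sketch below builds accuracy grouped by task. The record schema (field names other than `task`) and the boolean `correct` flag are assumptions for demonstration, not the repository's actual data format.

```python
# Hedged sketch: per-task accuracy from records carrying a "task" field.
# All field names except "task" are illustrative assumptions.
from collections import defaultdict

records = [
    {"task": "retrosynthesis", "prompt": "...", "answer": "...", "correct": True},
    {"task": "retrosynthesis", "prompt": "...", "answer": "...", "correct": False},
    {"task": "molecule_captioning", "prompt": "...", "answer": "...", "correct": True},
]

# Tally hits and totals per task, then compute accuracy for each.
hits, totals = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["task"]] += 1
    hits[r["task"]] += int(r["correct"])
per_task_accuracy = {t: hits[t] / totals[t] for t in totals}
```

Grouping evaluation this way is what makes the "granular accuracy evaluation" above possible: aggregate metrics can hide a model that excels at one task while failing another.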
- This work builds upon the foundational EasyR1 project, which we have further developed and adapted for chemistry-specific applications.
- The design of our task-specific rewards was inspired by and partially adapted from the evaluation metrics established in ChemLLMBench and MolT5.
