
Fetch — Efficient Tree Search for LLM Reasoning

Code for the paper Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls


🚀 Setup

Follow the steps below to run our scripts:

📌 Step 1. Set up the policy, verifier, and embedding model services

📚 Policy

We use vLLM to serve the policy model. To start the policy service, run the following command:

python3 -m vllm.entrypoints.openai.api_server --model /path/to/policy/model --port 8000 --dtype float16 --tensor-parallel-size 2 --swap-space 8 --max-model-len 4096
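Once launched, vLLM exposes an OpenAI-compatible HTTP API on the chosen port. A minimal sketch of sampling candidate reasoning steps from it (the model path, stop sequence, and sampling parameters below are illustrative assumptions, not values fixed by this repo):

```python
import json
import urllib.request

# Matches --port 8000 in the launch command above.
POLICY_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, n: int = 4, temperature: float = 0.8) -> dict:
    """Build an OpenAI-compatible completion request for the policy service."""
    return {
        "model": "/path/to/policy/model",  # must match the --model path at launch
        "prompt": prompt,
        "n": n,                  # sample several candidate next steps
        "temperature": temperature,
        "max_tokens": 256,
        "stop": ["\n\n"],        # assumed step delimiter; adjust to your prompts
    }

def sample_steps(prompt: str) -> list[str]:
    """POST the request and return the sampled continuations."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        POLICY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return [choice["text"] for choice in body["choices"]]
```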

🔍 Verifier

  1. Update your model path in verifier/server.py.
  2. Run the script: bash run.sh ./ 0 inside the verifier directory.

📦 Embedding Model

If you're using state merging, follow these steps:

  1. Update the path in cluster/server_cluster.py.
  2. Run the script: bash run_app.sh ./ 0 inside the cluster directory.
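State merging uses the embedding service to collapse semantically equivalent partial solutions so the search does not re-expand duplicates. A self-contained sketch of the idea using greedy cosine-similarity clustering (the function name and threshold are illustrative; the repo's `cluster` service performs the actual clustering):

```python
import numpy as np

def merge_states(embeddings: np.ndarray, threshold: float = 0.9) -> list[list[int]]:
    """Greedily group states whose embeddings have cosine similarity above
    `threshold`; the search then keeps one representative per group."""
    # Normalize rows so that dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    clusters: list[list[int]] = []
    reps: list[np.ndarray] = []
    for i, vec in enumerate(normed):
        for cid, rep in enumerate(reps):
            if float(vec @ rep) >= threshold:
                clusters[cid].append(i)  # merge into an existing cluster
                break
        else:
            clusters.append([i])         # start a new cluster
            reps.append(vec)
    return clusters
```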

📌 Step 2. Run tree search algorithms

We provide three tree search algorithms: BFS (Best-First Search), Beam Search, and MCTS (Monte Carlo Tree Search).

  1. Specify the input and output file paths, along with other parameters, in scripts such as beamsearch.py.

  2. Simply execute the corresponding Python script. For instance, to run Beam Search: python3 beamsearch.py
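As a rough mental model of what these scripts do, here is a generic beam-search skeleton. The `expand` and `score` callbacks are hypothetical stand-ins: in the repo, expansion would sample next steps from the policy service and scoring would call the verifier.

```python
from typing import Callable

def beam_search(root: str,
                expand: Callable[[str], list[str]],
                score: Callable[[str], float],
                beam_width: int = 3,
                max_depth: int = 4) -> str:
    """Keep the `beam_width` highest-scoring partial solutions at each depth."""
    beam = [root]
    for _ in range(max_depth):
        # Expand every state in the beam into candidate successors.
        candidates = [child for state in beam for child in expand(state)]
        if not candidates:
            break  # no state can be expanded further
        # Retain only the top-scoring candidates.
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return beam[0]
```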


🎯 Tips


📝 Citation

If you find our work useful, please cite our paper:

@misc{wang2025dontlosttreesstreamlining,
      title={Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls}, 
      author={Ante Wang and Linfeng Song and Ye Tian and Dian Yu and Haitao Mi and Xiangyu Duan and Zhaopeng Tu and Jinsong Su and Dong Yu},
      year={2025},
      eprint={2502.11183},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.11183}, 
}