Hi! I am Yafu Li, a researcher at Shanghai AI Lab, currently working under the supervision of Prof. Yu Cheng. I earned my PhD through a joint program between Zhejiang University and Westlake University, advised by Prof. Yue Zhang.
My research focuses on test-time scaling, trustworthy AI, and machine translation.
✨ ✨ ✨
We are looking for interns and joint PhD candidates (with THU, PKU, SJTU, FDU, etc.) to work on cutting-edge research in large language models. Our focus areas include zero reinforcement learning (e.g., R1-zero), test-time scaling, and trustworthy AI. If you are interested, please feel free to contact me at [email protected].
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond
  Paper Link | GitHub Project
- Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback
  Paper Link | GitHub Project
- From Drafts to Answers: Unlocking LLM Potential via Aggregation Fine-Tuning
  Paper Link | GitHub Project
- Multi-LLM Collaborative Search for Complex Problem Solving
  Paper Link

- MAGE: Machine-generated Text Detection in the Wild
  Paper Link | GitHub Project
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text
  Paper Link | GitHub Project
- Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
  Paper Link | GitHub Project

- Lost in Literalism: How Supervised Training Shapes Translationese in LLMs
  Paper Link | GitHub Project
- Explicit Syntactic Guidance for Neural Text Generation
  Paper Link | GitHub Project
- Multi-Granularity Optimization for Non-Autoregressive Translation
  Paper Link | GitHub Project
- On Compositional Generalization of Neural Machine Translation
  Paper Link | GitHub Project