This repository contains all the prompts used in the paper *No Need for Explanations: LLMs can implicitly learn from mistakes in-context*.
Run `create_prompt.py` to create a new prompt with the following arguments (in this order):
- the desired prompting strategy (`cot`, `cot_plus`, `explicit`, or `implicit`),
- a task type (`label_ans`, `label_step`, `edit`, `solve`),
- a dataset of origin for the few-shot examples (`gsm8k`, `asdiv`, `aqua`, `prm800k`),
- a math reasoning question,
- a pre-generated step-by-step answer (optional; not needed for the `solve` task type).
For example, to create and view a prompt for solving a given question using few-shot examples from the GSM8K dataset and the implicit learning prompting strategy, run:
```
>> python3 create_prompt.py implicit solve gsm8k $QUESTION
```
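The script can also be driven programmatically. The sketch below is a minimal, hypothetical wrapper that forwards the documented arguments via `subprocess` and assumes the script prints the composed prompt to standard output; the wrapper name and the placeholder question are illustrative, not part of the repository.

```python
import subprocess
import sys

def build_prompt(strategy, task, dataset, question, answer=None):
    """Hypothetical helper: forwards the arguments, in the documented order,
    to the prompt-creation script and returns whatever it prints."""
    args = [sys.executable, "create_prompt.py", strategy, task, dataset, question]
    if answer is not None:  # optional pre-generated answer (not needed for `solve`)
        args.append(answer)
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Illustrative call mirroring the command-line example above;
    # the question text is a placeholder.
    print(build_prompt("implicit", "solve", "gsm8k", "What is 12 * 7?"))
```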
If you use this repository, please cite our work:

```bibtex
@misc{alazraki2025llmsimplicitlylearnmistakes,
      title={No Need for Explanations: LLMs can implicitly learn from mistakes in-context},
      author={Lisa Alazraki and Maximilian Mozes and Jon Ander Campos and Tan Yi-Chern and Marek Rei and Max Bartolo},
      year={2025},
      eprint={2502.08550},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.08550},
}
```