Question about the generate_one_completion #5

Open
xupeng1910 opened this issue Aug 24, 2021 · 1 comment

Comments

@xupeng1910

Hey, I want to ask a question about 'generate_one_completion' in the Usage section of the README file. What is it? Is it a Python function with complete logic that I need to provide myself?

{"task_id": "Corresponding HumanEval task ID", "completion": "Completion only without the prompt"}
We provide example_problem.jsonl and example_solutions.jsonl under data to illustrate the format and help with debugging.

Here is a nearly functional example code (you just have to provide generate_one_completion to make it work) that saves generated completions to samples.jsonl.
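For illustration, here is a minimal, self-contained sketch of the output format described above, using the "def return1():\n" task from example_problem.jsonl. The toy generate_one_completion below is a placeholder standing in for a real model call, and the single-problem dictionary is an assumption based on the example files, not the actual human-eval data loader.

```python
import json

# Placeholder for a real model call: given a prompt, return ONLY the
# completion (the code that follows the prompt), never the prompt itself.
def generate_one_completion(prompt):
    # A real implementation would query a code-generation model here.
    return "    return 1\n"

# A single problem mimicking data/example_problem.jsonl (assumed shape).
problems = {"test/0": {"task_id": "test/0", "prompt": "def return1():\n"}}

# One JSON object per line, matching the samples.jsonl format:
# {"task_id": ..., "completion": ...}
samples = [
    {"task_id": task_id, "completion": generate_one_completion(p["prompt"])}
    for task_id, p in problems.items()
]
with open("samples.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")
```

Each line of the resulting samples.jsonl is an independent JSON record, so the evaluator can stream it line by line.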

@abtExp

abtExp commented Oct 27, 2021

@xupeng1910 the generate_one_completion function returns the model output, i.e. the completed code without the input prompt. As you can see in example_problem.jsonl and example_solutions.jsonl, the input "prompt" in example_problem.jsonl is "def return1():\n" and the corresponding "completion"s are in example_solutions.jsonl. The completions are the predictions from your model; each one is appended to its prompt and then tested by executing the result. So generate_one_completion takes one prompt, calls the model, and returns the model's prediction.
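One practical detail worth noting: many models echo the prompt back at the start of their output, so you may need to strip it before writing the completion. A small sketch (strip_prompt is a hypothetical helper, not part of human-eval):

```python
def strip_prompt(prompt, model_output):
    # Models often echo the prompt; keep only the text that follows it,
    # so that prompt + completion reproduces the full function.
    if model_output.startswith(prompt):
        return model_output[len(prompt):]
    return model_output

prompt = "def return1():\n"
model_output = "def return1():\n    return 1\n"
completion = strip_prompt(prompt, model_output)
# prompt + completion round-trips to the full model output.
assert prompt + completion == model_output
```

This way the "completion" field contains only the body, which is what the evaluator expects to concatenate with the prompt before executing.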
