Hi, interesting paper and great work! I tried to run the code but encountered an input-length problem: when I run the schema-linking prompt to get the schema linkings, the entire prompt is too long to feed to the model. Just wondering if I'm running the code wrongly. By the way, I get this issue with both GPT-3.5 Turbo and Vicuna-13B.
Hi, thank you so much for your comment on our paper. This is due to the small context window of the models you have chosen. The prompts we use contain around 6,000 tokens, so if you want to keep the prompts as is, you should use models with a larger context window, such as Codex or GPT-4. The other solution is to reduce the size of the prompts.
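As a quick sanity check before sending a request, you can estimate whether a prompt will fit a model's context window. The sketch below is a hypothetical helper (not part of the paper's code) that uses the common rough heuristic of ~4 characters per token for English text; for exact counts you would use the model's own tokenizer (e.g. `tiktoken` for OpenAI models).

```python
# Hypothetical sketch: estimating prompt length before calling a model.
# The ~4 characters-per-token ratio is only a rule of thumb, not a real
# tokenizer; use the model's tokenizer (e.g. tiktoken) for exact counts.

def estimate_tokens(prompt: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(prompt) // 4)

def fits_in_context(prompt: str, context_window: int,
                    reserved_for_output: int = 512) -> bool:
    """True if the estimated prompt leaves room for the completion."""
    return estimate_tokens(prompt) + reserved_for_output <= context_window

# A ~6,000-token prompt (about 24,000 characters) overflows the 4k window
# of gpt-3.5-turbo but fits GPT-4's 8k window.
long_prompt = "x" * 24_000  # stand-in for a long schema-linking prompt
print(fits_in_context(long_prompt, 4_096))  # False
print(fits_in_context(long_prompt, 8_192))  # True
```

Checking this up front lets you fail fast (or switch models / trim few-shot examples) instead of waiting for the API to reject the request.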