- Due to the license of LLaMA, OA does not directly share the model weights. But if you search for "oasst", "SFT-6", or "SFT-7" on the Hugging Face Hub, you may find models that are ready to use. For example, you could try SinanAkkoyun/oasst-sft-7-llama-30b.
- You might try https://huggingface.co/timdettmers/guanaco-33b-merged. It was not trained directly by us, but it was also trained on the OA dataset and is a recent model.
- I am able to run the inference worker without any issue with a model like OA_SFT_Pythia_12Bq_4, but I can't use LLaMA models.
On the OpenAssistant/oasst-sft-6-llama-30b-xor repository on Hugging Face we can read the following:
Is there any good documentation on how to set up the inference worker with a LLaMA model? Downloading the original weights, etc.
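For reference, the xor repository's model card describes a weight-reconstruction process roughly along these lines. This is only a sketch: the paths are placeholders, and the script names (`convert_llama_weights_to_hf.py` from transformers, `xor_codec.py` from the xor repo) should be checked against the repository's current instructions.

```shell
# 1. Convert the original LLaMA weights (which you must obtain yourself)
#    to the Hugging Face format using the converter shipped with transformers.
python convert_llama_weights_to_hf.py \
    --input_dir /path/to/llama_weights \
    --model_size 30B \
    --output_dir llama30b_hf

# 2. Clone the XOR-encoded release from the Hub.
git clone https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor

# 3. Apply the XOR deltas to the converted base weights to reconstruct
#    the fine-tuned model in oasst-sft-6-llama-30b/.
python xor_codec.py oasst-sft-6-llama-30b/ \
    oasst-sft-6-llama-30b-xor/ llama30b_hf/
```

The resulting directory should then be usable as a local model path for the inference worker, the same way a Pythia checkpoint is.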