Python bindings for the C++ port of the GPT4All-J model.
Please migrate to the ctransformers library, which supports more models and has more features.
```sh
pip install gpt4all-j
```
Download the model from here.
```python
from gpt4allj import Model

model = Model('/path/to/ggml-gpt4all-j.bin')

print(model.generate('AI is going to'))
```
If you are getting an `illegal instruction` error, try using `instructions='avx'` or `instructions='basic'`:
```python
model = Model('/path/to/ggml-gpt4all-j.bin', instructions='avx')
```
If it is running slow, try building the C++ library from source (see the instructions at the end of this document).

The `generate` method accepts the following parameters:
```python
model.generate(prompt,
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True,
               callback=None)
```
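For example, to ask for a longer and less random completion, the defaults can be overridden per call (the values here are illustrative, not recommendations):

```python
# Longer output with lower sampling temperature and a narrower top-k.
print(model.generate('AI is going to',
                     n_predict=300,
                     temp=0.5,
                     top_k=20))
```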
If `reset=True` (the default), the context will be reset before generating. To keep the previous context, use `reset=False`:
```python
model.generate('Write code to sort numbers in Python.')
model.generate('Rewrite the code in JavaScript.', reset=False)
```
If a callback function is passed, it will be called once for each generated token. To stop generating more tokens, return `False` inside the callback function:
```python
def callback(token):
    print(token)

model.generate('AI is going to', callback=callback)
```
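As a minimal sketch of early stopping (the token budget here is arbitrary), a callback can return `False` once enough tokens have been collected:

```python
tokens = []

def stop_after(token):
    tokens.append(token)
    # Returning False stops generation once 50 tokens have been collected.
    return len(tokens) < 50

model.generate('AI is going to', callback=stop_after)
```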
LangChain is a framework for developing applications powered by language models. A LangChain LLM object for the GPT4All-J model can be created using:
```python
from gpt4allj.langchain import GPT4AllJ

llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin')

print(llm('AI is going to'))
```
If you are getting an `illegal instruction` error, try using `instructions='avx'` or `instructions='basic'`:
```python
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin', instructions='avx')
```
It can be used with other LangChain modules:
```python
from langchain import PromptTemplate, LLMChain

template = """Question: {question}
Answer:"""

prompt = PromptTemplate(template=template, input_variables=['question'])
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run('What is AI?'))
```
The same generation parameters can be set when creating the LLM object:

```python
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               seed=-1,
               n_threads=-1,
               n_predict=200,
               top_k=40,
               top_p=0.9,
               temp=0.9,
               repeat_penalty=1.0,
               repeat_last_n=64,
               n_batch=8,
               reset=True)
```
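For instance, a configuration biased toward short, fairly deterministic answers might look like this (the values are illustrative):

```python
# Illustrative settings: shorter completions, lower sampling temperature.
llm = GPT4AllJ(model='/path/to/ggml-gpt4all-j.bin',
               n_predict=100,
               temp=0.3)

print(llm('AI is going to'))
```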
To build the C++ library from source, please see gptj.cpp. Once you have built the shared libraries, you can use them as follows:
```python
from gpt4allj import Model, load_library

lib = load_library('/path/to/libgptj.so', '/path/to/libggml.so')

model = Model('/path/to/ggml-gpt4all-j.bin', lib=lib)
```
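Assuming the libraries were built successfully, the model can then be used exactly as before:

```python
print(model.generate('AI is going to'))
```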