OllamaPY is a Python library I wrote to use the Ollama API in my own projects. Feedback is appreciated.
It's an interface, so it requires Ollama to be running on a machine. You can download it here.
It works with Python 3.11.4 and Ollama 0.1.10.
Parts of the function descriptions below are copied from the Ollama API documentation.
- Install models

```python
import ollama

client = ollama.Ollama()

# Setting a model for your instance will pull it from the ollama model library
# if it is not already installed
client.setModel("llama2")

# You can check if a model is already installed
print(client.listLocalModels())
```
- Generate text

```python
# This function will use the model you set for your instance
output = client.generate("Who are you ?")
# output is the JSON response string
print(output)

# You can also specify another model
output = client.generate("Who are you ?", "mistral:7b")
# output is the JSON response string
print(output)
```
- Here is a way to extract the text from the JSON response

```python
import json

def extract(generation: str) -> str:
    text = ""
    # The response is streamed as one JSON object per line;
    # the element after the final newline is empty, so skip it
    for line in generation.split("\n")[:-1]:
        text += json.loads(line)["response"]
    return text
```
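For illustration, here is that extraction applied to a hand-made streamed response. The sample string below is an assumption about the line-delimited shape of Ollama's streaming output, not real API output, and the function is repeated so the snippet runs on its own:

```python
import json

def extract(generation: str) -> str:
    text = ""
    # One JSON object per line; skip the empty element after the final newline
    for line in generation.split("\n")[:-1]:
        text += json.loads(line)["response"]
    return text

# Hand-made example of a line-delimited JSON stream (assumed shape)
sample = (
    '{"model": "llama2", "response": "I am "}\n'
    '{"model": "llama2", "response": "a language model."}\n'
)

print(extract(sample))  # → I am a language model.
```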
- `generate(prompt:str, model:str=None)`
  - prompt : String
  - model (optional) : String
  - return : String or None

  Generate a response for a given prompt with a provided model.
  Returns the JSON response from Ollama. If the model is not specified, it will use the model you set for your instance.
  If the specified model (or the one set for your instance) is invalid, or if the connection with Ollama fails, it returns None.
- `createModel(modelname:str, modelfile:str)`
  - modelname : String
  - modelfile : String
  - return : Boolean

  Create a model from a Modelfile.
  Returns True if the model was created, False otherwise.
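For reference, a minimal Modelfile could look like the sketch below. `FROM` and `SYSTEM` are standard Modelfile instructions; the base model and system prompt here are just placeholders:

```
FROM llama2
SYSTEM "You are a helpful assistant that answers briefly."
```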
- `listLocalModels()`
  - return : String or None

  Returns the JSON of the models that are available locally. If the connection with Ollama fails, it returns None.
  You can use the extract function to get the list of models, but change `['response']` to `['models']`.
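Assuming the local-model listing is a JSON object with a `models` key (the exact shape may differ between Ollama versions, and the sample below is hand-made), the model names can be pulled out like this:

```python
import json

# Hand-made sample of what listLocalModels() might return (assumed shape)
sample = '{"models": [{"name": "llama2"}, {"name": "mistral:7b"}]}'

models = json.loads(sample)["models"]
names = [m["name"] for m in models]
print(names)  # → ['llama2', 'mistral:7b']
```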
- Show model details
  - model (optional) : String
  - return : String or None

  Show details about a model including modelfile, template, parameters, license, and system prompt.
  If the model is not specified, it will use the model you set for your instance.
  If the specified model (or the one set for your instance) is invalid, or if the connection with Ollama fails, it returns None.
- Delete a model
  - model : String
  - return : Boolean

  Delete a model and its data.
  Returns True if the model was deleted, False otherwise.
- `setModel(model:str)`
  - model : String

  Set the model for your instance. It will pull it from the ollama model library if it is not already installed.
- `embeddings(prompt:str, model:str=None, option:dict[str,any]=None)`
  - prompt : String
  - model (optional) : String
  - option (optional) : dict[str,any]
  - return : String or None

  Generate embeddings for a given prompt.
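The embeddings call also returns a JSON string. Assuming it carries an `embedding` array of floats (an assumption about the response shape; the sample below is hand-made, not real API output), the vector can be recovered like this:

```python
import json

# Hand-made sample of an embeddings response (assumed shape)
sample = '{"embedding": [0.1, -0.25, 0.4]}'

vector = json.loads(sample)["embedding"]
print(len(vector))  # → 3
```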
Copy the ollama.py file into your project and import it.
(Don't forget to star the repo if you like it; maybe one day it'll have pip support.)