This is a small Node script for interacting with an Ollama server from the terminal, which also saves each response locally as a timestamped JSON file. You can provide flags to select different models, including llava-phi3:latest, codellama:7b, and llama3:8b.
The Ollama endpoint included here is for example purposes only and not intended for public use, though if you'd like to collaborate on a project using my hardware, feel free to get in touch.
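For orientation, the core flow is roughly the following. This is a minimal sketch assuming Node 18+ (for the built-in `fetch`) and Ollama's standard `/api/generate` endpoint; the endpoint URL, function names, and file naming are illustrative, not the repo's exact code:

```javascript
// Minimal sketch of the script's core flow (illustrative, not the repo's exact code).
const fs = require("fs");

// Example endpoint only; replace with your own Ollama server.
const OLLAMA_ENDPOINT = "http://localhost:11434/api/generate";

async function ask(model, prompt) {
  // Ollama's /api/generate accepts a model name and a prompt;
  // stream: false returns the whole completion as one JSON object.
  const res = await fetch(OLLAMA_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();

  // Save the full response locally as a timestamped JSON file.
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  fs.writeFileSync(`response-${stamp}.json`, JSON.stringify(data, null, 2));

  console.log(data.response);
}

ask("llava-phi3:latest", "Hello from the terminal").catch(console.error);
```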
To run the project, follow these steps:
- Install Node.js and npm if you haven't already.
- Clone this repo to your local machine using `git clone`.
- Navigate to the cloned repo folder in your terminal or command prompt.
- Change the Ollama endpoint to your own endpoint, or reach out for access to llm.kristiantalley.com.
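The endpoint is typically a single constant near the top of `src/ollama_node-interface.js`; the constant name and URL path below are assumptions for illustration, so adapt them to whatever the script actually uses:

```javascript
// In src/ollama_node-interface.js (constant name and path are illustrative assumptions):
// point this at your own Ollama server's generate endpoint.
const OLLAMA_ENDPOINT = "http://localhost:11434/api/generate";
// or, if you've been given access, the llm.kristiantalley.com endpoint.
```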
To interact with Ollama in the terminal, follow these steps:
- Open a terminal or command prompt and navigate to the cloned repo folder.
- Navigate to the `src/` directory and run `node ollama_node-interface.js --{modelname} "{your message to the model}"`, where `{modelname}` is the name of the model you want to use and `{your message to the model}` is the message you want to send to the model (see the argument-handling sketch after this list).
- If no `--model` flag is provided, the default model is llava-phi3:latest.
- You'll receive an error if no message is provided.
- Responses are saved in the same `src/` project folder.
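For context, the flag and message handling described above can be done with a few lines of argument parsing. This is a minimal sketch assuming the model is passed as a `--{modelname}` flag and the message as a plain quoted argument; variable names and the exact parsing are illustrative, not the repo's exact code:

```javascript
// Minimal sketch of the flag/message handling described above (illustrative).
const args = process.argv.slice(2);

// A leading "--something" selects the model; any non-flag argument is the message.
const modelFlag = args.find((a) => a.startsWith("--"));
const model = modelFlag ? modelFlag.slice(2) : "llava-phi3:latest"; // default model
const message = args.find((a) => !a.startsWith("--"));

if (!message) {
  // Mirrors the behaviour above: error out if no message is provided.
  console.error("Error: no message provided.");
  process.exit(1);
}

console.log(`Sending "${message}" to ${model}...`);
```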
The following models are currently available for use with this project:
- `llava-phi3:latest` (default)
- `codellama:7b`
- `llama3:8b`
You can specify any of these models using the `--model` flag, e.g., `npm start -- --model llava-phi3:latest`.
Currently, the project only supports one prompt at a time. This will be updated in the future.