
Gollama: Your offline conversational AI companion. An interactive tool for generating creative responses from various models, right in your terminal. Ideal for brainstorming, creative writing, or seeking inspiration.


🤖 Gollama: Ollama in your terminal, Your Offline AI Copilot 🦙

gollama-tui-demo

Gollama is a delightful tool that brings Ollama, your offline conversational AI companion, directly into your terminal. It provides a fun and interactive way to generate responses from various models without needing internet connectivity. Whether you're brainstorming ideas, exploring creative writing, or just looking for inspiration, Gollama is here to assist you.


Features

  • Chat TUI with History: Gollama provides a chat-style TUI with a history of previous conversations, saved locally in a SQLite database so you can resume them later.
  • Interactive Interface: Enjoy a seamless user experience with an intuitive interface powered by Bubble Tea.
  • Customizable Prompts: Tailor your prompts to get precisely the responses you need.
  • Multiple Models: Choose from a variety of models to generate responses that suit your requirements.
  • Visual Feedback: Stay engaged with visual cues like spinners and formatted output.
  • Multimodal Support: Gollama supports multimodal models such as Llava.
  • Model Installation & Management: Easily install and manage models using the Ollamanager library, directly integrated with Gollama. Refer to the Ollama Model Management section for more details.

Getting Started

Prerequisites

  • Ollama installed on your system, or an Ollama API server accessible from your machine (default: http://localhost:11434; optionally configurable via the OLLAMA_HOST environment variable; refer to the official Ollama Go SDK docs for further information).
  • At least one model installed on your Ollama server. You can install models using the ollama pull <model-name> command. To find a list of all available models, check the Ollama Library. You can also use the ollama list command to list all locally installed models.
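For example, the prerequisite steps might look like the following (the model name llama3.1 is just an illustration; pick any model from the Ollama Library):

```shell
# Pull a model from the Ollama Library
ollama pull llama3.1

# Verify it appears among the locally installed models
ollama list
```

Both commands require a working Ollama installation and a reachable Ollama server.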

Installation

You can install Gollama using one of the following methods:

Download the latest release

Grab the latest release from the releases page and extract the archive to a location of your choice.

Install using Go

Note

Prerequisite: Go installed on your system.

You can also install Gollama using the go install command:

go install github.com/gaurav-gosain/gollama@latest

Run using Docker

You can pull the latest Docker image from the GitHub Container Registry and run it using the following command:

docker run --net=host -it --rm ghcr.io/gaurav-gosain/gollama:latest

Usage

  1. Run the executable:

    gollama

    or

    /path/to/gollama
  2. Follow the on-screen instructions to interact with Gollama.

Note

Running Gollama with the -h flag will display the list of available flags.

Options

TUI Specific Flags

  -v, --version Prints the version of Gollama
  -m, --manage  Manage the installed Ollama models (update/delete installed models)
  -i, --install Install an Ollama model (download and install a model)
  -r, --monitor Monitor the status of running Ollama models

CLI Specific Flags

--model string   Model to use for generation
--prompt string  Prompt to use for generation
--images strings Paths to the image files to attach (png/jpg/jpeg), comma separated

Warning

Responses from multimodal LLMs are slower than those from text-only models (and also depend on the size of the attached image).

Keymaps

Pick a chat screen

Key       Description
↑/k       Up
↓/j       Down
→/l/pgdn  Next page
←/h/pgup  Previous page
g/home    Go to start
G/end     Go to end
enter     Select chat
q         Quit
d         Delete chat
ctrl+n    New chat
?         Toggle extended help

pick-a-chat-screen

Main Chat Screen

Key          Description
ctrl+up/k    Move view up
ctrl+down/j  Move view down
ctrl+u       Half page up
ctrl+d       Half page down
ctrl+p       Previous message
ctrl+n       Next message
ctrl+y       Copy last response
alt+y        Copy highlighted message
ctrl+o       Toggle image picker
ctrl+x       Remove attachment
ctrl+h       Toggle help
ctrl+c       Exit chat

Note

The ctrl+o keybinding only works if the selected model is multimodal.

main-chat-screen

Modal management screens

Note

The management screens can be chained together. For example, using the flags -imr will run Ollamanager with tabs for installing, managing, and monitoring models.
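As a concrete example of chaining the flags described above (requires a reachable Ollama server):

```shell
# Open Ollamanager with tabs for installing (-i), managing (-m),
# and monitoring (-r) models in a single session
gollama -imr
```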

The following keybindings are common to all modal management screens:

Key          Description
?            Toggle help menu
↑/k          Move up
↓/j          Move down
←/h          Move left
→/l          Move right
enter        Pick selected item
/            Filter/fuzzy find items
esc          Clear filter
q/ctrl+c     Quit
n/tab        Switch to the next tab
p/shift+tab  Switch to the previous tab

modal-management-screen

Note

The following keybindings are specific to the Manage Models screen/tab:

Key  Description
u    Update selected model
d    Delete selected model

Examples

TUI Chat Mode

gollama-tui-demo

Ollama Model Management

Note

Gollama uses the Ollamanager library to manage models. It provides a convenient way to install, update, and delete models.

ollamanager-demo.mp4

Piped Mode

echo "Once upon a time" | gollama --model="llama3.1" --prompt="prompt goes here"
gollama --model="llama3.1" --prompt="prompt goes here" < input.txt

CLI Mode with Images

Important

Image input is not supported by all models; check that the selected model is multimodal.

gollama --model="llava:latest" \
 --prompt="prompt goes here" \
 --images="path/to/image.png"

Warning

The --model and --prompt flags are mandatory for CLI mode. The --images flag is optional.

Local Development

Run locally using Docker

You can also run Gollama locally using Docker:

  1. Clone the repository:

    git clone https://github.com/Gaurav-Gosain/gollama.git
  2. Navigate to the project directory:

    cd gollama
  3. Build the docker image:

    docker build -t gollama .
  4. Run the docker image:

    docker run --net=host -it gollama

Note

The above commands build the Docker image with the tag gollama. You can replace gollama with any tag of your choice.

Build from source

If you prefer to build from source, follow these steps:

  1. Clone the repository:

    git clone https://github.com/Gaurav-Gosain/gollama.git
  2. Navigate to the project directory:

    cd gollama
  3. Build the executable:

    go build
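After a successful build, the gollama binary is placed in the project directory, so a quick sanity check might look like:

```shell
# Build the binary and print its version using the -v flag described above
go build
./gollama -v
```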

Dependencies

Gollama relies on the following third-party packages:

  • ollama: The official Go SDK for Ollama.
  • ollamanager: A Go library for installing, managing, and monitoring Ollama models.
  • bubbletea: A library for building terminal applications using the Model-Update-View pattern.
  • glamour: A markdown rendering library for the terminal.
  • huh: A library for building terminal-based forms.
  • lipgloss: A library for styling text output in the terminal.

Roadmap

  • Implement piped mode for automated usage.
  • Add ability to copy responses to clipboard.
  • GitHub Actions for automated releases.
  • Add support for downloading models directly from Ollama using the REST API.
  • Add support for extracting and copying codeblocks from the generated responses.
  • Add CLI options to interact with the database and perform operations like:
    • Deleting chats
    • Creating a new chat
    • Listing chats
    • Continuing a chat from the CLI

Contribution

Contributions are welcome! Whether you want to add new features, fix bugs, or improve documentation, feel free to open a pull request.

Star History

Star History Chart


License

This project is licensed under the MIT License - see the LICENSE file for details.
