Ollama in Google Colab

This repository provides instructions and code snippets for using Ollama in Google Colab notebooks.

Installation

To install Ollama in your Colab environment, follow these steps:

  1. Run the following command in a code cell to install pciutils, which the Ollama install script needs to detect the GPU:

    ! sudo apt-get install -y pciutils
  2. Run the installation script provided by Ollama:

    ! curl -fsSL https://ollama.ai/install.sh | sh
  3. Import the necessary libraries and define a helper that starts the Ollama server:

    import os
    import threading
    import subprocess
    import requests
    import json
    
    def ollama():
        # Bind the server to all interfaces on the default port 11434
        os.environ['OLLAMA_HOST'] = '0.0.0.0:11434'
        # Allow cross-origin requests from any origin
        os.environ['OLLAMA_ORIGINS'] = '*'
        # Launch the server as a background process
        subprocess.Popen(["ollama", "serve"])
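
Because the ollama function launches the server as a background process, it can take a few seconds before the API starts accepting requests. A minimal readiness check, sketched here assuming the default port 11434 (whose root endpoint answers a plain GET once the server is up), can poll until the server responds; wait_for_ollama is a hypothetical helper name, not part of Ollama itself:

    import time
    
    def wait_for_ollama(timeout=30):
        # Hypothetical helper: poll the server root until it responds or we time out
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get('http://localhost:11434').ok:
                    return
            except requests.exceptions.ConnectionError:
                pass
            time.sleep(1)
        raise RuntimeError('Ollama server did not start within the timeout')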

Usage

Once Ollama is installed, you can use it in your Colab notebook as follows:

  1. Start the Ollama server in a background thread so the notebook stays responsive (if you defined the readiness check above, run it before moving on):

    ollama_thread = threading.Thread(target=ollama)
    ollama_thread.start()
  2. Download the Ollama model of your choice. For example, to use the mistral model, execute:

    ! ollama pull mistral
  3. Now you can interact with Ollama by sending prompts and receiving responses. First, define a prompt:

    prompt = """
    What is AI?
    Can you explain in three paragraphs?
    """
  4. Then, run the following code to send the request and print the response. Here, stream is set to False, so the full reply arrives as a single JSON object; a streaming approach that prints the response as it is generated is sketched below:

    url = 'http://localhost:11434/api/chat'
    payload = {
        "model": "mistral",
        "stream": False,
        # Sampling parameters such as temperature belong under "options"
        "options": {"temperature": 0.6},
        "messages": [
            {"role": "system", "content": "You are an AI assistant!"},
            {"role": "user", "content": prompt}
        ]
    }
    
    response = requests.post(url, json=payload)
    message_dict = response.json()
    print(message_dict['message']['content'])

This will send the prompt to the Ollama model and print its response.
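
To print the reply as it is generated instead, set stream to True. The /api/chat endpoint then returns one JSON object per line, each carrying a chunk of the assistant's message, with done set to true on the final object. A minimal streaming sketch, reusing the url and payload from step 4:

    payload['stream'] = True
    
    with requests.post(url, json=payload, stream=True) as response:
        for line in response.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            # Each chunk carries a piece of the assistant's reply
            print(chunk.get('message', {}).get('content', ''), end='', flush=True)
            if chunk.get('done'):
                break
    print()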

License

This content is licensed under the MIT License - see the LICENSE file for details.

Support Us

If you find this repository helpful, consider supporting us in the following ways:

  • ⭐ Star this repository on GitHub.

  • 🐦 Follow us on X (Twitter): @AITwinMinds

  • 📣 Join our Telegram Channel: AITwinMinds for discussions and announcements.

  • 🎥 Subscribe to our YouTube Channel: AITwinMinds for video tutorials and updates.

  • 📸 Follow us on Instagram: @AITwinMinds

Don't forget to share it with your friends!

Contact

For any inquiries, please contact us at [email protected].
