Third Strand Studio
🚀 Crafted with care by Third Strand Studio - Where code meets creativity
This repository contains scripts to automate the installation of Ollama and Open WebUI on macOS Sequoia 15.4.1, optimized for Apple Silicon (M-series) processors.
To install Ollama + Open WebUI, run the following in your terminal:
curl -fsSL https://raw.githubusercontent.com/thirdstrandstudio/ollama-openwebui-osx/main/install_wrapper.sh -o install_wrapper.sh
chmod +x install_wrapper.sh
./install_wrapper.sh
This will:
- Verify your system (Apple Silicon, macOS 15.4.1 or later)
- Install Homebrew if needed
- Install Docker, Node.js, Python 3.11
- Install or update Ollama and Open WebUI
- Add CLI aliases: ollama-start, ollama-stop, ollama-status, ollama-models, etc.
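The system-verification step can be sketched roughly as follows. This is a hypothetical sketch, not the actual contents of install_wrapper.sh; the function names are illustrative:

```shell
# Hypothetical sketch of the installer's preflight checks.
check_arch() {
  # Apple Silicon reports arm64 from `uname -m`
  [ "$1" = "arm64" ]
}

check_macos_version() {
  # True if $1 is 15.4.1 or later; sort -V orders version strings numerically
  required="15.4.1"
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

check_arch "$(uname -m)" || echo "Apple Silicon (arm64) is required."
check_macos_version "$(sw_vers -productVersion 2>/dev/null || echo 0)" \
  || echo "macOS 15.4.1 or later is required."
```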
Or, you can download this repository, navigate to its directory, and run:
chmod +x install_ollama.sh
./install_ollama.sh
Features:
- Automatic Installation: Installs Ollama and Open WebUI with optimal configurations
- Apple Silicon Optimized: Special configurations for M-series processors
- Model Management: Easy interface to download, update, and remove models
- Convenient Scripts: Start, stop, and check status with simple commands
- Backup & Restore: Built-in backup functionality with easy restore process
- Shell Aliases: Quick access to common functions through shell aliases
The installer includes a model management tool with access to popular models:
- Mistral Family: mistral, mistral-openorca, mistral-instruct
- Llama 3 Family: llama3, llama3:8b-instruct, llama3:70b
- Code Models: codellama:7b, codellama:13b, codellama:34b
- Multimodal Models: llava, bakllava
- Specialized Models: neural-chat, orca-mini, phi, stablelm-zephyr, gemma:2b, gemma:7b
And support for custom model installation from Ollama's library.
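Custom models are pulled with the standard ollama CLI (ollama pull / ollama list / ollama rm). A guarded sketch that skips models already present; pull_if_missing is an illustrative helper, not one of this repo's scripts:

```shell
# Pull a model from Ollama's library only if it is not already installed.
# pull_if_missing is an illustrative helper, not part of this repo.
pull_if_missing() {
  model="$1"
  if ! command -v ollama >/dev/null 2>&1; then
    echo "ollama CLI not found" >&2
    return 1
  fi
  # `ollama list` prints installed models with tags, e.g. "mistral:latest"
  if ollama list | awk 'NR>1 {print $1}' | grep -q "^${model}"; then
    echo "${model} already installed"
  else
    ollama pull "${model}"
  fi
}

pull_if_missing mistral || true
```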
Requirements:
- macOS Sequoia (15.4.1) or later
- Apple Silicon (M-series) processor (M1/M2/M3/M4)
- At least 8GB RAM (16GB+ recommended)
- At least 20GB free disk space (more for multiple models)
- Internet connection
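You can roughly check your machine against these requirements from the terminal. The RAM query is macOS-specific, and bytes_to_gb is an illustrative helper:

```shell
# Rough self-check against the requirements above.
bytes_to_gb() { echo $(( $1 / 1073741824 )); }   # 1 GB = 2^30 bytes

# Total RAM (sysctl hw.memsize is macOS-specific)
ram_gb=$(bytes_to_gb "$(sysctl -n hw.memsize 2>/dev/null || echo 0)")
echo "RAM: ${ram_gb} GB (8 GB minimum, 16 GB+ recommended)"

# Free space on the root volume (individual models can be several GB each)
df -h / | awk 'NR==2 {print "Free disk: " $4}'
```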
Once the installation is complete and you've sourced your shell configuration (or opened a new terminal), you can use the following commands:
ollama-start # Start Ollama and Open WebUI
ollama-stop # Stop Ollama and Open WebUI
ollama-status # Check status of Ollama and Open WebUI
ollama-models # Manage models (download, remove, update)
ollama-backup # Create a backup of Ollama and Open WebUI data
ollama-webui # Open the web interface in your default browser
If you prefer not to use aliases, you can use the scripts directly:
~/ollama-scripts/start-ollama-suite.sh
~/ollama-scripts/stop-ollama-suite.sh
~/ollama-scripts/status-ollama-suite.sh
~/ollama-scripts/manage-models.sh
~/ollama-scripts/backup-ollama.sh
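The aliases presumably just wrap these scripts in your shell configuration (e.g. ~/.zshrc). A sketch of what they likely look like; the exact lines the installer writes may differ:

```shell
# Presumed shape of the installed aliases; the actual lines written by the
# installer may differ.
alias ollama-start='~/ollama-scripts/start-ollama-suite.sh'
alias ollama-stop='~/ollama-scripts/stop-ollama-suite.sh'
alias ollama-status='~/ollama-scripts/status-ollama-suite.sh'
alias ollama-models='~/ollama-scripts/manage-models.sh'
alias ollama-backup='~/ollama-scripts/backup-ollama.sh'
alias ollama-webui='open http://localhost:3000'  # macOS `open` launches the default browser
```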
After starting the Ollama suite, access the Open WebUI at:
http://localhost:3000
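To confirm both services are actually up, you can probe their HTTP endpoints. The ports assume this installer's defaults (Ollama's API on 11434, Open WebUI on 3000), and check_endpoint is an illustrative helper:

```shell
# Probe both services; prints "up" or "down" for each.
check_endpoint() {
  url="$1"; name="$2"
  if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
    echo "${name}: up"
  else
    echo "${name}: down"
  fi
}

check_endpoint "http://localhost:11434/api/tags" "Ollama API"
check_endpoint "http://localhost:3000" "Open WebUI"
```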
Run the backup script:
ollama-backup
This creates a timestamped backup folder in your home directory.
To restore from a backup:
- Navigate to the backup folder:
  cd ~/ollama_backup_TIMESTAMP
- Run the restore script:
  ./restore.sh
If you encounter issues:
- Check system status:
  ollama-status
- Restart the services:
  ollama-stop && ollama-start
- If Open WebUI doesn't start, check that Docker is running
- For model issues, try removing and reinstalling the model:
  ollama-models
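For the Docker check specifically, `docker info` exits nonzero whenever the daemon is unreachable, which makes it easy to script; docker_running is an illustrative helper:

```shell
# True only when the Docker daemon is reachable.
docker_running() {
  docker info >/dev/null 2>&1
}

if ! docker_running; then
  echo "Docker is not running; start Docker Desktop, then retry ollama-start."
fi
```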
This script is provided under the MIT License. Feel free to modify and distribute as needed.
Note: This installer is not officially affiliated with Ollama or Open WebUI projects.