OmniPilot is your all-in-one AI copilot for Raycast. It connects to top LLM providers (OpenRouter, Google, OpenAI, and any OpenAI-compatible endpoint) to help you ask questions, translate text, analyze content, and more, all directly from your desktop. Stay focused, work smarter, and bring AI everywhere you need it.
- Dynamic Configuration Management: Add, edit, and switch between multiple AI providers (see the configuration sketch after this list)
- Multiple Provider Support: OpenRouter, OpenAI, Gemini, Anthropic, and any OpenAI-compatible API
- Secure Storage: API keys stored locally and encrypted
- Quick Switching: Change active LLM configuration with one click
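For a rough idea of what a stored configuration might look like, here is a TypeScript sketch. The field names, and the idea that exactly one configuration is active at a time, are assumptions for illustration rather than the extension's actual schema.

```typescript
// Hypothetical shape of a stored provider configuration; the extension's
// real schema may differ.
interface LLMConfig {
  id: string;
  name: string; // label shown in the Manage LLMs list
  provider: "openrouter" | "openai" | "anthropic" | "google" | "custom";
  baseURL: string; // endpoint for OpenAI-compatible providers
  apiKey: string; // kept in local, encrypted storage
  model: string; // default model for this configuration
  isActive: boolean;
}

// Switching the active configuration is conceptually a one-field update.
function setActiveConfig(configs: LLMConfig[], id: string): LLMConfig[] {
  return configs.map((c) => ({ ...c, isActive: c.id === id }));
}
```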
- Stream AI responses in real-time (see the streaming sketch after this list)
- Customizable prompts
- Copy responses to clipboard
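A minimal sketch of how a streamed request could look with the official `openai` SDK; the model name and prompt handling are placeholders, not necessarily what the extension does.

```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY ?? "" });

// Stream a completion and append each incremental delta as it arrives.
async function ask(prompt: string): Promise<string> {
  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });

  let answer = "";
  for await (const chunk of stream) {
    answer += chunk.choices[0]?.delta?.content ?? "";
  }
  return answer;
}
```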
- Translate selected text or manual input
- Smart language detection
- Customizable target languages (see the prompt sketch after this list)
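As an illustration of what a customizable translation prompt might look like; the wording and parameter names below are assumptions, not the extension's actual template.

```typescript
// Hypothetical prompt builder for the Translate command.
function buildTranslationPrompt(text: string, targetLanguage: string): string {
  return [
    `Translate the following text into ${targetLanguage}.`,
    "Detect the source language automatically and preserve the original formatting.",
    "",
    text,
  ].join("\n");
}
```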
- Explain complex concepts or code
- Educational prompts optimized for learning
- Context-aware explanations
- Automatically save all AI interactions
- Search through past conversations
- Copy previous prompts and responses
- Token Usage Tracking: View detailed token consumption (prompt, completion, total tokens) for each command; see the usage sketch after this list
- Color-coded token display for easy monitoring
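One way token usage can be read from a streamed, OpenAI-compatible response is via `stream_options: { include_usage: true }`, which makes the final chunk carry the totals. A sketch, assuming the `openai` SDK; the extension may gather usage differently.

```typescript
import OpenAI from "openai";

async function askWithUsage(client: OpenAI, prompt: string) {
  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: prompt }],
    stream: true,
    stream_options: { include_usage: true },
  });

  let text = "";
  let usage:
    | { prompt_tokens: number; completion_tokens: number; total_tokens: number }
    | undefined;

  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
    if (chunk.usage) usage = chunk.usage; // only present on the final chunk
  }
  return { text, usage };
}
```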
- Full Conversational AI: Engage in multi-turn conversations with memory
- Conversation Management: Create, save, and manage multiple chat conversations
- Persistent Memory: AI remembers context throughout the entire conversation (see the sketch after this list)
- Token Tracking: Monitor token usage across entire conversations
- Conversation History: Browse and continue previous chat sessions
- Customizable System Prompts: Set AI behavior and personality preferences
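Conceptually, multi-turn memory means the full message history, including the system prompt, is sent with every request. A sketch, again assuming an OpenAI-compatible client; the system prompt and model are illustrative.

```typescript
import OpenAI from "openai";

// The conversation starts with a configurable system prompt and grows
// with every user/assistant turn, so the model keeps full context.
const conversation: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a concise, helpful assistant." },
];

async function sendMessage(client: OpenAI, userInput: string): Promise<string> {
  conversation.push({ role: "user", content: userInput });

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: conversation,
  });

  const reply = completion.choices[0]?.message?.content ?? "";
  conversation.push({ role: "assistant", content: reply });
  return reply;
}
```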
- Remove individual entries or clear all
- Track providers, models and tokens used
- OpenRouter: Access to multiple AI models through a single API
- OpenAI: Direct integration with GPT models
- Anthropic: Support for Claude models
- Google: Access to multiple Gemini models
- Any OpenAI-compatible API (see the base-URL sketch after this list)
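Because these providers speak the OpenAI wire format, "any OpenAI-compatible API" usually just means pointing the client at a different base URL. A sketch; the URLs shown are common examples, not an exhaustive or required list.

```typescript
import OpenAI from "openai";

// OpenRouter exposes an OpenAI-compatible endpoint.
const openrouter = new OpenAI({
  apiKey: process.env.OPENROUTER_API_KEY ?? "",
  baseURL: "https://openrouter.ai/api/v1",
});

// A locally hosted server (e.g. Ollama) can be targeted the same way.
const local = new OpenAI({
  apiKey: "ollama", // many local servers ignore the key
  baseURL: "http://localhost:11434/v1",
});
```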
- Install the extension in Raycast
- Run "Manage LLMs" command
- Add your API configuration:
  - Choose a provider (OpenRouter, OpenAI, Anthropic, etc.)
  - Enter your API key
  - Set as active configuration
- Start using any AI command!
- Ask AI (`ask`): Ask questions to AI with streaming responses
- Chat (`chat`): Full conversational AI with memory and context
- Translate Text (`translate`): Translate text between languages
- Explain (`explain`): Get detailed explanations of complex concepts
- Command History (`command-history`): View and manage your AI conversation history
- Manage LLMs (`manage-llms`): Configure and manage AI provider settings
Built with a modular, type-safe architecture:
- Real-time Streaming: Natural incremental responses
- Multi-Provider Support: Easy switching between AI services
- Type Safety: Full TypeScript support
- Error Handling: Comprehensive error management (see the sketch after this list)
- Memory Efficient: Optimized streaming implementation
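As an illustration of what error handling around provider calls can look like; the wrapper and messages below are assumptions, not the extension's actual implementation.

```typescript
import OpenAI from "openai";

// Wrap any provider request and translate API failures into readable errors.
async function withErrorHandling<T>(request: () => Promise<T>): Promise<T> {
  try {
    return await request();
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      // e.g. 401 for a bad API key, 429 for rate limiting
      throw new Error(`Provider error (${error.status ?? "network"}): ${error.message}`);
    }
    throw error;
  }
}
```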
- Google Gemini Direct API: When using Google Gemini directly (not through OpenRouter), token usage information may not be available in the command history. This is due to Google's API not consistently providing usage data in streaming responses (see the sketch after this list for handling the missing data).
- Workaround: Use Google Gemini models through OpenRouter for full token tracking support.
- Other Providers: OpenAI, Anthropic, OpenRouter, and most OpenAI-compatible APIs provide complete token usage information.
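Given this, a history entry has to treat token counts as optional. A sketch of how the missing case might be handled; the field names are assumptions, not the extension's actual data model.

```typescript
// Hypothetical history entry; usage may be absent for direct Gemini calls.
interface HistoryEntry {
  prompt: string;
  response: string;
  provider: string;
  model: string;
  usage?: { promptTokens: number; completionTokens: number; totalTokens: number };
}

function formatUsage(entry: HistoryEntry): string {
  if (!entry.usage) return "Token usage unavailable";
  const { promptTokens, completionTokens, totalTokens } = entry.usage;
  return `${totalTokens} tokens (${promptTokens} prompt / ${completionTokens} completion)`;
}
```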