
Webscout

Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More

Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.

Important

Webscout supports three types of compatibility:

  • Native Compatibility: Webscout's own native API for maximum flexibility
  • OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
  • Local LLM Compatibility: Run local models with Inferno, an OpenAI-compatible server (now a standalone package)

Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
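
For a quick feel of the first two approaches, here is a minimal sketch built from patterns shown later in this README (the Meta provider for native usage, and the local OpenAI-compatible API server for the OpenAI route). The server address and model name are illustrative defaults, not fixed values.

# Native Webscout provider (see the AI Chat Providers section below)
from webscout import Meta

bot = Meta()  # basic usage needs no authentication
print(bot.chat("What is the capital of France?"))

# OpenAI-compatible route: point the official OpenAI client at Webscout's
# local API server (see the OpenAI-Compatible API Server section below)
from openai import OpenAI

client = OpenAI(
    api_key="not-needed",                 # required by the client; ignored unless the server sets a key
    base_url="http://localhost:8000/v1"   # assumes the server is running on the default port 8000
)
reply = client.chat.completions.create(
    model="gpt-4",  # any model name registered with Webscout
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(reply.choices[0].message.content)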

Note

Webscout supports over 90 AI providers including: LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.



πŸš€ Features

Search & AI

  • Comprehensive Search: Leverage Google, DuckDuckGo, and Yep for diverse search results
  • AI Powerhouse: Access and interact with various AI models through three compatibility options:
    • Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
    • OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
    • Local LLMs with Inferno: Run local models with an OpenAI-compatible server (now available as a standalone package)
  • AI Search: AI-powered search engines with advanced capabilities

Media & Content Tools

  • YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
  • Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
  • Text-to-Image: Generate high-quality images using a wide range of AI art providers
  • Weather Tools: Retrieve detailed weather information for any location

Developer Tools

  • GitAPI: GitHub data extraction toolkit that requires no authentication for public data
  • SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
  • LitPrinter: Styled console output with rich formatting and colors
  • LitLogger: Simplified logging with customizable formats and color schemes
  • LitAgent: Modern user agent generator that keeps your requests undetectable
  • Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
  • Inferno: Run local LLMs with an OpenAI-compatible API and interactive CLI (now a standalone package: pip install inferno-llm)
  • GGUF Conversion: Convert and quantize Hugging Face models to GGUF format

Privacy & Utilities

  • Tempmail & Temp Number: Generate temporary email addresses and phone numbers
  • Awesome Prompts: Curated collection of system prompts for specialized AI personas


βš™οΈ Installation

Install Webscout using pip:

pip install -U webscout

πŸ–₯️ Command Line Interface

Webscout provides a powerful command-line interface for quick access to its features:

python -m webscout --help

Web Search Commands

| Command | Description |
|---------|-------------|
| python -m webscout answers -k "query" | Perform an answers search |
| python -m webscout images -k "query" | Search for images |
| python -m webscout maps -k "query" | Perform a maps search |
| python -m webscout news -k "query" | Search for news articles |
| python -m webscout suggestions -k "query" | Get search suggestions |
| python -m webscout text -k "query" | Perform a text search |
| python -m webscout translate -k "text" | Translate text |
| python -m webscout version | Display the current version |
| python -m webscout videos -k "query" | Search for videos |
| python -m webscout weather -l "location" | Get weather information |
| python -m webscout google_text -k "query" | Perform a text search using Google |
| python -m webscout google_news -k "query" | Search for news using Google |
| python -m webscout google_suggestions -q "query" | Get search suggestions from Google |
| python -m webscout yep_text -k "query" | Perform a text search using Yep |
| python -m webscout yep_images -k "query" | Search for images using Yep |
| python -m webscout yep_suggestions -q "query" | Get search suggestions from Yep |

Inferno LLM Commands

Inferno is now a standalone package. Install it separately with:

pip install inferno-llm

After installation, you can use its CLI for managing and using local LLMs:

inferno --help

| Command | Description |
|---------|-------------|
| inferno pull <model> | Download a model from Hugging Face |
| inferno list | List downloaded models |
| inferno serve <model> | Start a model server with OpenAI-compatible API |
| inferno run <model> | Chat with a model interactively |
| inferno remove <model> | Remove a downloaded model |
| inferno version | Show version information |

For more information, visit the Inferno GitHub repository or PyPI package page.

Note

Hardware requirements for running models with Inferno:

  • Around 2 GB of RAM for 1B models
  • Around 4 GB of RAM for 3B models
  • At least 8 GB of RAM for 7B models
  • 16 GB of RAM for 13B models
  • 32 GB of RAM for 33B models
  • GPU acceleration is recommended for better performance
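
Once a model is served with inferno serve, any OpenAI-compatible client can talk to it. A minimal sketch, assuming the server listens at http://localhost:8000/v1 (check the inferno serve output for the actual host, port, and model id):

from openai import OpenAI

# Point the OpenAI client at the local Inferno server (address is an assumption;
# use whatever host/port `inferno serve` reports)
client = OpenAI(api_key="not-needed", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="your-pulled-model",  # placeholder: the model you downloaded with `inferno pull`
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}]
)
print(response.choices[0].message.content)
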
πŸ”„ OpenAI-Compatible API Server

Webscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.

Starting the API Server

From Command Line

# Start with default settings (port 8000)
python -m webscout.Provider.OPENAI.api

# Start with custom port
python -m webscout.Provider.OPENAI.api --port 8080

# Start with API key authentication
python -m webscout.Provider.OPENAI.api --api-key "your-secret-key"

# Specify a default provider
python -m webscout.Provider.OPENAI.api --default-provider "Claude"

# Run in debug mode
python -m webscout.Provider.OPENAI.api --debug

From Python Code

# Method 1: Using the helper function
from webscout.Provider.OPENAI.api import start_server

# Start with default settings
start_server()

# Start with custom settings
start_server(port=8080, api_key="your-secret-key", default_provider="Claude")

# Method 2: Using the run_api function for more control
from webscout.Provider.OPENAI.api import run_api

run_api(
    host="0.0.0.0",
    port=8080,
    api_key="your-secret-key",
    default_provider="Claude",
    debug=True
)

Using the API

Once the server is running, you can use it with any OpenAI client library or tool:

# Using the OpenAI Python client
from openai import OpenAI

client = OpenAI(
    api_key="your-secret-key",  # Only needed if you set an API key
    base_url="http://localhost:8000/v1"  # Point to your local server
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",  # This can be any model name registered with Webscout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)
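
If the underlying provider supports it, the same client can also stream tokens as they are generated. This is a hedged sketch: whether streaming works depends on the provider selected behind the server.

# Streaming chat completion (assumes the chosen provider supports streaming)
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a haiku about search engines."}],
    stream=True
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()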

Using with cURL

# Basic chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-key"

Available Endpoints

  • GET /v1/models - List all available models
  • GET /v1/models/{model_name} - Get information about a specific model
  • POST /v1/chat/completions - Create a chat completion
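
Because the endpoints follow the OpenAI REST schema, plain HTTP works too. A small sketch using requests, assuming the response bodies mirror OpenAI's list and completion formats:

import requests

BASE_URL = "http://localhost:8000/v1"
HEADERS = {"Authorization": "Bearer your-secret-key"}  # only needed if the server was started with an API key

# List all available models
models = requests.get(f"{BASE_URL}/models", headers=HEADERS).json()
for entry in models.get("data", []):
    print(entry.get("id"))

# Create a chat completion
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
}
completion = requests.post(f"{BASE_URL}/chat/completions", headers=HEADERS, json=payload).json()
print(completion["choices"][0]["message"]["content"])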


πŸ” Search Engines

Webscout provides multiple search engine interfaces for diverse search capabilities.

YepSearch - Yep.com Interface

from webscout import YepSearch

# Initialize YepSearch
yep = YepSearch(
    timeout=20,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)

# Get search suggestions
suggestions = yep.suggestions("hist")

GoogleSearch - Google Interface

from webscout import GoogleSearch

# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)

# Get search suggestions
suggestions = google.suggestions("how to")

# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)

πŸ¦† DuckDuckGo Search with WEBS and AsyncWEBS

Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the WEBS and AsyncWEBS classes.

Synchronous Usage with WEBS

from webscout import WEBS

# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")

Asynchronous Usage with AsyncWEBS

import asyncio
from webscout import AsyncWEBS

async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]
        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)

    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())

Tip

Always use these classes with a context manager (with statement) to ensure proper resource management and cleanup.


πŸ’» WEBS API Reference

The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.

Available Search Methods

| Method | Description | Example |
|--------|-------------|---------|
| text() | General web search | webs.text('python programming') |
| answers() | Instant answers | webs.answers('population of france') |
| images() | Image search | webs.images('nature photography') |
| videos() | Video search | webs.videos('documentary') |
| news() | News articles | webs.news('technology') |
| maps() | Location search | webs.maps('restaurants', place='new york') |
| translate() | Text translation | webs.translate('hello', to='es') |
| suggestions() | Search suggestions | webs.suggestions('how to') |
| weather() | Weather information | webs.weather('london') |

Example: Text Search

from webscout import WEBS

with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',        # Optional: Region for results
        safesearch='off',      # Optional: 'on', 'moderate', 'off'
        timelimit='y',         # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10         # Optional: Limit number of results
    )

    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")

Example: News Search with Formatting

from webscout import WEBS
import datetime

def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )

        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')

            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"

            formatted_news.append(entry)

        return formatted_news

# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))

Example: Weather Information

from webscout import WEBS

with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")

    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")


πŸ€– AI Models and Voices

Webscout provides easy access to a wide range of AI models and voice options.

LLM Models

Access and manage Large Language Models with Webscout's model utilities.

from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")

TTS Voices

Access and manage Text-to-Speech voices across multiple providers.

from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")


πŸ’¬ AI Chat Providers

Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.

Popular AI Providers

| Provider | Description | Key Features |
|----------|-------------|--------------|
| OPENAI | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| GEMINI | Google's Gemini models | Web search capabilities |
| Meta | Meta's AI assistant | Image generation, web search |
| GROQ | Fast inference platform | High-speed inference, tool calling |
| LLAMA | Meta's Llama models | Open weights models |
| DeepInfra | Various open models | Multiple model options |
| Cohere | Cohere's language models | Command models |
| PerplexityLabs | Perplexity AI | Web search integration |
| YEPCHAT | Yep.com's AI | Streaming responses |
| ChatGPTClone | ChatGPT-like interface | Multiple model options |
| TypeGPT | TypeChat models | Multiple model options |

Example: Using Meta AI

from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])

Example: GROQ with Tool Calling

from webscout import GROQ, WEBS
import json

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        result = eval(expression)  # NOTE: eval is unsafe with untrusted input; use only for trusted expressions
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        results = WEBS().text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)

GGUF Model Conversion

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.

from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                  # Quantization method
)

# Run the conversion
converter.convert()

Available Quantization Methods

| Method | Description |
|--------|-------------|
| fp16 | 16-bit floating point - maximum accuracy, largest size |
| q2_k | 2-bit quantization - smallest size, lowest accuracy |
| q3_k_l | 3-bit quantization (large) - balanced for size/accuracy |
| q3_k_m | 3-bit quantization (medium) - good balance for most use cases |
| q3_k_s | 3-bit quantization (small) - optimized for speed |
| q4_0 | 4-bit quantization (version 0) - standard 4-bit compression |
| q4_1 | 4-bit quantization (version 1) - improved accuracy over q4_0 |
| q4_k_m | 4-bit quantization (medium) - balanced for most models |
| q4_k_s | 4-bit quantization (small) - optimized for speed |
| q5_0 | 5-bit quantization (version 0) - high accuracy, larger size |
| q5_1 | 5-bit quantization (version 1) - improved accuracy over q5_0 |
| q5_k_m | 5-bit quantization (medium) - best balance for quality/size |
| q5_k_s | 5-bit quantization (small) - optimized for speed |
| q6_k | 6-bit quantization - highest accuracy, largest size |
| q8_0 | 8-bit quantization - maximum accuracy, largest size |

Command Line Usage

python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"


🀝 Contributing

Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:

  1. Fork the repository
  2. Create a new branch for your feature or bug fix
  3. Make your changes and commit them with descriptive messages
  4. Push your branch to your forked repository
  5. Submit a pull request to the main repository

πŸ™ Acknowledgments

  • All the amazing developers who have contributed to the project
  • The open-source community for their support and inspiration

Made with ❀️ by the Webscout team
