PyAIBridge

High-performance unified API library for multiple LLM providers with modern Python best practices.

PyAIBridge provides a consistent interface for interacting with multiple Large Language Model (LLM) providers, abstracting away the complexity of different provider APIs while offering automatic retry logic, cost tracking, metrics collection, and streaming support.

Features

  • 🚀 Unified Interface: Single API for multiple LLM providers
  • ⚡ High Performance: Async/await, connection pooling, HTTP/2 support
  • 🛡️ Robust Error Handling: Comprehensive exception hierarchy
  • 🔄 Smart Retries: Exponential backoff with rate limit respect
  • 📊 Built-in Metrics: Cost tracking, performance monitoring
  • 🌊 Streaming Support: Real-time response streaming
  • 🔒 Type Safety: Full type hints and validation with Pydantic
  • ✅ Well Tested: Comprehensive test coverage

Supported Providers

  • 🤖 OpenAI - GPT-4.1, GPT-4o, GPT-4-turbo, GPT-3.5-turbo, O-series reasoning models
  • 🧠 Google - Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, Gemini 1.5 series
  • 🔮 Anthropic - Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, Claude 3.5 Sonnet
  • 🚀 xAI - Grok Beta, Grok models
  • 🔧 More providers - Cohere, Ollama (coming soon)
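
Each provider is instantiated by name through LLMFactory. The "openai" and "google" identifiers appear in the examples below; "anthropic" and "xai" are assumed to follow the same naming pattern:

from pyaibridge import LLMFactory, ProviderConfig

# "openai" and "google" are used elsewhere in this README;
# "anthropic" and "xai" are assumed identifiers following the same pattern.
for name in ("openai", "google", "anthropic", "xai"):
    provider = LLMFactory.create_provider(name, ProviderConfig(api_key="your-api-key"))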

Installation

pip install pyaibridge

Quick Start

import asyncio
from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig

async def main():
    # Create provider
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)
    
    # Create request
    request = ChatRequest(
        messages=[
            Message(role=MessageRole.USER, content="Hello, world!")
        ],
        model="gpt-4.1-mini",
        max_tokens=100,
    )
    
    # Generate response
    async with provider:
        response = await provider.chat(request)
        print(response.content)

asyncio.run(main())

Streaming Example

import asyncio
from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig

async def main():
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)
    
    request = ChatRequest(
        messages=[Message(role=MessageRole.USER, content="Tell me a story")],
        model="gpt-4.1-mini",
    )
    
    async with provider:
        async for chunk in provider.stream_chat(request):
            if chunk.content:
                print(chunk.content, end="", flush=True)

asyncio.run(main())
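
If the complete reply is also needed after streaming, the chunks can be collected while printing. A small helper sketch, using only the chunk.content field shown above:

async def stream_and_collect(provider, request) -> str:
    # Print chunks as they arrive and return the assembled reply
    parts = []
    async with provider:
        async for chunk in provider.stream_chat(request):
            if chunk.content:
                print(chunk.content, end="", flush=True)
                parts.append(chunk.content)
    return "".join(parts)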

Advanced Usage

Error Handling

from pyaibridge import (
    LLMFactory,
    ChatRequest,
    Message,
    MessageRole,
    ProviderConfig,
    AuthenticationError,
    RateLimitError,
    ProviderError,
)

async def main():
    request = ChatRequest(
        messages=[Message(role=MessageRole.USER, content="Hello!")],
        model="gpt-4.1-mini",
    )
    try:
        config = ProviderConfig(api_key="invalid-key")
        provider = LLMFactory.create_provider("openai", config)
        async with provider:
            response = await provider.chat(request)
            print(response.content)
    except AuthenticationError:
        print("Invalid API key")
    except RateLimitError as e:
        print(f"Rate limited. Retry after {e.retry_after} seconds")
    except ProviderError as e:
        print(f"Provider error: {e.message}")
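
Because RateLimitError carries retry_after, application code can also wait and retry on top of the library's built-in retries. A minimal sketch, using the imports above:

import asyncio

async def chat_with_one_retry(provider, request):
    # Sketch only: wait the provider-suggested interval, then retry once
    try:
        return await provider.chat(request)
    except RateLimitError as e:
        await asyncio.sleep(e.retry_after or 1.0)
        return await provider.chat(request)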

Metrics Collection

from pyaibridge.utils.metrics import metrics

# Metrics are automatically collected
config = ProviderConfig(api_key="your-key")
provider = LLMFactory.create_provider("openai", config)
async with provider:  # inside an async function, with request defined as above
    response = await provider.chat(request)

# Get metrics summary
summary = metrics.get_summary()
print(f"Total requests: {summary['openai']['request_count']}")
print(f"Total cost: ${summary['openai']['total_cost']:.6f}")
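
The summary appears to be keyed by provider name (as the 'openai' lookups above suggest), so reporting across every provider in use is a short loop; a sketch under that assumption:

for name, stats in metrics.get_summary().items():
    print(f"{name}: {stats['request_count']} requests, ${stats['total_cost']:.6f}")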

Cost Calculation

# Automatic cost calculation (inside an async context, with request defined as above)
response = await provider.chat(request)
cost = provider.calculate_cost(response.usage.dict(), response.model)
print(f"Cost: ${cost:.6f}")
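
The same call makes batch budget tracking straightforward. A sketch that sums costs over any iterable of ChatRequest objects, using only the calculate_cost call shown above:

async def batch_cost(provider, requests) -> float:
    # Accumulate the computed cost of each response in a batch
    total = 0.0
    async with provider:
        for request in requests:
            response = await provider.chat(request)
            total += provider.calculate_cost(response.usage.dict(), response.model)
    return total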

Configuration

Provider Configuration

config = ProviderConfig(
    api_key="your-api-key",
    base_url="https://api.openai.com/v1",  # Custom base URL
    max_retries=3,                         # Retry attempts
    timeout=30.0,                          # Request timeout
    rate_limit=60,                         # Requests per minute
)
provider = LLMFactory.create_provider("openai", config)
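
API keys are best kept out of source code. A common pattern, using plain os.environ rather than anything PyAIBridge-specific, matches the variable names used in the testing section below:

import os

config = ProviderConfig(api_key=os.environ["OPENAI_API_KEY"])
provider = LLMFactory.create_provider("openai", config)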

Request Parameters

request = ChatRequest(
    messages=[...],
    model="gpt-4.1-mini",
    max_tokens=1000,
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n", "END"],
    user="user-123",
    timeout=60.0,
)
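
The same request shape covers multi-turn conversations. MessageRole.SYSTEM and MessageRole.USER appear elsewhere in this README; MessageRole.ASSISTANT is assumed here for prior model turns:

request = ChatRequest(
    messages=[
        Message(role=MessageRole.SYSTEM, content="You are a concise assistant."),
        Message(role=MessageRole.USER, content="What is HTTP/2?"),
        Message(role=MessageRole.ASSISTANT, content="A binary, multiplexed revision of HTTP."),
        Message(role=MessageRole.USER, content="How does it differ from HTTP/1.1?"),
    ],
    model="gpt-4.1-mini",
)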

Real-World Examples

Content Generation Platform

from pyaibridge import LLMFactory, Message, MessageRole, ChatRequest, ProviderConfig

async def generate_summary(posts: list) -> str:
    """Generate AI summary of Reddit discussions."""
    config = ProviderConfig(api_key="your-openai-key")
    provider = LLMFactory.create_provider("openai", config)
    
    # Prepare content for summarization
    content = "\n".join([f"Post: {post.headline}" for post in posts[:10]])
    
    prompt = f"""
    Summarize these discussions in 2-3 sentences:
    {content}
    
    Focus on main sentiment and key themes.
    """
    
    messages = [
        Message(role=MessageRole.SYSTEM, content="You are a financial news summarizer."),
        Message(role=MessageRole.USER, content=prompt)
    ]
    
    request = ChatRequest(
        messages=messages,
        model="gpt-4.1-mini",
        temperature=0.3,
        max_tokens=100
    )
    
    async with provider:
        response = await provider.chat(request)
        return response.content.strip()
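
generate_summary can then be called from any async context. The Post class below is a hypothetical stand-in for whatever objects carry a headline attribute:

import asyncio
from dataclasses import dataclass

@dataclass
class Post:  # hypothetical stand-in for your post objects
    headline: str

posts = [Post("Markets rally on earnings"), Post("Fed holds rates steady")]
print(asyncio.run(generate_summary(posts)))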

Multi-Provider Comparison

async def compare_providers():
    # Setup multiple providers
    openai_config = ProviderConfig(api_key="openai-key")
    google_config = ProviderConfig(api_key="google-key")
    
    openai_provider = LLMFactory.create_provider("openai", openai_config)
    google_provider = LLMFactory.create_provider("google", google_config)
    
    question = "What are the benefits of renewable energy?"
    messages = [Message(role=MessageRole.USER, content=question)]
    
    async with openai_provider, google_provider:
        # OpenAI response
        openai_request = ChatRequest(messages=messages, model="gpt-4.1-mini")
        openai_response = await openai_provider.chat(openai_request)
        
        # Google response
        google_request = ChatRequest(messages=messages, model="gemini-2.5-flash")
        google_response = await google_provider.chat(google_request)
        
        print("OpenAI:", openai_response.content[:100] + "...")
        print("Google:", google_response.content[:100] + "...")
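
Because both providers are async, the two calls need not run sequentially. The same requests can be issued concurrently with standard asyncio.gather, using the imports from the earlier examples:

import asyncio

async def compare_providers_concurrently():
    openai_provider = LLMFactory.create_provider("openai", ProviderConfig(api_key="openai-key"))
    google_provider = LLMFactory.create_provider("google", ProviderConfig(api_key="google-key"))

    messages = [Message(role=MessageRole.USER, content="What are the benefits of renewable energy?")]

    async with openai_provider, google_provider:
        # Issue both requests at once and await both responses
        openai_response, google_response = await asyncio.gather(
            openai_provider.chat(ChatRequest(messages=messages, model="gpt-4.1-mini")),
            google_provider.chat(ChatRequest(messages=messages, model="gemini-2.5-flash")),
        )

    print("OpenAI:", openai_response.content[:100] + "...")
    print("Google:", google_response.content[:100] + "...")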

Examples

Check out the examples/ directory for more examples:

  • basic_usage.py - Basic chat completion
  • streaming_example.py - Streaming responses
  • metrics_example.py - Metrics collection
  • multi_provider_comparison.py - Comparing multiple providers
  • google_usage.py - Google Gemini integration
  • openai_latest_models.py - Latest OpenAI models

Development

# Clone repository
git clone https://github.com/sixteen-dev/pyaibridge.git
cd pyaibridge

# Install with development dependencies
uv sync --dev

# Run tests
uv run pytest

# Run linting
uv run ruff check src/
uv run ruff format src/

# Run type checking
uv run mypy src/

Testing and Deployment

Automated Testing

# Run comprehensive package tests
uv run python scripts/test_package.py

# Test with real API keys (optional)
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export CLAUDE_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."
uv run python scripts/test_real_api.py

Automated Deployment via GitHub Actions

The repository includes automated CI/CD with GitHub Actions:

  • TestPyPI: Auto-deploys on push to develop branch
  • PyPI: Auto-deploys on GitHub release creation
  • Security: Automated security scanning and code quality checks

Setup:

  1. Configure OIDC trusted publishing on PyPI/TestPyPI
  2. Create GitHub environments: pypi, test-pypi, api-testing
  3. No API tokens needed - uses secure OIDC authentication

Deploy to TestPyPI:

git push origin develop

Deploy to PyPI:

gh release create v0.1.3 --title "Release v0.1.3"

See GITHUB_DEPLOYMENT.md for the complete setup guide.

Manual Deployment

# Build package
uv build

# Test installation locally
uv pip install dist/pyaibridge-*.whl

# Deploy using scripts
uv run python scripts/deploy_testpypi.py  # TestPyPI
uv run twine upload dist/*                # PyPI

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Ensure all tests pass (uv run pytest)
  5. Run linting (uv run ruff check src/)
  6. Submit a pull request

License

MIT License - see LICENSE file for details.

Changelog

0.1.1

  • Added Google Gemini provider support
  • Comprehensive test coverage (48 tests passing)
  • Updated to respx for HTTP mocking
  • Fixed Pydantic v2 compatibility
  • Added extensive documentation with real-world scenarios

0.1.0

  • Initial release with OpenAI provider support
  • Basic chat completion and streaming
  • Error handling and retry logic
  • Metrics collection and cost calculation
  • Type safety with Pydantic models


Support

For questions, issues, or feature requests, please open an issue on GitHub.
