Fast, cached, scalable PLECS simulation framework with REST API and Web GUI
PyPLECS automates PLECS (power electronics simulation) with modern software engineering practices and enterprise-grade features:
- 🚀 5x faster batch simulations leveraging PLECS native parallel API
- 💾 100-1000x cache speedup on repeated simulations (hash-based deduplication)
- 🌐 REST API for language-agnostic integration (Python, MATLAB, JavaScript, etc.)
- 📊 Web GUI for real-time monitoring and control
- 🔄 Priority queue with automatic retry logic (CRITICAL/HIGH/NORMAL/LOW)
- 🎯 Smart orchestration batches tasks for optimal CPU utilization
- 📦 Flexible caching with Parquet, HDF5, or CSV storage
PyPLECS is built in two layers:
- Core Layer (`pyplecs.py`): Thin wrapper around PLECS XML-RPC + GUI automation
- Value-Add Layer: Orchestration, caching, API, and web UI (the features that make PLECS scalable)
```bash
# Install from PyPI (coming soon)
pip install pyplecs

# Or install from source
git clone https://github.com/tinix84/pyplecs.git
cd pyplecs
pip install -e .
```

For detailed installation instructions, see INSTALL.md.
```python
from pyplecs import PlecsServer

# Single simulation
with PlecsServer("model.plecs") as server:
    results = server.simulate({"Vi": 12.0, "Vo": 5.0})
    print(results["Vo"])  # Output voltage waveform

# Batch parallel simulations (3-5x faster!)
with PlecsServer("model.plecs") as server:
    params_list = [
        {"Vi": 12.0, "Vo": 5.0},
        {"Vi": 24.0, "Vo": 10.0},
        {"Vi": 48.0, "Vo": 20.0},
    ]
    results = server.simulate_batch(params_list)
    # PLECS parallelizes across CPU cores automatically
```

```bash
# Start API server
pyplecs-api
```
```bash
# Submit simulation via curl
curl -X POST http://localhost:8000/simulations \
  -H "Content-Type: application/json" \
  -d '{
    "model_file": "model.plecs",
    "parameters": {"Vi": 12.0},
    "priority": "HIGH"
  }'
```

```bash
# Start web interface
pyplecs-gui

# Open browser to http://localhost:5000
# - Submit simulations
# - Monitor queue status
# - View cache statistics
# - Real-time updates via WebSocket
```

Major refactoring with 39% code reduction and 5x performance improvement:
- Batch parallel API leveraging PLECS native parallelization (`simulate_batch()`)
- Simplified architecture aligned with PLECS capabilities
- Comprehensive migration guide (MIGRATION.md)
- Modular requirements for minimal installations

Deprecated:
- File-based variant generation (`generate_variant_plecs_file`, `generate_variant_plecs_mdl`)
- `GenericConverterPlecsMdl` class (use `pathlib.Path` and `PlecsServer` directly)
- `ModelVariant` class (use `SimulationRequest` with parameters)
- Python thread pool workers (redundant with PLECS batch API)
- `run_sim_with_datastream()` → use `simulate()`
- `load_modelvars()` → pass parameters directly to `simulate()`
- Legacy `PlecsServer(sim_path, sim_name)` → use `PlecsServer(model_file)`
See MIGRATION.md for detailed upgrade instructions.
| Approach | Time | Speedup |
|---|---|---|
| Sequential (v0.x) | 160s | 1x |
| Custom thread pool (v0.x) | 80s | 2x |
| PLECS batch API (v1.0.0) | ~40s | ~4x |
| With 30% cache hit rate | ~28s | ~5.7x |
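The last row of the table can be sanity-checked with simple arithmetic (an illustrative estimate, not a measurement): cache hits return almost instantly, so only the cache misses pay the batch-simulation cost.

```python
# Back-of-envelope estimate for the "30% cache hit rate" row above.
sequential_time = 160.0   # s, v0.x baseline from the table
batch_time = 40.0         # s, PLECS batch API time from the table
hit_rate = 0.30           # fraction of simulations served from cache

# Cache hits are near-instant, so only misses pay the batch cost.
effective_time = batch_time * (1.0 - hit_rate)   # ~28 s
speedup = sequential_time / effective_time       # ~5.7x

print(f"{effective_time:.0f}s, {speedup:.1f}x")
```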
- Batch parallelization: PLECS distributes work across CPU cores
- Hash-based cache: Instant retrieval for repeated simulations
- Optimal batching: Orchestrator groups tasks by model file
- Smart retry logic: Failed simulations don't block the queue
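As a rough sketch of the batching idea (the task structure here is hypothetical, not PyPLECS's internal representation), grouping queued tasks by model file yields one parameter list per `simulate_batch()` call:

```python
from collections import defaultdict

# Hypothetical task records; PyPLECS's internal task type may differ.
tasks = [
    {"model": "buck.plecs", "params": {"Vi": 12.0}},
    {"model": "boost.plecs", "params": {"Vi": 24.0}},
    {"model": "buck.plecs", "params": {"Vi": 48.0}},
]

# Group queued tasks by model file so each model gets a single
# simulate_batch() call instead of one call per task.
batches = defaultdict(list)
for task in tasks:
    batches[task["model"]].append(task["params"])

for model, params_list in batches.items():
    print(model, len(params_list))
```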
- 📖 Installation Guide - Setup, configuration, troubleshooting
- 🔄 Migration Guide v0.x → v1.0.0 - Upgrade instructions
- 🌐 REST API Reference - Endpoints, examples, authentication
- 💻 Web GUI Guide - Features, screenshots, usage
- 🏗️ Architecture Overview - Design decisions, patterns, conventions
- 👥 Contributing Guidelines - Development setup, workflow
- 📋 Changelog - Version history, breaking changes
- 📂 Legacy Variant Generation - Migration reference
- 🧪 Test Suite - Usage patterns and examples
```python
from pyplecs import PlecsServer

with PlecsServer("buck_converter.plecs") as server:
    # Sweep input voltage from 12V to 48V
    params = [{"Vi": v} for v in range(12, 49, 6)]
    results = server.simulate_batch(params)
    # Results are automatically cached
    # Re-running takes <1s instead of minutes!
```

```python
from pyplecs import SimulationOrchestrator, SimulationRequest, TaskPriority

orchestrator = SimulationOrchestrator()

# Submit critical design iterations with high priority
for params in design_iterations:
    request = SimulationRequest(
        model_file="converter.plecs",
        parameters=params,
        priority=TaskPriority.HIGH
    )
    task_id = await orchestrator.submit_simulation(request)
```

```yaml
# GitHub Actions example
- name: Run simulation tests
  run: |
    pyplecs-api &
    python scripts/run_sim_tests.py
    # Uses REST API for language-agnostic testing
```

```javascript
// JavaScript client calling PyPLECS REST API
const response = await fetch('http://localhost:8000/simulations', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({
    model_file: 'model.plecs',
    parameters: {Vi: 12.0}
  })
});
const {task_id} = await response.json();
```

- Hash-based deduplication: SHA256 of model + parameters
- Storage formats: Parquet (default, fast), HDF5 (large data), CSV (compatibility)
- Compression: Snappy, GZIP, LZ4
- Backends: File (implemented), Redis (planned), Memory (planned)
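A minimal sketch of hash-based deduplication, assuming the cache key is a SHA-256 digest of the model name plus canonically serialized parameters (PyPLECS's actual key derivation may differ, e.g. it may hash file contents):

```python
import hashlib
import json

def cache_key(model_file: str, parameters: dict) -> str:
    """Illustrative cache key: SHA-256 over model name + parameters."""
    payload = json.dumps(
        {"model": model_file, "params": parameters},
        sort_keys=True,  # canonical order, so parameter dict order is irrelevant
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Same model + same parameters -> same key, regardless of dict ordering.
k1 = cache_key("buck.plecs", {"Vi": 12.0, "Vo": 5.0})
k2 = cache_key("buck.plecs", {"Vo": 5.0, "Vi": 12.0})
assert k1 == k2
```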
- Priority queue: 4 levels (CRITICAL/HIGH/NORMAL/LOW)
- Batch execution: Groups tasks for parallel submission to PLECS
- Retry logic: Configurable attempts with exponential backoff
- Event callbacks: Monitor task lifecycle (queued → running → completed/failed)
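The priority-queue behavior can be sketched with Python's `heapq` (the level values and tie-breaking scheme here are assumptions, not PyPLECS internals):

```python
import heapq
import itertools

# Four hypothetical priority levels; lower value = higher priority.
CRITICAL, HIGH, NORMAL, LOW = 0, 1, 2, 3

_counter = itertools.count()
queue = []

def submit(priority: int, task: str) -> None:
    # The counter breaks ties so equal-priority tasks stay FIFO.
    heapq.heappush(queue, (priority, next(_counter), task))

submit(NORMAL, "sweep-1")
submit(CRITICAL, "design-fix")
submit(NORMAL, "sweep-2")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # CRITICAL task first, then NORMAL tasks in submission order
```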
- FastAPI with automatic OpenAPI documentation
- WebSocket for real-time updates
- Endpoints:
  - `POST /simulations` - Submit single simulation
  - `POST /simulations/batch` - Submit batch simulations
  - `GET /simulations/{task_id}` - Check status
  - `GET /cache/stats` - Cache statistics
  - `GET /stats` - Orchestrator statistics
- Dashboard: Real-time simulation monitoring
- Simulations Page: Submit and track simulations
- Cache Page: View hit rates, clear cache
- Settings Page: Edit configuration (planned)
| Platform | XML-RPC | GUI Automation | Notes |
|---|---|---|---|
| Windows | ✅ | ✅ | Full support |
| macOS | ✅ | ❌ | XML-RPC only |
| Linux/WSL | ✅ | ❌ | XML-RPC only |
Note: GUI automation requires pywinauto (Windows-only). All other features work cross-platform.
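The cross-platform guard implied by the table can be sketched as follows (the function name is illustrative, not part of the PyPLECS API):

```python
import sys

def gui_automation_available() -> bool:
    """Illustrative guard: GUI automation is attempted only on Windows,
    and only if the optional pywinauto dependency is installed.
    XML-RPC control works everywhere and needs no such check."""
    if not sys.platform.startswith("win"):
        return False
    try:
        import pywinauto  # noqa: F401  # Windows-only optional dependency
        return True
    except ImportError:
        return False

print(gui_automation_available())
```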
- Python 3.8+
- PLECS 4.2+
- Core dependencies (see requirements-core.txt)
- All core requirements
- GUI automation: `pywinauto`, `psutil` (Windows only)
- Web features: `fastapi`, `uvicorn`, `plotly`
- Advanced caching: `pyarrow`, `h5py`, `redis`
See INSTALL.md for detailed requirements and platform-specific instructions.
Configuration is managed via `config/default.yml`:
```yaml
plecs:
  executable: "C:/Program Files/Plexim/PLECS 4.7 (64 bit)/plecs.exe"
  xmlrpc:
    host: "localhost"
    port: 1080
    timeout: 300

orchestration:
  max_concurrent_simulations: 4  # Match CPU cores
  batch_size: 4
  retry_attempts: 3
  retry_delay: 5

cache:
  enabled: true
  directory: "./cache"
  storage_format: "parquet"  # parquet, hdf5, csv
  compression: "snappy"  # snappy, gzip, lz4
  ttl_seconds: 86400  # 24 hours

api:
  host: "0.0.0.0"
  port: 8000

webgui:
  host: "0.0.0.0"
  port: 5000
```

Run the `pyplecs-setup` wizard to auto-detect PLECS and create the config.
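Assuming a doubling schedule (the exact backoff formula is not specified here), the `retry_attempts: 3` and `retry_delay: 5` settings above would produce retry delays like:

```python
def backoff_delays(retry_attempts, retry_delay):
    """Illustrative exponential backoff: base delay doubles per attempt.
    PyPLECS may use a different formula (jitter, cap, etc.)."""
    return [retry_delay * (2 ** attempt) for attempt in range(retry_attempts)]

print(backoff_delays(3, 5))  # [5, 10, 20]
```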
PyPLECS provides multiple command-line tools:
```bash
# Setup wizard
pyplecs-setup

# Start REST API server
pyplecs-api

# Start web GUI
pyplecs-gui

# Start MCP server (future)
pyplecs-mcp
```

```bash
git clone https://github.com/tinix84/pyplecs.git
cd pyplecs
pip install -e .
pip install -r requirements-dev.txt
```

```bash
# All tests
pytest

# Specific test categories
pytest tests/test_basic.py                    # Legacy tests
pytest tests/test_plecs_server_refactored.py  # Core API tests
pytest tests/test_orchestrator_batch.py       # Orchestration tests

# With coverage
pytest --cov=pyplecs tests/

# Benchmarks
pytest tests/benchmark_batch_speedup.py -v -s
```

```bash
# Format code
black pyplecs/
isort pyplecs/

# Lint
flake8 pyplecs/
mypy pyplecs/
```

See CONTRIBUTING.md for detailed development guidelines.
PyPLECS originated in 2019 as a personal automation tool for PLECS simulations. Over time, it evolved into a comprehensive framework with web UI, REST API, and enterprise features.
v1.0.0 (2025) represents a major refactoring that:
- Eliminated 39% of code by leveraging PLECS native capabilities
- Improved performance by 5x through batch parallelization
- Simplified architecture by removing unnecessary abstractions
- Aligned design with PLECS XML-RPC API instead of fighting against it
The lesson: Sometimes the best code is the code you don't write.
- PyPI package distribution
- Enhanced authentication for REST API
- Redis cache backend
- Improved Web GUI settings page
- Remove deprecated methods (breaking change)
- Optimization engine (genetic algorithms, Bayesian optimization)
- Model Context Protocol (MCP) server
- Distributed orchestration across multiple machines
- PLECS component library management
Contributions welcome! See CONTRIBUTING.md for:
- Development setup
- Code style guidelines
- Testing requirements
- Pull request process
MIT © 2020-2025 Riccardo Tinivella
If you use PyPLECS in academic research, please cite:
```bibtex
@software{pyplecs2025,
  author = {Tinivella, Riccardo},
  title = {PyPLECS: Fast, Cached, Scalable PLECS Simulation Framework},
  year = {2025},
  version = {1.0.0},
  url = {https://github.com/tinix84/pyplecs}
}
```

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: tinix84@gmail.com
- PLECS by Plexim for the excellent simulation software
- FastAPI for the modern Python web framework
- Claude Code for AI-assisted refactoring and documentation
Happy simulating! 🚀