# Python React ML

Run Python ML models directly in React and React Native apps - no backend required!

## Features
- Multi-Runtime Support: Choose between Pyodide (Python), ONNX Runtime, or TensorFlow.js
- Python in the Browser: Run real Python code client-side using Pyodide WebAssembly
- ONNX Models: High-performance inference with ONNX Runtime Web
- TensorFlow.js Integration: Native TensorFlow.js model support with GPU acceleration
- React Integration: Seamless hooks and components for React apps
- React Native Support: Native bridge for mobile applications
- Offline-First: No internet required after initial model load
- Easy Bundling: CLI tools for model packaging and deployment
- TypeScript: Full TypeScript support for better DX
- Web Workers: Non-blocking Python execution
## Table of Contents

- Installation
- Quick Start
- Runtime Selection
- Packages
- Usage
- API Reference
- Examples
- Contributing
- License
## Installation

### React (web)

```bash
npm install @python-react-ml/core @python-react-ml/react
```

### React Native

```bash
npm install @python-react-ml/core @python-react-ml/react-native

# iOS additional setup
cd ios && pod install
```

### CLI

```bash
npm install -g @python-react-ml/cli
```
## Quick Start

### 1. Write Your Python Model

```python
# model.py
import numpy as np

def predict(input_data):
    """Main prediction function"""
    # Your ML model logic here
    features = np.array(input_data)
    # Simple linear model example
    result = np.sum(features) * 0.5
    return float(result)

def get_model_info():
    """Optional: Return model metadata"""
    return {
        "name": "My Model",
        "version": "1.0.0",
        "type": "regression"
    }
```

### 2. Bundle the Model

```bash
python-react-ml bundle model.py -o my-model.bundle.zip
```

### 3. Use It in React

```tsx
import { useModel } from '@python-react-ml/react';

function MyApp() {
  const { model, status, predict, error } = useModel('/my-model.bundle.zip');

  const handlePredict = async () => {
    if (model) {
      const result = await predict([1.0, 2.0, 3.0]);
      console.log('Prediction:', result);
    }
  };

  if (status === 'loading') return <div>Loading model...</div>;
  if (status === 'error') return <div>Error: {error}</div>;

  return (
    <div>
      <h1>Python ML in React!</h1>
      <button onClick={handlePredict} disabled={status !== 'ready'}>
        Run Prediction
      </button>
    </div>
  );
}
```

## Runtime Selection

Python React ML supports multiple runtime engines to optimize for different model types and performance requirements:
### Pyodide (Default)

Best for: Custom Python models, data preprocessing, complex ML pipelines

- Full Python environment with NumPy, SciPy, scikit-learn, and more
- Direct Python code execution in the browser via WebAssembly
- Support for custom Python packages and dependencies
- Ideal for models with complex preprocessing or custom logic

```tsx
import { useModel } from '@python-react-ml/react';

function PyodideExample() {
  const { model, predict, status } = useModel('/python-model.bundle.zip', {
    runtime: 'pyodide' // Default runtime
  });

  // Your Python model.py will be executed directly
  return <div>Python Model Status: {status}</div>;
}
```

### ONNX Runtime

Best for: High-performance inference, production models, cross-platform compatibility
- Optimized C++ inference engine compiled to WebAssembly
- Support for models exported from PyTorch, TensorFlow, and scikit-learn
- Fastest inference performance with smaller bundle sizes
- Hardware acceleration support (GPU via WebGL)

```tsx
import { useModel } from '@python-react-ml/react';

function ONNXExample() {
  const { model, predict, status } = useModel('/model.onnx', {
    runtime: 'onnx'
  });

  const handlePredict = async () => {
    // Input as tensors - matches your ONNX model's input specification
    const result = await predict({
      input: new Float32Array([1.0, 2.0, 3.0, 4.0])
    });
    console.log('ONNX Result:', result);
  };

  return (
    <div>
      <p>ONNX Model Status: {status}</p>
      <button onClick={handlePredict}>Run Inference</button>
    </div>
  );
}
```

### TensorFlow.js

Best for: TensorFlow models, neural networks, GPU acceleration
- Native TensorFlow.js execution with full ecosystem support
- Excellent GPU acceleration via the WebGL backend
- Support for both SavedModel and GraphModel formats
- Ideal for neural networks and deep learning models

```tsx
import { useModel } from '@python-react-ml/react';

function TensorFlowExample() {
  const { model, predict, status } = useModel('/tfjs-model/', {
    runtime: 'tfjs',
    tfjsBackend: 'webgl' // Use GPU acceleration
  });

  const handlePredict = async () => {
    // Input as tensor-compatible arrays
    const result = await predict([
      [[1.0, 2.0], [3.0, 4.0]] // Shape depends on your model
    ]);
    console.log('TensorFlow.js Result:', result);
  };

  return (
    <div>
      <p>TensorFlow.js Model Status: {status}</p>
      <button onClick={handlePredict}>Run Inference</button>
    </div>
  );
}
```

### Runtime Comparison

| Runtime | Performance | Bundle Size | GPU Support | Model Types |
|---|---|---|---|---|
| Pyodide | Good | Large (~10MB+) | No | Python scripts, Custom ML |
| ONNX | Excellent | Small (~2-5MB) | Yes (WebGL) | Standard ML models |
| TensorFlow.js | Excellent | Medium (~5-8MB) | Yes (WebGL/WebGPU) | Neural networks, Deep learning |
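In practice, the model artifact you are loading usually implies the runtime. The mapping in the table above can be sketched as a small helper; note this function is hypothetical and not part of the library, which expects you to pass `runtime` explicitly:

```python
from pathlib import Path

def pick_runtime(model_path: str) -> str:
    """Guess a suitable runtime from a model artifact's path.

    Hypothetical helper mirroring the comparison table above;
    the library itself does not ship this function.
    """
    path = Path(model_path)
    if path.suffix == ".onnx":
        return "onnx"      # ONNX Runtime Web
    if path.name == "model.json" or model_path.endswith("/"):
        return "tfjs"      # TensorFlow.js model directory
    if path.suffix == ".zip":
        return "pyodide"   # Python bundle
    raise ValueError(f"Unknown model format: {model_path}")
```

For example, `pick_runtime('/my-model.bundle.zip')` returns `'pyodide'`, matching the default runtime used in the Quick Start.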
### Which Runtime Should I Use?

- Development/Prototyping: Start with Pyodide for maximum flexibility
- Production Performance: Use ONNX for the fastest inference with the smallest bundles
- TensorFlow Models: Use TensorFlow.js for native TF model support and GPU acceleration
- Complex Pipelines: Use Pyodide when you need full Python ecosystem features
## Packages

| Package | Description |
|---|---|
| `@python-react-ml/core` | Core Python execution engine |
| `@python-react-ml/react` | React hooks and components |
| `@python-react-ml/react-native` | React Native native bridge |
| `@python-react-ml/cli` | CLI tools for bundling |
## Migration Guide

If you're upgrading from a previous version, here's what changed:

### Before

```tsx
import { useModel } from '@python-react-ml/react';

function MyApp() {
  // Only Pyodide runtime was available
  const { model, predict } = useModel('/model.bundle.zip');
  // ...
}
```

### After

```tsx
import { useModel } from '@python-react-ml/react';

function MyApp() {
  // Now you can specify runtime - Pyodide is still the default
  const { model, predict } = useModel('/model.bundle.zip', {
    runtime: 'pyodide' // Explicit but optional for backward compatibility
  });

  // Or use new runtime engines for better performance
  const { model: onnxModel } = useModel('/model.onnx', {
    runtime: 'onnx' // High-performance ONNX Runtime
  });

  const { model: tfModel } = useModel('/tfjs-model/', {
    runtime: 'tfjs', // TensorFlow.js with GPU acceleration
    tfjsBackend: 'webgl'
  });
  // ...
}
```

### Breaking Changes

- None for existing Pyodide users: existing code continues to work unchanged
- New dependencies: ONNX Runtime Web and TensorFlow.js are now included (but tree-shaken if unused)
- Bundle format: new model formats supported (`.onnx` files, TensorFlow.js model directories)

### New Capabilities

- Runtime selection: choose between `pyodide`, `onnx`, or `tfjs`
- Performance options: GPU acceleration, optimized inference engines
- Model format support: ONNX (`.onnx`), TensorFlow.js (`model.json` + weights), Python bundles (`.zip`)
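A Python bundle is essentially a ZIP archive containing your model code plus metadata. As a rough illustration of that format, here is a sketch of assembling such a bundle by hand; the file layout and manifest keys shown are assumptions, and the CLI's actual schema may differ:

```python
import json
import zipfile

def build_bundle(model_source: str, out_path: str) -> None:
    """Pack model code and a manifest into a ZIP bundle.

    Assumed layout: model.py at the archive root plus a manifest.json.
    The real CLI may use different file names and manifest keys.
    """
    manifest = {
        "name": "My Model",       # assumed manifest keys
        "version": "1.0.0",
        "entry": "model.py",
    }
    with zipfile.ZipFile(out_path, "w") as zf:
        zf.writestr("model.py", model_source)
        zf.writestr("manifest.json", json.dumps(manifest))

build_bundle("def predict(x):\n    return sum(x) * 0.5\n", "my-model.bundle.zip")
```

In normal use you would let `python-react-ml bundle` produce this archive for you; the sketch only shows why the `.zip` artifact is portable and inspectable.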
## Usage

### React Hook: `useModel`

```tsx
import { useModel } from '@python-react-ml/react';

function ModelComponent() {
  const { model, status, predict, reload } = useModel('/path/to/model.bundle.zip', {
    runtime: 'pyodide' // or 'onnx', 'tfjs'
  });

  return (
    <div>
      <p>Status: {status}</p>
      {status === 'ready' && (
        <button onClick={() => predict([1, 2, 3])}>
          Predict
        </button>
      )}
    </div>
  );
}
```

### Runtime-Specific Examples

```tsx
import { useModel } from '@python-react-ml/react';

// Python model with Pyodide
function PythonModel() {
  const { model, predict, status } = useModel('/python-model.bundle.zip', {
    runtime: 'pyodide'
  });

  const handlePredict = async () => {
    const result = await predict([1.0, 2.0, 3.0]);
    console.log('Python result:', result);
  };

  return <button onClick={handlePredict}>Run Python Model</button>;
}

// ONNX model for high performance
function ONNXModel() {
  const { model, predict, status } = useModel('/model.onnx', {
    runtime: 'onnx'
  });

  const handlePredict = async () => {
    const result = await predict({
      input: new Float32Array([1.0, 2.0, 3.0, 4.0])
    });
    console.log('ONNX result:', result);
  };

  return <button onClick={handlePredict}>Run ONNX Model</button>;
}

// TensorFlow.js model with GPU acceleration
function TensorFlowModel() {
  const { model, predict, status } = useModel('/tfjs-model/', {
    runtime: 'tfjs',
    tfjsBackend: 'webgl'
  });

  const handlePredict = async () => {
    const result = await predict([
      [[0.1, 0.2], [0.3, 0.4]]
    ]);
    console.log('TensorFlow.js result:', result);
  };

  return <button onClick={handlePredict}>Run TF.js Model</button>;
}
```

### Provider Pattern

```tsx
import { PythonModelProvider, useModelContext } from '@python-react-ml/react';

function App() {
  return (
    <PythonModelProvider pyodideUrl="https://cdn.jsdelivr.net/pyodide/v0.24.1/full/pyodide.js">
      <ModelComponent />
    </PythonModelProvider>
  );
}

function ModelComponent() {
  const { loadModel, isInitialized } = useModelContext();
  // Use context...
}
```

### `ModelLoader` Component

```tsx
import { ModelLoader } from '@python-react-ml/react';

function App() {
  return (
    <ModelLoader
      modelUrl="/model.bundle.zip"
      onLoad={(model) => console.log('Model loaded!', model)}
      onError={(error) => console.error('Load failed:', error)}
    >
      {({ status, model, predict }) => (
        <div>
          <p>Status: {status}</p>
          {status === 'ready' && (
            <button onClick={() => predict([1, 2, 3])}>
              Predict
            </button>
          )}
        </div>
      )}
    </ModelLoader>
  );
}
```

### React Native

```tsx
import { Alert, Text, TouchableOpacity, View } from 'react-native';
import { useModelNative } from '@python-react-ml/react-native';

// `styles` (a StyleSheet) is assumed to be defined elsewhere
function ModelScreen() {
  const {
    isLoaded,
    isLoading,
    predict,
    error
  } = useModelNative('/path/to/model.bundle.zip');

  const handlePredict = async () => {
    try {
      const result = await predict([1.0, 2.0, 3.0]);
      Alert.alert('Result', `Prediction: ${result}`);
    } catch (err) {
      Alert.alert('Error', err.message);
    }
  };

  return (
    <View style={styles.container}>
      <Text>Model Status: {isLoading ? 'Loading...' : isLoaded ? 'Ready' : 'Not loaded'}</Text>
      {error && <Text style={styles.error}>Error: {error}</Text>}
      <TouchableOpacity
        onPress={handlePredict}
        disabled={!isLoaded}
        style={styles.button}
      >
        <Text>Run Prediction</Text>
      </TouchableOpacity>
    </View>
  );
}
```

### CLI

Initialize a new project:

```bash
python-react-ml init my-project
cd my-project
```

Validate a model:

```bash
python-react-ml validate model.py
```

Bundle a model:

```bash
# Basic bundling
python-react-ml bundle model.py

# Advanced options
python-react-ml bundle model.py \
  --output my-model.bundle.zip \
  --name "My Awesome Model" \
  --version "1.2.0" \
  --include data.pkl requirements.txt \
  --deps numpy pandas scikit-learn
```

## API Reference

### `PythonReactML`

Main class for Python execution.
```ts
class PythonReactML {
  constructor(options: PythonEngineOptions)
  loadModelFromBundle(url: string): Promise<PythonModel>
  loadModelFromFile(filePath: string): Promise<PythonModel>
  cleanup(): Promise<void>
}
```

### `PythonModel`

Represents a loaded Python model.
```ts
interface PythonModel {
  manifest: PythonModelManifest;
  predict: (input: any) => Promise<any>;
  getInfo?: () => Promise<any>;
  cleanup?: () => void;
}
```

### `useModel`

Hook for loading and using a Python model.
```ts
interface UseModelResult {
  model: PythonModel | null;
  status: 'idle' | 'loading' | 'ready' | 'error';
  error: string | null;
  predict: (input: any) => Promise<any>;
  reload: () => Promise<void>;
}
```

Hook for managing the Python engine:
```ts
interface PythonEngineState {
  engine: PythonReactML | null;
  isInitialized: boolean;
  isLoading: boolean;
  error: string | null;
}
```

### CLI Commands

- `python-react-ml init [name]` - Initialize a new project
- `python-react-ml bundle <entry>` - Bundle a Python model
- `python-react-ml validate <entry>` - Validate model code
## Examples

Check out our examples directory for complete sample applications:
- React Web App - Complete web application
- React Native App - Mobile application
- Advanced Models - Complex ML models
## How It Works

1. Python Code: Write your ML model in Python with a `predict()` function
2. Bundling: CLI tools package your Python code and dependencies into a ZIP bundle
3. Runtime: In the browser, Pyodide (Python compiled to WebAssembly) executes your code
4. React Integration: Hooks and components provide seamless integration
```mermaid
graph LR
    A[Python Model] --> B[CLI Bundle]
    B --> C[ZIP Bundle]
    C --> D[React App]
    D --> E[Pyodide/WebWorker]
    E --> F[Python Execution]
```
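Conceptually, the runtime step boils down to loading your module's source and calling its `predict()` function. A stdlib-only sketch of that contract (this mimics the idea, not Pyodide's actual loading mechanism):

```python
# Simulate the runtime's contract: load model source, then call predict().
model_source = """
def predict(input_data):
    # Same math as the quick-start example, without NumPy
    return float(sum(input_data) * 0.5)
"""

namespace = {}
exec(compile(model_source, "model.py", "exec"), namespace)
result = namespace["predict"]([1.0, 2.0, 3.0])
print(result)  # 3.0
```

The same `predict()` entry point is what `useModel`'s `predict` ultimately invokes inside the Pyodide worker.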
## Contributing

We welcome contributions! Please see our Contributing Guide for details.
### Development Setup

```bash
git clone https://github.com/yourusername/python-react-ml.git
cd python-react-ml
npm install
npm run build
```

### Running Tests

```bash
npm test
```

### Publishing

```bash
npm run build
npm run publish:all
```

## Limitations

- Web: Limited to Pyodide-compatible packages (most popular ML libraries are supported)
- File Size: Bundles can be large due to the Python runtime
- Performance: Slightly slower than native Python (but often faster than server round-trips)
- React Native: Requires a native bridge implementation (iOS/Android); we plan to optimize this over time
## Roadmap

Completed:

- Multi-Runtime Support: Pyodide, ONNX Runtime, and TensorFlow.js ✅
- ONNX Runtime Integration: high-performance inference with WebAssembly ✅
- TensorFlow.js Integration: native TF.js support with GPU acceleration ✅

Planned:

- Model caching and lazy loading
- Performance optimizations and benchmarks
- Advanced model bundling and compression
- More ML framework examples (PyTorch, Hugging Face)
- Advanced debugging and profiling tools
- Model versioning and A/B testing utilities
## License

MIT © Shyam Sathish (https://github.com/ShyamSathish005), Siddharth B (https://github.com/Siddharth-B), and Sathyanrayanaa. T (https://github.com/Sathyanrayanaa-T)
## Acknowledgments

- Pyodide - Python in the browser
- React - UI library
- TypeScript - Type safety
Made with ❤️ for developers who want to bring Python ML to the frontend