Home
PETINA (Privacy prEservaTIoN Algorithms) is a modular Python library for differential privacy. It includes a wide range of privacy-preserving algorithms applicable to both supervised and unsupervised learning, supporting numerical and categorical data.
This wiki provides documentation, usage examples, and development guidelines for using and contributing to PETINA.
PETINA provides both centralized and local differential privacy mechanisms and is easy to integrate into existing ML pipelines.
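To illustrate what a local differential privacy mechanism does, here is a minimal, generic sketch of randomized response for a binary attribute (this is an illustrative example, not PETINA's API; the function name and signature are hypothetical):

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    # Hypothetical illustration, not PETINA's API.
    # Report the true bit with probability e^eps / (e^eps + 1),
    # otherwise flip it; this satisfies epsilon-local-DP.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

# Each user perturbs their own bit locally before sharing it;
# the aggregator never sees the raw value.
reports = [randomized_response(1, epsilon=1.0) for _ in range(1000)]
```

In the local model each user randomizes their own data before it leaves their device; in the centralized model a trusted curator adds noise to aggregate results instead.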
To get started:
```
pip install PETINA
```
Or clone and install from source:
```
git clone https://github.com/ORNL/PETINA.git
cd PETINA
pip install -e .
```
PETINA includes:
- Differential Privacy Mechanisms: Gaussian, Laplace, Exponential, Sparse Vector, Unary Encoding, Histogram, etc.
- Sketching Algorithms: Count Sketch, Fast Projection
- Adaptive Mechanisms: Adaptive Clipping, Adaptive Pruning
- Utility Tools: Type conversion, encoding, parameter tuning, etc.
See the [Function Reference](https://github.com/ORNL/PETINA/wiki/Function-Reference) for a full list.
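As a flavor of how the noise-addition mechanisms above work, here is a minimal, self-contained sketch of the Laplace mechanism (an illustrative example under assumed names, not PETINA's actual API; see the Function Reference for the real interfaces):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    # Hypothetical illustration, not PETINA's API.
    # Adds Laplace noise with scale = sensitivity / epsilon,
    # which satisfies epsilon-differential privacy for a query
    # whose L1 sensitivity is `sensitivity`.
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

# Privatize a count query (sensitivity 1) at epsilon = 1.0
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=1.0)
```

Smaller epsilon means a larger noise scale and stronger privacy; the same calibration idea underlies the Gaussian mechanism, with noise scaled to the L2 sensitivity instead.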
Example scripts are available in the examples/ folder of the repo:
- [examples/supervised_experiment.py](https://github.com/ORNL/PETINA/blob/main/examples/supervised_experiment.py): Full pipeline from data loading to private training and evaluation.
- More examples coming soon.
We welcome contributions!
- Open an issue to suggest or discuss features.
- Submit a pull request with a clear description and appropriate tests.
- For questions, email a project member (see below).
See the [Contributing Guide](https://github.com/ORNL/PETINA/wiki/Contributing) for details.
PETINA is released under the MIT License.
If you use PETINA in your work, please cite it using the entry from OSTI:
https://www.osti.gov/doecode/biblio/149859
Project Lead: Oliver Kotevska
Thank you for using PETINA!