Commit: Update README.md

vijay-arya authored Jun 29, 2023
1 parent 7d5b252 commit 5ffcf6e
Showing 1 changed file (README.md) with 26 additions and 22 deletions.
[![Documentation Status](https://readthedocs.org/projects/aix360/badge/?version=latest)](https://aix360.readthedocs.io/en/latest/?badge=latest)
[![PyPI version](https://badge.fury.io/py/aix360.svg)](https://badge.fury.io/py/aix360)

The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The toolkit supports tabular, text, image, and time series data.

The [AI Explainability 360 interactive experience](http://aix360.mybluemix.net/data) provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The [tutorials and example notebooks](./examples) offer a deeper, data scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some [guidance material](http://aix360.mybluemix.net/resources#guidance) and a [chart](./aix360/algorithms/README.md) that can be consulted.
There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some [guidance material](http://aix360.mybluemix.net/resources#guidance) and a [taxonomy tree](./aix360/algorithms/README.md) that can be consulted.

We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the [AI Explainability 360 Community on Slack](https://aix360.slack.com) by requesting an invitation [here](https://join.slack.com/t/aix360/shared_invite/enQtNzEyOTAwOTk1NzY2LTM1ZTMwM2M4OWQzNjhmNGRiZjg3MmJiYTAzNDU1MTRiYTIyMjFhZTI4ZDUwM2M1MGYyODkwNzQ2OWQzMThlN2Q). Please review the instructions to contribute code and Python notebooks [here](CONTRIBUTING.md).

## Supported explainability algorithms

### Data explanations

- ProtoDash ([Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212))
- Disentangled Inferred Prior VAE ([Kumar et al., 2018](https://openreview.net/forum?id=H1kG7GZAW))
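
To give a flavor of how these explainers are invoked, below is a minimal sketch of summarizing a dataset with ProtoDash. The exact signature is an assumption here: `ProtodashExplainer.explain` is taken to accept two datasets and the number of prototypes, and to return weights, indices, and objective values; verify against the AIX360 API docs.

```python
# Minimal sketch under assumed API details -- check the AIX360 docs for the
# exact signature and return values of ProtodashExplainer.explain.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))      # toy dataset: 500 samples, 10 features

explainer = ProtodashExplainer()
# Select 5 prototypes from X that best summarize X itself; the call is
# assumed to return importance weights, prototype indices, and objective values.
weights, indices, _ = explainer.explain(X, X, m=5)

print("prototype rows:", indices)
print("importance weights:", np.round(weights, 3))
```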

### Local post-hoc explanations

- ProtoDash ([Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212))
- Contrastive Explanations Method ([Dhurandhar et al., 2018](https://papers.nips.cc/paper/7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))
- LIME ([Ribeiro et al. 2016](https://arxiv.org/abs/1602.04938), [Github](https://github.com/marcotcr/lime))
- SHAP ([Lundberg, et al. 2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions), [Github](https://github.com/slundberg/shap))
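
Since LIME and SHAP are integrated from their upstream packages, a local attribution can be produced directly with the familiar SHAP interface. The sketch below uses a stand-in scikit-learn model with the model-agnostic `KernelExplainer`; the model and data are placeholders.

```python
# Local post-hoc attribution with SHAP's model-agnostic KernelExplainer.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# KernelExplainer treats the model as a black box: it only needs a
# prediction function and a small background sample.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:1])   # attributions for one instance
print(shap_values)
```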

### Time-Series local post-hoc explanations

- Time Series Saliency Maps using Integrated Gradients (inspired by [Sundararajan et al.](https://arxiv.org/pdf/1703.01365.pdf))
- Time Series LIME (a time-series adaptation of the classic paper by [Ribeiro et al. 2016](https://arxiv.org/abs/1602.04938))
- Time Series Individual Conditional Expectation (a time-series adaptation of Individual Conditional Expectation plots, [Goldstein et al.](https://arxiv.org/abs/1309.6392))
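
As a concept illustration (deliberately not the AIX360 API), integrated-gradients saliency over a time series reduces to averaging the model's gradient along a straight path from a baseline and scaling by the input difference. The toy scorer below has an analytic gradient so the sketch stays self-contained.

```python
# Concept sketch of integrated gradients (Sundararajan et al.) on a
# univariate time series; the "model" is a toy differentiable scorer.
import numpy as np

w = np.sin(np.linspace(0, 3, 48))            # toy weights over 48 time steps

def grad(x):                                 # d/dx tanh(w.x) = (1 - tanh^2(w.x)) * w
    return (1.0 - np.tanh(w @ x) ** 2) * w

def integrated_gradients(x, baseline, steps=64):
    # Average the gradient along the straight path baseline -> x,
    # then scale by (x - baseline), per the IG formula.
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.random.default_rng(1).normal(size=48)
saliency = integrated_gradients(x, baseline=np.zeros_like(x))
print("most influential time steps:", np.argsort(-np.abs(saliency))[:5])
```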

### Local direct explanations

- Teaching AI to Explain its Decisions ([Hind et al., 2019](https://doi.org/10.1145/3306618.3314273))
- Order Constraints in Optimal Transport ([Lim et al., 2022](https://arxiv.org/abs/2110.07275), [Github](https://github.com/IBM/otoc))
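
The core idea behind TED's Cartesian variant can be sketched in a few lines: encode each (label, explanation) pair as one combined class, train any off-the-shelf classifier on it, and decode both parts at prediction time. The data and names below are illustrative only, not the AIX360 interface.

```python
# Concept sketch of the Cartesian-product idea from Hind et al. (TED).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = (X[:, 0] > 0).astype(int)                # toy decision label
e = (np.abs(X[:, 1]) > 1).astype(int)        # toy explanation id (0 or 1)

n_explanations = 2
combined = y * n_explanations + e            # (y, e) -> single class id

clf = LogisticRegression(max_iter=1000).fit(X, combined)

pred = clf.predict(X[:3])
pred_y, pred_e = pred // n_explanations, pred % n_explanations
print("decisions:", pred_y, "explanation ids:", pred_e)
```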

### Global direct explanations

- Interpretable Model Differencing (IMD) ([Haldar et al., 2023](https://arxiv.org/abs/2306.06473))
- CoFrNets (Continued Fraction Nets) ([Puri et al., 2021](https://papers.nips.cc/paper/2021/file/b538f279cb2ca36268b23f557a831508-Paper.pdf))
- Boolean Decision Rules via Column Generation (Light Edition) ([Dash et al., 2018](https://papers.nips.cc/paper/7716-boolean-decision-rules-via-column-generation))
- Generalized Linear Rule Models ([Wei et al., 2019](http://proceedings.mlr.press/v97/wei19a.html))
- Fast Effective Rule Induction (RIPPER) ([Cohen, 1995](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.2612&rep=rep1&type=pdf))
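
Models in this family are directly interpretable because the learned artifact is itself readable. For example, a Boolean rule set of the kind BRCG or RIPPER produces is just a handful of AND-clauses joined by OR; the clauses below are made up purely for illustration.

```python
# Illustration only: applying a learned rule set in disjunctive normal form.
def rule_set(record):
    # Predict 1 if any AND-clause fires; the clauses are hypothetical.
    clauses = [
        record["age"] < 25 and record["income"] < 30_000,
        record["prior_defaults"] >= 2,
    ]
    return int(any(clauses))

print(rule_set({"age": 22, "income": 25_000, "prior_defaults": 0}))  # -> 1
```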

### Global post-hoc explanations

- ProfWeight ([Dhurandhar et al., 2018](https://papers.nips.cc/paper/8231-improving-simple-models-with-confidence-profiles))
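
In spirit, ProfWeight transfers information from a high-performing model to a simple one by reweighting training examples. The sketch below approximates this with a teacher's predicted confidence in each true label; the actual algorithm derives weights from probes attached to a deep network's intermediate layers.

```python
# Concept sketch of confidence-based sample weighting in the spirit of ProfWeight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
teacher = GradientBoostingClassifier(random_state=0).fit(X, y)

# Teacher's confidence in the true label of each training example.
conf = teacher.predict_proba(X)[np.arange(len(y)), y]

student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(X, y, sample_weight=conf)       # confidently-labeled examples count more
print("student accuracy:", student.score(X, y))
```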


## Setup

### Supported Configurations

| Explainer | OS | Python version |
| ---------------| ------------------------------| -------------- |
Miniconda is sufficient (see [the difference between Anaconda and Miniconda](https://conda.io/docs/user-guide/install/download.html#anaconda-or-miniconda)
if you are curious) and can be installed from
[here](https://conda.io/miniconda.html) if you do not already have it.

Then, create a new Python environment based on the explainability algorithms you wish to use, referring to the [Supported Configurations](#supported-configurations) table above. For example, for Python 3.10, run:

```bash
conda create --name aix360 python=3.10
conda activate aix360
```

If you would like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in [aix360/data/README.md](./aix360/data/README.md).
Then, navigate to the root directory of the project, which contains the `setup.py` file, and run:

```bash
(aix360)$ pip install -e .[<algo1>,<algo2>,...]
```
The above command installs the packages required by specific algorithms, where each `<algo>` keyword refers to an explainability algorithm. For instance, to install the packages needed by the BRCG, DIPVAE, and TSICE algorithms, run:
```bash
(aix360)$ pip install -e .[rbm,dipvae,tsice]
```
The plain command `pip install .` installs only the [default dependencies](https://github.com/Trusted-AI/AIX360/blob/462c4d575bfc71c5cbfd32ceacdb3df96a8dc2d1/setup.py#L9).

Note that you may not be able to install two algorithms that require different versions of Python in the same environment (for instance, `contrastive` along with `rbm`).

If you face any issues, please try upgrading pip and setuptools and uninstalling any previous versions of aix360 before attempting the above step again:


```bash
(aix360)$ pip install --upgrade pip setuptools
(aix360)$ pip uninstall aix360
```
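
After installing, a quick sanity check from Python (standard library only) confirms that the package resolved in the active environment:

```python
# Print the installed aix360 version; raises PackageNotFoundError if the
# install did not succeed in this environment.
from importlib import metadata
print(metadata.version("aix360"))
```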


## PIP Installation of AI Explainability 360

If you would like to quickly start using the AI Explainability 360 toolkit without cloning this repository, you can install the [aix360 PyPI package](https://pypi.org/project/aix360/) as follows.

```bash
(your environment)$ pip install aix360[<algo1>,<algo2>,...]
```

If you follow this approach, you will need to download the notebooks available in the [examples](./examples) folder separately.

## Running in Docker

* Under the `AIX360` directory, build the container image from the Dockerfile using `docker build -t aix360_docker .`
* Start the container using `docker run -it -p 8888:8888 aix360_docker:latest bash`, assuming port 8888 is free on your machine.
* Inside the container, start Jupyter Lab using `jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser`
* Access the sample tutorials on your machine at `localhost:8888`

## Using AI Explainability 360
