ADGT is a model interpretability and understanding library for PyTorch.
ADGT (Attribution Draws Ground Truth) contains general-purpose implementations
of Saliency, InputXGradient, Deconv, LRP, Guided_BackProp, GradCAM, SmoothGrad, DeepLIFT, IntegratedGradients, RectGrad, FullGrad, CAMERAS, GIG, and others for PyTorch models. It provides a quick and simple way to get started with state-of-the-art modified-BP attribution methods.
*ADGT is currently in beta and under active development!*
## Installation
**Installation Requirements**
- Python >= 3.6
- PyTorch >= 1.2
- captum
##### Installing the latest release
You can clone or download this repository and install ADGT with:
```
python setup.py install
```
Alternatively, you can install ADGT with pip:
```
pip install ADGT
```
## Getting Started
With just three lines of code, you can use ADGT to interpret why the target model makes a decision on an input image.
```python
import ADGT
adgt = ADGT.ADGT(use_cuda=True, name='ImageNet')
attribution = adgt.pure_explain(img, model, method, pth)
```
Here:
- `img` is the input image (a PyTorch tensor);
- `model` is the target model (a PyTorch model);
- `method` is the name of the attribution method (one of the algorithms listed below);
- `pth` is the save path: visualizations of the explanation results (see the demo dir) are exported to this directory, and if `pth` is `None`, no visualization is exported.

The return value `attribution` is the attribution map (a PyTorch tensor).
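For reference, here is a fuller end-to-end sketch built around the same three lines. The torchvision model, the ImageNet preprocessing, the image path `demo/cat.jpg`, the method name `'IntegratedGradients'`, and the output directory `'output/'` are illustrative assumptions, not part of ADGT's documented API:
```python
# A minimal end-to-end sketch. Assumes torchvision and Pillow are installed;
# the image path, method name, and output directory are placeholders.
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

import ADGT

# Load a pretrained ImageNet classifier and switch to eval mode.
model = models.resnet50(pretrained=True).eval()

# Standard ImageNet preprocessing.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open('demo/cat.jpg').convert('RGB')).unsqueeze(0)  # (1, 3, 224, 224)

adgt = ADGT.ADGT(use_cuda=True, name='ImageNet')

# Visualizations are written to 'output/'; pass None to skip exporting them.
attribution = adgt.pure_explain(img, model, 'IntegratedGradients', 'output/')
print(attribution.shape)
```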
## References of Algorithms
* `Saliency`: [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, K. Simonyan et al. 2014](https://arxiv.org/pdf/1312.6034.pdf) (a minimal plain-PyTorch sketch of this method appears after this list)
* `InputXGradient`: [Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2016](https://arxiv.org/abs/1605.01713)
* `Deconv`: [Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014](https://arxiv.org/pdf/1311.2901.pdf)
* `Guided_Backprop`: [Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015](https://arxiv.org/pdf/1412.6806.pdf)
* `LRP`: [On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, Sebastian Bach et al. 2015](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140)
* `GradCAM`: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017](https://arxiv.org/abs/1610.02391)
* `SmoothGrad`: [SmoothGrad: removing noise by adding noise, Daniel Smilkov et al. 2017](https://arxiv.org/abs/1706.03825)
* `DeepLift`: [Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017](https://arxiv.org/pdf/1704.02685.pdf) and [Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018](https://openreview.net/pdf?id=Sy21R9JAW)
* `IntegratedGradients`: [Axiomatic Attribution for Deep Networks, Mukund Sundararajan et al. 2017](https://arxiv.org/abs/1703.01365) and [Did the Model Understand the Question?, Pramod K. Mudrakarta et al. 2018](https://arxiv.org/abs/1805.05492)
* `RectGrad`: [Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps, Beomsu Kim et al. 2019](https://arxiv.org/pdf/1902.04893.pdf)
* `FullGrad`: [Full-Gradient Representation for Neural Network Visualization, Suraj Srinivas et al. 2019](https://proceedings.neurips.cc/paper/2019/file/80537a945c7aaa788ccfcdf1b99b5d8f-Paper.pdf)
* `GIG`: [Guided Integrated Gradients: an Adaptive Path Method for Removing Noise, Andrei Kapishnikov et al. 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Kapishnikov_Guided_Integrated_Gradients_An_Adaptive_Path_Method_for_Removing_Noise_CVPR_2021_paper.pdf)
* `CAMERAS`: [CAMERAS: Enhanced Resolution and Sanity Preserving Class Activation Mapping for Image Saliency, Mohammad A. A. K. Jalwana et al. 2021](https://openaccess.thecvf.com/content/CVPR2021/papers/Jalwana_CAMERAS_Enhanced_Resolution_and_Sanity_Preserving_Class_Activation_Mapping_for_CVPR_2021_paper.pdf)
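As a concrete anchor for the list above, the simplest method, `Saliency`, scores each pixel by the gradient of the target class logit with respect to the input. The sketch below is a plain-PyTorch rendition of that idea, independent of ADGT; the model and target class are whatever you pass in:
```python
import torch

def vanilla_saliency(model, img, target_class):
    """Gradient of the target logit w.r.t. the input (Simonyan et al. 2014)."""
    img = img.clone().requires_grad_(True)
    logits = model(img)                 # shape (1, num_classes)
    logits[0, target_class].backward()  # fills img.grad
    # Collapse color channels by taking the max absolute gradient per pixel.
    return img.grad.detach().abs().max(dim=1).values
```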
## License
ADGT is BSD licensed, as found in the [LICENSE](LICENSE) file.