bam-intp-0.1



Description

Benchmarking attribution methods.

Attribute          Value
Operating system   -
Filename           bam-intp-0.1
Name               bam-intp
Library version    0.1
Maintainer         []
Maintainer email   []
Author             Google Inc.
Author email       opensource@google.com
Homepage           https://github.com/google-research-datasets/bam
Package URL        https://pypi.org/project/bam-intp/
License            Apache 2.0
# BAM - Benchmarking Attribution Methods

This repository contains the dataset, models, and metrics for benchmarking attribution methods (BAM) described in the paper [Benchmarking Attribution Methods with Relative Feature Importance](https://arxiv.org/abs/1907.09701). If you use this library, please cite:

```
@Article{BAM2019,
  title   = {{Benchmarking Attribution Methods with Relative Feature Importance}},
  author  = {Yang, Mengjiao and Kim, Been},
  journal = {CoRR},
  volume  = {abs/1907.09701},
  year    = {2019}
}
```

## Setup

Run the following from the home directory of this repository to install Python dependencies, download the BAM models, download [MSCOCO](http://cocodataset.org) and [MiniPlaces](https://github.com/CSAILVision/miniplaces), and construct the BAM dataset.

```
pip install bam-intp
source scripts/download_models.sh
source scripts/download_datasets.sh
python scripts/construct_bam_dataset.py
```

## Dataset

<img src="https://raw.githubusercontent.com/google-research-datasets/bam/master/figures/dataset_demo.png" width="800">

Images in `data/obj` and `data/scene` are the same but carry object and scene labels respectively, as shown in the figure above. `val_loc.txt` records the top-left and bottom-right corners of each object, and `val_mask` holds the binary masks of the objects in the validation set. Additional sets and their usage are described in the table below.
Name | Training | Validation | Usage | Description
:---- | :------: | :--------: | :---- | :----------
`obj` | 90,000 | 10,000 | Model contrast | Objects and scenes with object labels
`scene` | 90,000 | 10,000 | Model contrast & Input dependence | Objects and scenes with scene labels
`scene_only` | 90,000 | 10,000 | Input dependence | Scene-only images with scene labels
`dog_bedroom` | - | 200 | Relative model contrast | Dog in bedroom labeled as bedroom
`bamboo_forest` | - | 100 | Input independence | Scene-only images of bamboo forest
`bamboo_forest_patch` | - | 100 | Input independence | Bamboo forest with a functionally insignificant dog patch

## Models

Models in `models/obj`, `models/scene`, and `models/scene_only` are trained on `data/obj`, `data/scene`, and `data/scene_only` respectively. Models in `models/scenei` for `i` in `{1...10}` are trained on images where a dog is added to `i` scene classes, while the remaining scene classes contain no added objects. All models are in TensorFlow's [SavedModel](https://www.tensorflow.org/guide/saved_model) format.

## Metrics

BAM metrics compare how interpretability methods perform across models (model contrast), across inputs to the same model (input dependence), and across functionally equivalent inputs (input independence).

### Model contrast scores

Given images that contain both objects and scenes, model contrast measures the difference in attributions between the model trained on object labels and the model trained on scene labels.

<img src="https://raw.githubusercontent.com/google-research-datasets/bam/master/figures/mc_demo.png" width="800">

### Input dependence rate

Given a model trained on scene labels, input dependence measures the percentage of inputs where the addition of objects results in the object region being attributed as less important.
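For intuition, the model contrast idea described above can be sketched in plain Python. This is an illustrative definition only, not the repository's MCS implementation (which lives in `bam/metrics.py`); it assumes attribution maps and object masks are given as nested lists of per-pixel values.

```python
# Illustrative sketch of model contrast: compare how much of each model's
# attribution mass falls inside the object region. NOT the library's exact
# MCS formula; bam/metrics.py implements the real metric.

def region_concentration(attr, mask):
    """Fraction of total absolute attribution that lies inside the binary mask."""
    total = sum(abs(a) for row in attr for a in row)
    inside = sum(abs(a) for arow, mrow in zip(attr, mask)
                 for a, m in zip(arow, mrow) if m)
    return inside / total if total else 0.0

def model_contrast(attr_object_model, attr_scene_model, mask):
    """Object-model minus scene-model concentration on the object region."""
    return (region_concentration(attr_object_model, mask)
            - region_concentration(attr_scene_model, mask))

mask = [[1, 0], [0, 0]]                # object occupies the top-left pixel
attr_obj = [[0.8, 0.1], [0.05, 0.05]]  # object model focuses on the object
attr_scn = [[0.1, 0.4], [0.3, 0.2]]    # scene model mostly looks elsewhere
print(model_contrast(attr_obj, attr_scn, mask))  # 0.8 - 0.1, approximately 0.7
```

A large positive contrast means the object model leans on the object region far more than the scene model does, which is the behavior the benchmark expects a faithful attribution method to reveal.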
<img src="https://raw.githubusercontent.com/google-research-datasets/bam/master/figures/id_demo.png" width="800">

### Input independence rate

Given a model trained on scene-only images, input independence measures the percentage of inputs where a functionally insignificant patch (e.g., a dog) does not significantly affect the explanation.

<img src="https://raw.githubusercontent.com/google-research-datasets/bam/master/figures/ii_demo.png" width="800">

## Evaluate saliency methods

To compute the model contrast score (MCS) over 10 randomly selected images, run

```
python bam/metrics.py --metrics=MCS --num_imgs=10
```

To compute the input dependence rate (IDR), change `--metrics` to `IDR`. To compute the input independence rate (IIR), first construct a set of functionally insignificant patches by running

```
python scripts/construct_delta_patch.py
```

and then evaluate IIR by running

```
python bam/metrics.py --metrics=IIR --num_imgs=10
```

## Evaluate TCAV

[TCAV](https://github.com/tensorflow/tcav) is a global concept attribution method whose MCS can be measured by comparing the TCAV scores of a particular object concept under the object model and the scene model. Run the following to compute the TCAV scores of the dog concept for the object model.

```
python bam/run_tcav.py --model=obj
```

## Disclaimer

This is not an officially supported Google product.
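As a closing illustration of the metrics above, the input dependence rate can be sketched as a simple counting rule over pairs of attribution maps for the same scene with and without the added object. The pair format and the strict-inequality rule are assumptions for illustration; the actual computation is in `bam/metrics.py`.

```python
# Illustrative sketch of input dependence: over (with_object, without_object)
# attribution pairs, count how often the patched region loses attribution
# once the object is added. Not the library's actual IDR code.

def region_mass(attr, mask):
    """Total absolute attribution inside the binary mask."""
    return sum(abs(a) for arow, mrow in zip(attr, mask)
               for a, m in zip(arow, mrow) if m)

def input_dependence_rate(pairs, mask):
    """Fraction of pairs where the object region receives less attribution
    in the with-object image than in the object-free image."""
    hits = sum(1 for with_obj, without_obj in pairs
               if region_mass(with_obj, mask) < region_mass(without_obj, mask))
    return hits / len(pairs)

mask = [[1, 0], [0, 0]]  # the added object covers the top-left pixel
pairs = [
    ([[0.1, 0.5], [0.2, 0.2]], [[0.4, 0.3], [0.2, 0.1]]),  # region drops: hit
    ([[0.6, 0.2], [0.1, 0.1]], [[0.2, 0.4], [0.2, 0.2]]),  # region rises: miss
]
print(input_dependence_rate(pairs, mask))  # → 0.5
```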


Requirements

Name           Version
numpy          -
tensorflow     -
Pillow         >=6.0.0
tcav           >=0.2.1
saliency       >=0.0.2
Cython         >=0.29.12
pycocotools    >=2.0.0
scikit-learn   >=0.20.3


Installation


To install the bam-intp-0.1 wheel package:

    pip install bam-intp-0.1.whl


To install the bam-intp-0.1 tar.gz package:

    pip install bam-intp-0.1.tar.gz
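After installing with either command, the package can be verified from Python via the standard library's `importlib.metadata`, using the distribution name `bam-intp` from the metadata above:

```python
# Check whether the bam-intp distribution is installed and report its version.
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist="bam-intp"):
    """Return the installed version string, or None if the distribution is absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

print(installed_version())  # "0.1" once installed, None otherwise
```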