# Keras Visualization Toolkit
[![Build Status](https://travis-ci.org/raghakot/keras-vis.svg?branch=master)](https://travis-ci.org/raghakot/keras-vis)
[![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/raghakot/keras-vis/blob/master/LICENSE)
[![Slack](https://img.shields.io/badge/slack-discussion-E01563.svg)](https://keras-vis.herokuapp.com/)
keras-vis is a high-level toolkit for visualizing and debugging trained Keras neural network models. Currently
supported visualizations include:
- Activation maximization
- Saliency maps
- Class activation maps
All visualizations support N-dimensional image inputs by default, i.e., they generalize to N-dim image inputs
to your model.
The toolkit frames all of the above as energy minimization problems with a clean, easy-to-use,
and extensible interface. It is compatible with both Theano and TensorFlow backends and with 'channels_first'
and 'channels_last' data formats.
## Quick links
* Read the documentation at [https://raghakot.github.io/keras-vis](https://raghakot.github.io/keras-vis).
* A Japanese edition is available at [https://keisen.github.io/keras-vis-docs-ja](https://keisen.github.io/keras-vis-docs-ja).
* Join the slack [channel](https://keras-vis.herokuapp.com/) for questions/discussions.
* We track new features/tasks on [waffle.io](https://waffle.io/raghakot/keras-vis). We would love it if you lent us
a hand and submitted PRs.
## Getting Started
In image backprop problems, the goal is to generate an input image that minimizes some loss function.
Setting up an image backprop problem is easy.
**Define weighted loss function**
Various useful loss functions are defined in [losses](https://raghakot.github.io/keras-vis/vis.losses).
A custom loss function can be defined by implementing [Loss.build_loss](https://raghakot.github.io/keras-vis/vis.losses/#lossbuild_loss).
```python
from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm

# `model` is a trained Keras model; `keras_layer` is the layer whose
# activations we want to maximize.
filter_indices = [1, 2, 3]

# Each tuple consists of (loss_function, weight).
# Add regularizers as needed.
losses = [
    (ActivationMaximization(keras_layer, filter_indices), 1),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
```
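To get a sense of what a custom term looks like, here is a standalone sketch that mimics the `Loss.build_loss` interface in plain Python. In keras-vis, `build_loss` returns a backend tensor built from the model graph, so the class below is purely illustrative, and `MeanOutputLoss` is a hypothetical name, not part of the library:

```python
class Loss(object):
    """Minimal stand-in for the vis.losses.Loss interface."""
    name = "Unnamed Loss"

    def build_loss(self):
        raise NotImplementedError()


class MeanOutputLoss(Loss):
    """Hypothetical custom term: penalize the mean of some output values."""
    name = "Mean Output Loss"

    def __init__(self, outputs):
        self.outputs = outputs

    def build_loss(self):
        # With a real backend this would be something like K.mean(self.outputs);
        # here we just average plain numbers to show the shape of the API.
        return sum(self.outputs) / float(len(self.outputs))


loss_term = MeanOutputLoss([1.0, 2.0, 3.0])
print(loss_term.build_loss())  # 2.0
```

A custom term defined this way slots into the `losses` list above as `(MeanOutputLoss(...), weight)`.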
**Configure optimizer to minimize weighted loss**
To generate natural-looking images, the image search space is constrained using regularization penalties.
Some common regularizers are defined in [regularizers](https://raghakot.github.io/keras-vis/vis.regularizers).
Like loss functions, a custom regularizer can be defined by implementing
[Loss.build_loss](https://raghakot.github.io/keras-vis/vis.losses/#lossbuild_loss).
```python
from vis.optimizer import Optimizer

optimizer = Optimizer(model.input, losses)
opt_img, grads, _ = optimizer.minimize()
```
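Conceptually, `minimize` runs iterative gradient descent on the input image. The toy NumPy loop below illustrates that idea on a simple quadratic loss; the function name, learning rate, and iteration count are arbitrary choices for this sketch, not the library's API or defaults:

```python
import numpy as np

def toy_minimize(loss_grad, x0, lr=0.1, max_iter=100):
    """Plain gradient descent: the same idea Optimizer.minimize
    applies to the model input, here on a NumPy array."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = x - lr * loss_grad(x)
    return x

# Minimize f(x) = ||x - 3||^2, whose gradient is 2 * (x - 3).
opt_x = toy_minimize(lambda x: 2.0 * (x - 3.0), x0=[0.0, 10.0])
print(opt_x)  # converges toward [3., 3.]
```

In keras-vis the gradient comes from backpropagation through the model rather than a hand-written function, and the weighted loss terms from the previous step are summed into the single objective being descended.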
Concrete examples of various supported visualizations can be found in
[examples folder](https://github.com/raghakot/keras-vis/tree/master/examples).
## Installation
1) Install [keras](https://github.com/fchollet/keras/blob/master/README.md#installation)
with a Theano or TensorFlow backend. Note that this library requires Keras > 2.0
2) Install keras-vis
> From sources
```bash
sudo python setup.py install
```
> PyPI package
```bash
sudo pip install keras-vis
```
## Visualizations
**NOTE: The links are currently broken and the entire documentation is being reworked.
Please see examples/ for samples.**
Neural nets are black boxes. In recent years, several approaches for understanding and visualizing convolutional
networks have been developed in the literature. They give us a way to peer into the black box,
diagnose mis-classifications, and assess whether the network is over/underfitting.
Guided backprop can also be used to create [trippy art](https://deepdreamgenerator.com/gallery), neural/texture
[style transfer](https://github.com/jcjohnson/neural-style), and a growing list of other applications.
Various visualizations, documented in their own pages, are summarized here.
<hr/>
### [Conv filter visualization](https://raghakot.github.io/keras-vis/visualizations/conv_filters)
<img src="https://raw.githubusercontent.com/raghakot/keras-vis/master/images/conv_vis/cover.jpg?raw=true"/>
*Convolutional filters learn 'template matching' filters that maximize the output when a similar template
pattern is found in the input image. Visualize those templates via Activation Maximization.*
<hr/>
### [Dense layer visualization](https://raghakot.github.io/keras-vis/visualizations/dense)
<img src="https://raw.githubusercontent.com/raghakot/keras-vis/master/images/dense_vis/cover.png?raw=true"/>
*How can we assess whether a network is over/under fitting or generalizing well?*
<hr/>
### [Attention Maps](https://raghakot.github.io/keras-vis/visualizations/attention)
<img src="https://raw.githubusercontent.com/raghakot/keras-vis/master/images/attention_vis/cover.png?raw=true"/>
*How can we assess whether a network is attending to correct parts of the image in order to generate a decision?*
<hr/>
### Generating animated gif of optimization progress
An animated gif of optimization progress can be generated by leveraging
[callbacks](https://raghakot.github.io/keras-vis/vis.callbacks). The following example shows how to visualize
activation maximization for the 'ouzel' class (output_index: 20).
```python
from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm
from vis.modifiers import Jitter
from vis.optimizer import Optimizer
from vis.callbacks import GifGenerator
from vis.utils.vggnet import VGG16

# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
print('Model loaded.')

# The name of the layer we want to visualize
# (see model definition in vggnet.py)
layer_name = 'predictions'
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
output_class = [20]

losses = [
    (ActivationMaximization(layer_dict[layer_name], output_class), 2),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
opt = Optimizer(model.input, losses)
opt.minimize(max_iter=500, verbose=True, image_modifiers=[Jitter()], callbacks=[GifGenerator('opt_progress')])
```
Notice how the output jitters around? This is because we used [Jitter](https://raghakot.github.io/keras-vis/vis.modifiers/#jitter),
a kind of [ImageModifier](https://raghakot.github.io/keras-vis/vis.modifiers/#imagemodifier) known to produce
crisper activation maximization images. As an exercise, try:
- Without Jitter
- Varying various loss weights
![opt_progress](https://raw.githubusercontent.com/raghakot/keras-vis/master/images/opt_progress.gif?raw=true "Optimization progress")
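The jitter idea itself is simple: before each gradient step, shift the input image by a few random pixels so that high-frequency artifacts average out across steps. A standalone NumPy sketch of that idea follows; the function and parameter names are illustrative, not the library's (the real modifier lives in `vis.modifiers`):

```python
import numpy as np

def jitter(image, max_shift=16, rng=None):
    """Randomly roll the image along its spatial axes, as a Jitter-style
    input modifier would do before each gradient step."""
    rng = rng if rng is not None else np.random.RandomState(0)
    dx, dy = rng.randint(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(image, dx, axis=0), dy, axis=1)

img = np.arange(16).reshape(4, 4)
shifted = jitter(img, max_shift=2)
# Rolling preserves pixel values; only their positions change.
print(sorted(shifted.ravel().tolist()) == list(range(16)))  # True
```

Because the shift is re-drawn every iteration, no single pixel alignment is ever over-optimized, which is why the resulting maximization images look crisper.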
<hr/>
## Citation
Please cite keras-vis in your publications if it helped your research. Here is an example BibTeX entry:
```
@misc{raghakotkerasvis,
  title={keras-vis},
  author={Kotikalapudi, Raghavendra and contributors},
  year={2017},
  publisher={GitHub},
  howpublished={\url{https://github.com/raghakot/keras-vis}},
}
```