# Spiking-Neural-Network
This is the Python implementation of a hardware-efficient spiking neural network. It includes modified learning and prediction rules that can be realised in hardware and are energy efficient. The aim is to develop a network that can be used for on-chip learning as well as prediction.
Spike-Time Dependent Plasticity (STDP) algorithm will be used to train the network.
<p align="center">
<img src="http://www.kdnuggets.com/wp-content/uploads/neuron1.jpg" width="500"/>
</p>
## Network Elements
* [Neuron](neuron/)
* [Synapse](synapse/)
* [Receptive field](receptive_field/)
* [Spike train](encoding/)
## [SNN Simulator for Classification](classification/)
Assuming that we have already learned the optimal weights of the network using the STDP algorithm (implemented in the next section), the simulator uses those weights to classify input patterns into different classes. It follows a 'winner-takes-all' strategy to suppress the non-firing neurons and produce distinguishable results. The steps involved in classifying a pattern are:
- For each input neuron, the membrane potential is calculated over its [receptive field](receptive_field/) (a 5x5 window).
- A [spike train](encoding/) is generated for each input neuron, with spike frequency proportional to the membrane potential.
- For each image, at each time step, the potential of each output neuron is updated according to the input spikes and the associated weights.
- The first output neuron to fire performs lateral inhibition on the rest of the output neurons.
- The simulator checks for output spikes.
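The steps above can be sketched in a few lines of NumPy. Everything here is illustrative: the encoding, threshold, and inhibition constants are hypothetical values, not the ones used by the simulator in this repo.

```python
import numpy as np

def rate_encode(potentials, duration, rng):
    """Generate spike trains whose firing rate is proportional to each
    input neuron's membrane potential (illustrative rate coding)."""
    rates = potentials / (potentials.max() + 1e-9)      # normalise to [0, 1]
    return (rng.random((duration, potentials.size)) < rates).astype(float)

def classify(spike_trains, weights, threshold=2.0, inhibition=-1.0):
    """Winner-takes-all forward pass: the first output neuron to cross
    the threshold inhibits the others. Returns spike counts per output."""
    n_out = weights.shape[1]
    potential = np.zeros(n_out)
    counts = np.zeros(n_out, dtype=int)
    for s in spike_trains:                # one row of spikes per time step
        potential += s @ weights          # accumulate weighted input spikes
        winner = np.argmax(potential)
        if potential[winner] >= threshold:
            counts[winner] += 1
            potential[:] = inhibition     # lateral inhibition on the rest
            potential[winner] = 0.0       # winner resets after firing
    return counts

# Example usage with random inputs and weights (4 output neurons):
rng = np.random.default_rng(0)
inputs = rng.random(25)                   # output of a 5x5 receptive field
w = rng.random((25, 4))
spikes = rate_encode(inputs, duration=100, rng=rng)
counts = classify(spikes, w)
```

The class assigned to a pattern is then simply `np.argmax(counts)`, the output neuron with the most spikes.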
### Results
The simulator was tested on binary classification; it can be extended to any number of classes. The images for the two classes are:
<img src="images/100.png" width="50"/> <img src="images/101.png" width="50"/>
Each class was presented to the network for 1000 time units, and the activity of the neurons was recorded. The graphs below show the potential of each output neuron versus time: the first 1000 TU correspond to class 1, the next 1000 to class 2. The red line indicates the threshold potential.
<img src="images/figure_11.png" width="300"/> <img src="images/figure_12.png" width="300"/> <img src="images/figure_13.png" width="300"/> <img src="images/figure_14.png" width="300"/>
The 1st output neuron is active for class 1, the 2nd is active for class 2, and the 3rd and 4th are mute for both classes. Hence, by counting the total spikes of each output neuron, we can determine the class to which a pattern belongs.
## [Training an SNN](training)
In the previous section we assumed that our network was already trained, i.e. the weights were learned using STDP and can be used to classify patterns. Here we'll see how STDP works and what needs to be taken care of while implementing this training algorithm.
### Spike Time Dependent Plasticity
STDP is a biological process the brain uses to modify its neural connections (synapses). Since the brain's unmatched learning efficiency has been appreciated for decades, this rule was incorporated into ANNs to train a neural network. The moulding of weights is based on the following two rules:
- Any synapse that contributes to the firing of a post-synaptic neuron should be strengthened, i.e. its weight should be increased.
- Synapses that don't contribute to the firing of a post-synaptic neuron should be diminished, i.e. their weights should be decreased.
Here is an explanation of how this algorithm works. Consider the scenario depicted in this figure:
<p align="center">
<img src="images/spikes.jpg" width="350"/>
</p>
Four neurons connect to a single neuron through synapses. Each pre-synaptic neuron fires at its own rate, and the spikes are sent forward by the corresponding synapse. The intensity of the spike transmitted to the post-synaptic neuron depends on the strength of the connecting synapse. Because of the input spikes, the membrane potential of the post-synaptic neuron increases, and the neuron sends out a spike after crossing its threshold. At the moment the post-synaptic neuron spikes, we monitor which pre-synaptic neurons helped it fire, i.e. which ones sent out spikes shortly before the post-synaptic spike. These neurons helped produce the post-synaptic spike by raising the membrane potential, so the corresponding synapses are strengthened. The factor by which a synapse's weight is increased is inversely proportional to the time difference between the post-synaptic and pre-synaptic spikes, as given by this graph:
<p align="center">
<img src="images/stdp_curve.jpg" width="400"/>
</p>
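The curve above is typically modelled as an exponential of the spike-time difference. Here is a minimal sketch of a per-synapse STDP update under that assumption; the amplitudes and time constants are illustrative, not the values used in this repo's training code.

```python
import numpy as np

# Illustrative constants for the exponential STDP window.
A_PLUS, A_MINUS = 0.8, 0.3        # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 10.0, 10.0  # decay time constants (time units)

def stdp_update(w, t_pre, t_post, w_max=1.0, w_min=0.0):
    """Update one synaptic weight from a pre/post spike-time pair.
    Pre-before-post potentiates, post-before-pre depresses, and the
    magnitude decays exponentially with |delta t|."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fired first: it helped the post spike -> strengthen
        w = w + A_PLUS * np.exp(-dt / TAU_PLUS)
    else:         # pre fired after the post spike -> weaken
        w = w - A_MINUS * np.exp(dt / TAU_MINUS)
    return float(np.clip(w, w_min, w_max))
```

For example, `stdp_update(0.5, t_pre=10, t_post=13)` strengthens the synapse, while `stdp_update(0.5, t_pre=13, t_post=10)` weakens it. Clipping keeps the weight inside hardware-representable bounds.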
### Generative Property of SNN
This property of a spiking neural network is very useful for analysing the training process. If the weights of all the synapses connected to an output-layer neuron are scaled to proper values and rearranged in the form of an image, they depict the pattern that neuron has learned and how distinctly it can classify that pattern. For example, after training a network on the MNIST dataset, if we scale the weights of all the synapses connected to a particular output neuron (784 in number) and form a 28x28 image from those scaled weights, we get a grayscale image of the pattern learned by that neuron. This property will be used later while demonstrating the results. [This](training/reconstruct.py) file contains the function that reconstructs an image from weights.
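The reconstruction amounts to a min-max rescaling followed by a reshape. This is a sketch of the idea behind `training/reconstruct.py`; the actual function in the repo may scale the weights differently.

```python
import numpy as np

def reconstruct(weights, shape=(28, 28)):
    """Scale a neuron's 784 incoming synaptic weights to the 0-255
    grayscale range and reshape them into a 28x28 image."""
    w = np.asarray(weights, dtype=float)
    w = (w - w.min()) / (w.max() - w.min() + 1e-9)  # normalise to [0, 1]
    return (w * 255).astype(np.uint8).reshape(shape)

# Example: reconstruct an image from random weights.
img = reconstruct(np.random.default_rng(1).random(784))
```

After training, strongly potentiated synapses show up as bright pixels, so the image directly visualises the learned digit.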
### Variable Threshold
In unsupervised learning it is very difficult to train a network when patterns have varying amounts of activation (white pixels in the case of MNIST). Patterns with higher activation tend to win in competitive learning and hence overshadow the others (this problem will be demonstrated later). Therefore this method of normalisation was introduced to bring them all down to the same level: the threshold for each pattern is calculated based on the number of activations it contains. The higher the number of activations, the higher the threshold. [This](training/var_th.py) file holds the function that calculates the threshold for each image.
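A minimal sketch of the idea, assuming the threshold grows linearly with total activation; the `base` and `scale` constants are illustrative, not the values used in `training/var_th.py`.

```python
import numpy as np

def variable_threshold(image, base=3.0, scale=0.02):
    """Set the firing threshold in proportion to the pattern's total
    activation (sum of pixel intensities, normalised to [0, 1] per pixel),
    so dense patterns do not dominate competitive learning."""
    activation = np.asarray(image, dtype=float).sum() / 255.0
    return base + scale * activation

# A dense pattern gets a higher threshold than a sparse one:
dense_th = variable_threshold(np.full((28, 28), 255))
sparse_th = variable_threshold(np.zeros((28, 28)))
```

With this in place, a digit like '0' (many white pixels) must accumulate more potential to fire than a sparse '1', levelling the competition between output neurons.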
### Lateral Inhibition
In neurobiology, lateral inhibition is the capacity of an excited neuron to reduce the activity of its neighbours. It prevents action potentials from spreading from excited neurons to neighbouring neurons in the lateral direction, creating a contrast in stimulation that allows increased sensory perception. This property is also called Winner-Takes-All (WTA): the neuron that gets excited first inhibits the other neurons in the same layer, i.e. lowers their membrane potentials.
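In code, the WTA step reduces to clamping every non-winning neuron's potential to a fixed inhibitory value; the constant below is illustrative.

```python
import numpy as np

def inhibit(potentials, winner, inhibition=-1.0):
    """Apply lateral inhibition: the first neuron to fire pulls every
    other neuron in the layer down to an inhibitory potential."""
    out = np.full(len(potentials), inhibition, dtype=float)
    out[winner] = potentials[winner]  # the winner keeps its potential
    return out

# Example: neuron 1 fires first and suppresses neurons 0 and 2.
after = inhibit(np.array([0.2, 0.9, 0.1]), winner=1)
```

A negative inhibitory potential means the losing neurons need extra input spikes before they can fire again, which is what produces the contrast between output neurons.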
## Training for 3 class dataset
Here are the results after training an SNN on the MNIST dataset with 3 classes (digits 0-2) and 5 output neurons. We leverage the generative property of the SNN and reconstruct images from the trained weights connected to each output neuron to see how well the network has learned each pattern. We also look at the membrane-potential-versus-time plot for each output neuron to see how the training process made that neuron sensitive to one particular pattern only.
**Neuron1**
<img src="images/graph1.png" width="300"/> <img src="images/neuron1.png" width="70"/>
**Neuron2**
<img src="images/graph2.png" width="300"/> <img src="images/neuron2.png" width="70"/>
**Neuron3**
<img src="images/graph3.png" width="300"/> <img src="images/neuron3.png" width="70"/>
**Neuron4**
<img src="images/graph4.png" width="300"/> <img src="images/neuron4.png" width="70"/>
Here we can clearly observe that Neuron 1 has learned pattern '1', Neuron 2 has learned '0', Neuron 3 is noise, and Neuron 4 has learned '2'. Consider the plot of Neuron 1: in the beginning, when the weights were randomly assigned, it fired for all the patterns; as training proceeded, it became specific to pattern '1' only and remained in an inhibitory state for the rest. Observing Neuron 3, we can conclude that it reacts to all the patterns and can be considered noise. Hence, it is advisable to have about 20% more output neurons than the number of classes.