SPP
===
Sparsity Preserving Projection, a feature extraction algorithm in the pattern recognition area
+++
Author : Denglong Pan
pandenglong@gmail.com
+++
What is SPP
Refer to https://github.com/lamplampan/SPP/wiki#what-is-spp
# What is SPP
SPP, Sparsity Preserving Projection, is an unsupervised dimensionality reduction algorithm. It uses minimum-L1-norm sparse reconstruction to preserve the sparse structure of the data.
SPP projections are unaffected by rotation, scaling, or translation of the data, and SPP can uncover the data's intrinsic class structure even when no class labels are given.
### Sparse reconstruction weight matrix
Training sample matrix
<img src="http://latex.codecogs.com/gif.latex?X&space;=&space;[x_{1},&space;x_{2},...,x_{n}]&space;\in&space;R^{m&space;\times&space;n}" title="X = [x_{1}, x_{2},...,x_{n}] \in R^{m \times n}" />
Use the weight vector <img src="http://latex.codecogs.com/gif.latex?s_{i}" title="s_{i}" /> from the sparse reconstruction as the coefficients of <img src="http://latex.codecogs.com/gif.latex?x_{i}" title="x_{i}" />, and solve the minimum-L1-norm problem. Define equation set [1] below, in which l denotes the all-ones vector:
<img src="http://latex.codecogs.com/gif.latex?\underset{s_{i}}{min}||s_{i}||_{1}" title="\underset{s_{i}}{min}||s_{i}||_{l}" />
<img src="http://latex.codecogs.com/gif.latex?x_{i}&space;=&space;Xs_{i}" title="x_{i} = Xs_{i}" />
<img src="http://latex.codecogs.com/gif.latex?l&space;=&space;l^{T}s_{i}" title="l = l^{T}s_{i}" />
Define the sparse reconstruction weight matrix below, in which <img src="http://latex.codecogs.com/gif.latex?\tilde{s}_{i}" title="\tilde{s}_{i}" /> is the optimal solution of equation set [1] :
<img src="http://latex.codecogs.com/gif.latex?S&space;=&space;[&space;\tilde{s}_{1},&space;\tilde{s}_{2},...,&space;\tilde{s}_{n}&space;]^{T}" title="S = [ \tilde{s}_{1}, \tilde{s}_{2},..., \tilde{s}_{n} ]^{T}" />
The weight vector <img src="http://latex.codecogs.com/gif.latex?s_{i}^{0}&space;=&space;[0,...,\alpha&space;_{i,j-1},&space;0&space;,\alpha&space;_{i,j+1}+...+0&space;]^{T}" title="s_{i}^{0} = [0,...,\alpha _{i,i-1}, 0 ,\alpha _{i,i+1}+...+0 ]^{T}" /> is sparse: in a face recognition setting, the training set contains many classes, so each sample is reconstructed mainly from the few samples of its own class.
A sample can then be reconstructed as follows:
<img src="http://latex.codecogs.com/gif.latex?x_{i}^{j}&space;=&space;0\cdot&space;x_{1}^{1}&space;+&space;...&space;+&space;\alpha&space;_{i,i-1}&space;\cdot&space;x_{i-1}^{j}&space;+&space;\alpha&space;_{i,i+1}&space;\cdot&space;x_{i+1}^{j}&space;+&space;...&space;+&space;0\cdot&space;x_{n}^{c}" title="x_{i}^{j} = 0\cdot x_{1}^{1} + ... + \alpha _{i,i-1} \cdot x_{i-1}^{j} + \alpha _{i,i+1} \cdot x_{i+1}^{j} + ... + 0\cdot x_{n}^{c}" />
Taking the reconstruction residual into consideration, equation set [1] can be relaxed into the following equation set [2], in which <img src="http://latex.codecogs.com/gif.latex?\varepsilon" title="\varepsilon" /> is the residual tolerance:
<img src="http://latex.codecogs.com/gif.latex?\underset{s_{i},t}{min}||s_{i}||_{l}" title="\underset{s_{i},t}{min}||s_{i}||_{l}" />
<img src="http://latex.codecogs.com/gif.latex?||x_{i}&space;-&space;Xs_{i}||<\varepsilon" title="||x_{i} - Xs_{i}||<\varepsilon" />
<img src="http://latex.codecogs.com/gif.latex?l&space;=&space;l^{T}&space;s_{i}" title="l = l^{T} s_{i}" />
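The repository solves these L1 problems in MATLAB with L1-magic-style solvers. Purely as an illustration, equation set [1] can be sketched in Python as a linear program using the standard split s = u − v with u, v ≥ 0 (the function name `sparse_weights` is an assumption, not part of the repo):

```python
import numpy as np
from scipy.optimize import linprog

def sparse_weights(X, i):
    """Illustrative solver for equation set [1] for sample i:
         min ||s||_1   s.t.   x_i = X s,   1 = 1^T s,
    with the i-th coefficient forced to zero so x_i is not
    reconstructed from itself.  LP split: s = u - v, u, v >= 0."""
    m, n = X.shape
    xi = X[:, i].copy()
    Xz = X.copy()
    Xz[:, i] = 0.0                            # exclude x_i itself
    c = np.ones(2 * n)                        # sum(u) + sum(v) = ||s||_1
    A_eq = np.vstack([
        np.hstack([Xz, -Xz]),                 # x_i = X s
        np.hstack([np.ones((1, n)), -np.ones((1, n))]),  # 1 = 1^T s
    ])
    b_eq = np.concatenate([xi, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]              # recover s = u - v
```

Equation set [2] relaxes the equality x_i = X s into a residual bound, which is no longer a plain LP; solvers for that constrained form are also provided by the L1-magic toolbox.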
### Eigenvector extraction
To find the projection w that preserves the optimal weight vectors <img src="http://latex.codecogs.com/gif.latex?\tilde{s}_{i}" title="\tilde{s}_{i}" />, we can define the following objective function [3]:
<img src="http://latex.codecogs.com/gif.latex?\underset{w}{min}\sum_{i=1}^{n}||w^{T}x_{i}&space;-&space;w^{T}X\tilde{s}_{i}||^{2}" title="\underset{w}{min}\sum_{i=1}^{n}||w^{T}x_{i} - w^{T}X\tilde{s}_{i}||^{2}" />
Through algebraic transformation, the objective above is equivalent to the following maximization problem, in which <img src="http://latex.codecogs.com/gif.latex?S_{\beta&space;}&space;=&space;S&space;+&space;S^{T}&space;-S^{T}S" title="S_{\beta } = S + S^{T} -S^{T}S" />
<img src="http://latex.codecogs.com/gif.latex?\underset{w}{max}\frac{w^{T}XS_{\beta&space;}X^{T}w}{w^{T}XX^{T}w}" title="\underset{w}{max}\frac{w^{T}XS_{\beta }X^{T}w}{w^{T}XX^{T}w}" />
The projection vectors are the eigenvectors corresponding to the d largest eigenvalues of the following generalized eigenvalue problem:
<img src="http://latex.codecogs.com/gif.latex?XS_{\beta&space;}X^{T}w&space;=&space;\lambda&space;XX^{T}w" title="XS_{\beta }X^{T}w = \lambda XX^{T}w" />
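Since S_β = S + S^T − S^T S is symmetric and XX^T is symmetric positive definite once the data has been reduced by PCA, a symmetric generalized eigensolver applies. A minimal sketch (the function name `spp_projection` is an assumption, with X holding samples as columns):

```python
import numpy as np
from scipy.linalg import eigh

def spp_projection(X, S, d):
    """Eigenvector extraction: solve X S_beta X^T w = lambda X X^T w
    and keep the eigenvectors of the d largest eigenvalues."""
    S_beta = S + S.T - S.T @ S       # symmetric by construction
    A = X @ S_beta @ X.T
    B = X @ X.T                      # assumed positive definite (apply PCA first)
    vals, vecs = eigh(A, B)          # generalized eigenvalues, ascending order
    return vecs[:, ::-1][:, :d]      # top-d eigenvectors as projection matrix W
```

New samples are then projected into the subspace as y = Wᵀx.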
### SPP algorithm
**Step 1** Use equation set [1] or equation set [2] to calculate the weight matrix S. It can be computed with standard linear programming tools such as L1-magic.
**Step 2** Calculate the projection vectors from objective function [3]: solve the generalized eigenvalue problem, take the d largest eigenvalues, and keep the corresponding eigenvectors as the subspace.
### Test result on ORL face lib
PCA + SPP + SRC is used for the testing.
**Why use PCA here**
We use PCA here to reduce the dimensionality. Each face sample has 92 × 112 = 10304 dimensions. The ORL library contains 40 subjects with 10 samples each. If we use 5 samples per subject for training, the constructed matrix is 10304 × (40 × 5) = 10304 × 200. With so many dimensions we face two problems:
1. MATLAB reports "OUT OF MEMORY" for a matrix of this size.
2. The number of rows exceeds the number of columns, so the reconstruction system is overdetermined and cannot be solved by the L1-magic algorithm.
**Test results**
Use 5 samples from each of the 40 subjects for training and the remaining samples for testing. Set the residual tolerance to 0.0001 and the number of extracted projection vectors to 80.
The recognition rate is 93% when the PCA dimension is 80.
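The README does not spell out the PCA step; as a hypothetical illustration, reducing the 10304-dimensional face vectors down to d = 80 can be sketched with an SVD-based PCA (samples as columns, function name `pca_reduce` assumed):

```python
import numpy as np

def pca_reduce(X, d):
    """Project samples (columns of X) onto the top-d principal
    components, e.g. 10304-dim ORL face vectors down to d = 80."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu                                       # center the samples
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)  # columns of U = PCs
    return U[:, :d].T @ Xc                            # d x n reduced data
```

After this step the training matrix is 80 × 200, so the reconstruction system becomes underdetermined and the L1 minimization is solvable.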
+++
How to run the algorithm?
Refer to https://github.com/lamplampan/SPP/wiki#how-to-run
# How to run
**Step 1** : Configure your ORL face lib path in the file orl_src.m . The default path is E:\ORL_face\orlnumtotal\ .
**Step 2** : Run orl_src.m