Multimed Tools Appl
DOI 10.1007/s11042-015-3042-2
Class specific centralized dictionary learning for face recognition
Bao-Di Liu¹ · Liangke Gui² · Yuting Wang⁵ · Yu-Xiong Wang² · Bin Shen³ · Xue Li⁴ · Yan-Jiang Wang¹
Received: 1 May 2015 / Revised: 15 August 2015 / Accepted: 22 October 2015
© Springer Science+Business Media New York 2015
Abstract Sparse representation based classification (SRC) and collaborative representation based classification (CRC) have demonstrated impressive performance for visual recognition. SRC and CRC assume that the training samples in each class contribute equally to the dictionary, and thus generate the dictionary directly from the training samples of the corresponding class. This may lead to high residual error and instability, to the detriment of recognition performance. One solution is the class specific dictionary learning (CSDL) algorithm, which greatly improves classification accuracy. However, CSDL fails to consider constraints on the sparse codes. In particular, it cannot guarantee that the sparse codes within the same class are concentrated under the dictionary learned for that class, although such concentration is beneficial to classification. To address these limitations, in this paper we propose a class specific centralized dictionary learning (CSCDL) algorithm that simultaneously considers the desired characteristics of both the dictionary and the sparse codes. The blockwise coordinate descent algorithm and Lagrange multipliers are used to optimize the corresponding objective function. Extensive experimental results on face recognition benchmark datasets demonstrate the superior performance of our CSCDL algorithm compared with conventional approaches.

Bao-Di Liu (thu.liubaodi@gmail.com) · Liangke Gui (liangkeg@cs.cmu.edu) · Yuting Wang (utdyc@student.kit.edu) · Yu-Xiong Wang (yuxiongw@cs.cmu.edu) · Bin Shen (bshen@purdue.edu) · Xue Li (lixue421@gmail.com) · Yan-Jiang Wang (yjwang@upc.edu.cn)

1 College of Information and Control Engineering, China University of Petroleum, Qingdao 266580, China
2 School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
3 Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
4 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
5 Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe 76131, Germany
Keywords Centralized dictionary learning · Face recognition · Class specific
1 Introduction
After decades of effort, and owing to its impressive performance, dictionary learning for sparse representation has gradually revealed its power in visual computing areas such as image annotation [25], image inpainting [30], image classification [14, 16], face recognition [32], object detection [28], transfer learning [29], and image denoising [6]. Different from traditional decomposition frameworks such as principal component analysis (PCA), non-negative matrix factorization [31], and low-rank factorization [27], sparse representation allows coding under over-complete bases (that is, the number of bases is greater than the dimension of the input data), and thus generates sparse codes capable of representing the data more adaptively.
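To make coding under an over-complete basis concrete, here is a minimal sketch (synthetic dimensions and data, not from the paper) that computes a sparse code for one signal with scikit-learn's lasso solver; note that the dictionary has more atoms than the signal has dimensions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k = 64, 256                    # signal dimension d < number of bases k: over-complete
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)    # unit-norm atoms, the usual dictionary normalization
x = rng.standard_normal(d)        # a toy input signal

# Solve the lasso problem min_s (1/(2d)) ||x - D s||_2^2 + alpha ||s||_1,
# treating the dictionary atoms as regression features.
lasso = Lasso(alpha=0.05, max_iter=10_000)
lasso.fit(D, x)
s = lasso.coef_
print(f"{np.count_nonzero(s)} of {k} coefficients are nonzero")
```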
Face recognition, one of the successful applications of sparse representation, is a classical yet challenging research topic in computer vision and pattern recognition [38]. Effective face recognition usually involves two stages: 1) feature extraction, and 2) classifier construction and label prediction. For the first stage, Eigenfaces were proposed by performing PCA [24], Laplacianfaces were proposed to preserve local information [9], and Fisherfaces were suggested to maximize the ratio of between-class scatter to within-class scatter [1]. Yan et al. [33] proposed a multi-subregion based correlation filter bank algorithm to extract both global-based and local-based face features. For the second stage, a nearest neighbor method was proposed to predict the label of a test image using its nearest neighbors among the training samples [5], and nearest subspace methods were proposed to assign the label of a test image by comparing its reconstruction error for each category [10, 22].
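A compact sketch of the nearest subspace decision rule just described, under the assumption that each class's training samples are stacked as the columns of a matrix (all names and shapes here are illustrative):

```python
import numpy as np

def nearest_subspace_label(x, class_samples):
    """Assign x to the class whose training-sample span reconstructs it best.

    class_samples: list of (d, n_c) arrays, one matrix per class.
    """
    residuals = []
    for A in class_samples:
        # Least-squares projection of x onto the span of this class's samples.
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        residuals.append(np.linalg.norm(x - A @ coef))
    return int(np.argmin(residuals))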
Under the nearest subspace framework, a sparse representation based classification (SRC) system was proposed and achieved impressive performance [32]. Given a test sample, the sparse representation technique represents it as a sparse linear combination of the training samples, and the predicted label is determined by the residual error from each class. To analyze SRC, collaborative representation based classification (CRC) was proposed as an alternative approach [36]. CRC represents a test sample as a linear combination of almost all the training samples. An interesting observation in [36] is that it is the collaborative representation, rather than the sparse representation, that makes the nearest subspace method powerful for classification.
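The two decision rules can be sketched side by side. In the sketch below, X is assumed to hold all training samples as columns and labels their class indices (both are illustrative names); the lasso step stands in for SRC's ℓ1 solver, and the closed-form ridge solution stands in for CRC's ℓ2 code. Variants of CRC additionally normalize each class residual by the corresponding coefficient norm, which is omitted here for brevity:

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_label(x, X, labels, alpha=0.01):
    """SRC: sparse-code x over all training samples, classify by class residual."""
    s = Lasso(alpha=alpha, max_iter=10_000).fit(X, x).coef_
    classes = np.unique(labels)
    res = [np.linalg.norm(x - X[:, labels == c] @ s[labels == c]) for c in classes]
    return classes[np.argmin(res)]

def crc_label(x, X, labels, lam=0.1):
    """CRC: ridge-regularized (collaborative) code with a closed-form solution."""
    n = X.shape[1]
    s = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ x)
    classes = np.unique(labels)
    res = [np.linalg.norm(x - X[:, labels == c] @ s[labels == c]) for c in classes]
    return classes[np.argmin(res)]
```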
Despite their promise, both SRC and CRC directly use the training samples as the dictionary for each class. By contrast, a well learned dictionary, especially one learned by enforcing some discriminative criteria, can greatly reduce the residual error and achieve superior performance for classification tasks. Existing discriminative dictionary learning approaches fall mainly into three types: shared dictionary learning, class specific dictionary learning, and hybrid dictionary learning. In shared dictionary learning, each basis is associated with all the training samples. Mairal et al. [18] proposed to learn a discriminative dictionary together with a linear classifier on the coding coefficients. Liu et al. [17] learned a Fisher discriminative dictionary. Zhang and Li [37] proposed a joint dictionary learning algorithm for face recognition. In class specific dictionary learning, each basis corresponds to a single class, so that the class specific reconstruction error can be used for classification. Yang et al. [35] learned a dictionary for each class with sparse coefficients and applied it to face recognition. Sprechmann and Sapiro [21] also learned a dictionary for each class with sparse representation and used it for signal clustering. Castrodad and Sapiro [3] learned a set of action specific dictionaries with non-negative penalties on both dictionary atoms and representation coefficients. Wang et al. [26] introduced mutual incoherence information to promote class specific dictionary learning in action recognition. Yang et al. [34] embedded Fisher discriminative information into class specific dictionary learning. Self-explanatory sparse representation based dictionary learning was suggested to enhance the interpretability of class specific dictionary learning algorithms [13].
The shared dictionary learning approaches usually lead to a dictionary of small size, and the discriminative information (i.e., the label information corresponding to the coding coefficients) is embedded into the dictionary learning framework. The class specific dictionary learning approaches usually focus on the classifier construction aspect, since each basis vector is tied to a single class label. Hybrid dictionary learning then learns a combination of shared basis vectors and class specific basis vectors. Zhou et al. [39] learned a hybrid dictionary with Fisher regularization on the coding coefficients. Gao et al. [7] learned a shared dictionary to encode common visual patterns and a class specific dictionary to encode subtle visual differences among categories for fine-grained image representation. Liu et al. [12] proposed a hierarchical dictionary learning method that produces a shared dictionary and a cluster specific dictionary. In spite of the demonstrated performance of hybrid dictionary learning, it remains a challenge to balance the shared dictionary and the class specific dictionary.
Compared with SRC, the conventional class specific dictionary learning approach [35] enhances discrimination to some extent by learning a dictionary for each class. However, the sparse codes obtained with the dictionary of one class and the sparse codes in other classes are likely to be interdependent, which leads to erroneous discrimination. Such interdependence among classes could potentially be reduced by centralizing the sparse codes.
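As a rough, hypothetical illustration of what centralized sparse codes could mean (the paper's actual objective function appears later in the text; this penalty, its name, and the array shapes are our own assumptions), one can penalize the deviation of each code from the mean code of its class:

```python
import numpy as np

def centralization_penalty(S):
    """Hypothetical centralization term: sum of squared deviations of each
    column of S (one sparse code) from the mean code of its class; it is
    small exactly when the codes of a class are concentrated."""
    m = S.mean(axis=1, keepdims=True)          # class mean code
    return np.linalg.norm(S - m, "fro") ** 2   # ||S - m 1^T||_F^2
```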
In this paper, motivated by the superior performance of class specific dictionary learning and the benefit of centralized sparse codes, we propose class specific centralized dictionary learning (CSCDL) for sparse representation based classification. Our key insight is to make the sparse codes in the same class concentrated. The main contribution is three-fold:

• A novel class specific centralized dictionary learning (CSCDL) approach is proposed, which guarantees that the sparse codes in the same class are concentrated.
• Blockwise coordinate descent and Lagrange multipliers are used to efficiently solve the corresponding optimization problems.
• Our proposed CSCDL algorithm achieves superior performance on several benchmark datasets for face recognition tasks, which demonstrates its effectiveness.
The rest of the paper is organized as follows. Section 2 reviews conventional sparse representation based classification and collaborative representation based classification …

[Preview ends here; the remaining 18 pages are not included.]