Learning Active Facial Patches for Expression Analysis
Lin Zhong†, Qingshan Liu‡, Peng Yang†, Bo Liu†, Junzhou Huang§, Dimitris N. Metaxas†

†Department of Computer Science, Rutgers University, Piscataway, NJ, 08854
‡Nanjing University of Information Science and Technology, Nanjing, 210044, China
§Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, 76019
{linzhong,qsliu,peyang,lb507,dnm}@cs.rutgers.edu, Jzhuang@uta.edu
Abstract
In this paper, we present a new idea for analyzing facial expression by exploring the common and specific information among different expressions. Inspired by the observation that only a few facial parts are active in expressing emotion (e.g., around the mouth and eyes), we seek to discover the common and specific patches that are important for discriminating all expressions and one particular expression, respectively. A two-stage multi-task sparse learning (MTSL) framework is proposed to efficiently locate these discriminative patches. In the first MTSL stage, multiple expression recognition tasks, each of which aims to find the dominant patches for one expression, are combined to locate the common patches. In the second stage, two related tasks, facial expression recognition and face verification, are coupled to learn the specific facial patches for each individual expression. Extensive experiments validate the existence and significance of the common and specific patches. Utilizing these learned patches, we achieve superior performance on expression recognition compared to state-of-the-art methods.
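The first stage described above can be illustrated with a minimal sketch of multi-task sparse learning using a row-wise (ℓ2,1) penalty: each column of the weight matrix serves one expression task, rows correspond to patches, and rows whose norm survives the penalty across all tasks mark "common" patches. This is a toy implementation under stated assumptions (plain least-squares loss, proximal gradient descent, synthetic data), not the authors' actual formulation:

```python
import numpy as np

def mtsl_common_patches(X_tasks, y_tasks, lam=0.1, lr=0.05, iters=2000):
    """Toy multi-task sparse learning with an l2,1 (row-sparsity) penalty.

    Each task t has a design matrix X_t (samples x patches) and targets y_t.
    A shared weight matrix W (patches x tasks) is fit with squared loss; the
    l2,1 penalty zeroes whole rows, so the surviving rows mark patches that
    are useful across all tasks, i.e. "common" patches.
    """
    T = len(X_tasks)
    P = X_tasks[0].shape[1]
    W = np.zeros((P, T))
    for _ in range(iters):
        # Gradient step on the smooth squared-loss term, task by task.
        G = np.zeros_like(W)
        for t in range(T):
            r = X_tasks[t] @ W[:, t] - y_tasks[t]
            G[:, t] = X_tasks[t].T @ r / len(y_tasks[t])
        W -= lr * G
        # Proximal step for the l2,1 norm: row-wise soft thresholding,
        # which sets entire rows (patches) exactly to zero.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return W

# Synthetic check: 6 "patches", 2 tasks; only patches 0 and 1 carry signal
# in both tasks, so they should be selected as the common patches.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((80, 6))
X2 = rng.standard_normal((80, 6))
y1 = X1[:, 0] - X1[:, 1]
y2 = 0.8 * X2[:, 0] + X2[:, 1]
W = mtsl_common_patches([X1, X2], [y1, y2], lam=0.05)
common = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-3)
print(common)
```

The row-wise soft-thresholding step is what couples the tasks: a patch is kept only if its combined weight across all expression tasks justifies the penalty, which is the mechanism for finding patches shared by every expression.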
1. Introduction
Facial expressions play a significant role in our daily communication. Recognizing these expressions has extensive applications, such as human-computer interaction, multimedia, and security [21, 15, 23]. However, as the basis of expression recognition, the exploration of the underlying functional facial features is still an open problem.
Studies in psychology show that the facial features of expressions are located around the mouth, nose, and eyes, and that their locations are essential for explaining and categorizing facial expressions. Through electrical muscle stimulation, Duchenne [7, 1] found that most expressions are invoked by a small number of facial muscles around the mouth, nose, and eyes (see Figure 1(a)). This indicates that most of the descriptive regions for each expression are
Figure 1. (a) Illustration of the facial muscle distribution [7]. (b) Major AUs for the six expressions. The arrows represent AUs.
located around certain face parts. Moreover, expressions are commonly categorized into six popular "basic expressions" [10]: anger, disgust, fear, happiness, sadness, and surprise. As shown in Figure 1(b), each of these basic expressions can be further decomposed into a set of related action units (AUs) [8]; e.g., happiness can be decomposed into cheek raiser and lip corner puller. However, no existing method statistically utilizes this prior knowledge about facial muscles and AUs to aid facial expression analysis in computer vision.
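As a concrete illustration of this AU decomposition, the expression-to-AU mapping can be encoded as a simple lookup table. The AU numbers and names below come from the FACS literature; the exact AU set per expression varies across sources, so this particular selection is illustrative only and is not taken from Figure 1(b):

```python
# Illustrative (not exhaustive) mapping of the six basic expressions to
# prototypical FACS action units. The per-expression sets are examples;
# different FACS-based sources list slightly different prototypes.
BASIC_EXPRESSION_AUS = {
    "happiness": ["AU6 (cheek raiser)", "AU12 (lip corner puller)"],
    "sadness":   ["AU1 (inner brow raiser)", "AU4 (brow lowerer)",
                  "AU15 (lip corner depressor)"],
    "surprise":  ["AU1 (inner brow raiser)", "AU2 (outer brow raiser)",
                  "AU5 (upper lid raiser)", "AU26 (jaw drop)"],
    "fear":      ["AU1 (inner brow raiser)", "AU4 (brow lowerer)",
                  "AU20 (lip stretcher)"],
    "anger":     ["AU4 (brow lowerer)", "AU5 (upper lid raiser)",
                  "AU23 (lip tightener)"],
    "disgust":   ["AU9 (nose wrinkler)", "AU15 (lip corner depressor)"],
}

# The happiness entry matches the decomposition named in the text:
# cheek raiser (AU6) plus lip corner puller (AU12).
print(BASIC_EXPRESSION_AUS["happiness"])
```

Note how individual AUs recur across expressions (e.g., AU1 in sadness, surprise, and fear) while some are expression-specific, mirroring the paper's distinction between common and specific facial information.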
Previous expression recognition methods can generally be categorized into two groups: AU-based methods and appearance-based methods. AU-based methods [18, 19] recognize expressions by detecting AUs, but all of them suffer from the difficulty of AU detection. Appearance-based methods [13, 25, 16] reveal the differences among expressions through facial appearance variations, which has proved more reliable on single images. However, these methods assign weights to different face parts empirically, and thus lack statistical support for the weight settings. This motivates us to make full use of the prior knowledge from facial muscle and AU studies to extract the most discriminative regions, which can further assist expression analysis.
Inspired by the locations of AUs, we divide the human face into non-overlapping patches and then conceptually group these patches into three categories: common facial patches,