Null Space Diversity Fisher Discriminant Analysis for
Face Recognition
Xingzhu LIANG a, Yu’e LIN b, Gaoming YANG c, Guangyu XU d
School of Computer Science and Engineering, Anhui University of Science and Technology,
Huainan, 232001, China
a lxz9117@126.com, b linyu_e@126.com, c gmyang@aust.edu.cn, d gyxu@aust.edu.cn
Abstract.
Feature extraction algorithms, which attempt to project the original data into a
lower-dimensional feature space, have drawn much attention. In this paper, a new
feature extraction method called null space diversity Fisher discriminant analysis
(NSDFDA), based on the enhanced Fisher discriminant criterion (EFDC), is proposed
for face recognition. NSDFDA rests on a new optimization criterion under which all
discriminant vectors are computed in the null space of the within-class scatter matrix.
Moreover, the proposed algorithm extracts orthogonal discriminant vectors in the
feature space and does not suffer from the small sample size problem, which is
desirable for many pattern analysis applications. Experimental results on the Yale
database show the effectiveness of the proposed method.
Keywords: feature extraction, enhanced Fisher discriminant criterion, null space,
within-class scatter matrix, small sample size problem
1 Introduction
Feature extraction is a critical issue in face recognition tasks. Its goal is to map
high-dimensional data samples to a lower-dimensional space such that certain
properties are preserved. Among dimensionality reduction methods, Fisher linear
discriminant analysis (FLDA) [1] is the most popular and has been widely used in
many classification applications. FLDA seeks projection directions that maximize the
ratio of the trace of the between-class scatter matrix to the trace of the within-class
scatter matrix, as formalized below.
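As a reference point, the trace-ratio form of the Fisher criterion just described can be
written as follows; this is a standard formulation, and the symbols W, S_b, and S_w are
notation introduced here for illustration rather than taken from later sections of this
paper:

\[
W^{*} = \arg\max_{W} \frac{\operatorname{tr}\left(W^{T} S_{b} W\right)}{\operatorname{tr}\left(W^{T} S_{w} W\right)}, \qquad
S_{b} = \sum_{i=1}^{c} N_{i}\,(\mu_{i}-\mu)(\mu_{i}-\mu)^{T}, \qquad
S_{w} = \sum_{i=1}^{c} \sum_{x \in X_{i}} (x-\mu_{i})(x-\mu_{i})^{T},
\]

where c is the number of classes, N_i and \mu_i are the size and mean of the i-th class
X_i, and \mu is the global mean. In the small sample size setting typical of face
recognition, S_w is singular, which motivates null-space methods such as the one
proposed in this paper.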
However, recent research shows that the samples may reside on a nonlinear
submanifold. FLDA fails to discover the underlying submanifold structure because it
aims only to preserve the global structure of the samples. Another technique, Locality
Preserving Projections (LPP) [2], has been proposed for dimensionality reduction; it
can preserve the intrinsic local geometry of the data, as sketched after this paragraph.
However, LPP has no direct relationship to classification.
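For completeness, a minimal sketch of the LPP objective of [2]; this is the standard
formulation, and the symbols a, x_i, W_{ij}, D, and L are notation introduced here,
not taken from this paper. LPP seeks a projection vector a that minimizes

\[
\min_{a} \; \sum_{i,j} \left(a^{T}x_{i} - a^{T}x_{j}\right)^{2} W_{ij}
\quad \text{subject to} \quad a^{T} X D X^{T} a = 1,
\]

which reduces to the generalized eigenvalue problem \(X L X^{T} a = \lambda X D X^{T} a\),
where W_{ij} is an affinity weight between neighboring samples x_i and x_j,
D_{ii} = \sum_{j} W_{ij}, and L = D - W is the graph Laplacian. Since the weights
W_{ij} are built from the geometry of the data alone, without class labels, LPP has no
direct relationship to classification.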