Detection and Analysis of Hair
Yaser Yacoob and Larry S. Davis, Member, IEEE
Abstract—We develop computational models for measuring hair appearance for
comparing different people. The models and methods developed have
applications to person recognition and image indexing. An automatic hair detection
algorithm is described and results reported. A multidimensional representation of
hair appearance is presented and computational algorithms are described.
Results on a data set of 524 subjects are reported. Identification of people using
hair attributes is compared to eigenface-based recognition and to a
joint eigenface-hair identification.
Index Terms—Human identification, face recognition, eigenfaces, hair detection.
1 BACKGROUND
HAIR is an important feature of human appearance, but its
detection, representation, analysis, and use have not been studied
in the computer vision community. Hair analysis has at least two
potential application areas: human identification and image
indexing of faces. It has been suggested [14] that humans employ
hair as a cue for face recognition. Specifically, it was shown that
hair is a prominent cue and that changes in hairstyle or facial hair
can mislead the observer in the recognition of faces. Also, [2]
contends, based on a survey of cue saliency, that hair is the most
important single feature for recognizing familiar faces, suggesting
that it should be advantageous to use in recognition. Since hair
appearance and attributes can so easily be changed, they have been
widely regarded as unstable features for human identification. The
fact is, however, that while humans can drastically manipulate
their hair to significantly alter their appearance, they typically do
not (i.e., the majority maintains a stable hair appearance while a
minority may significantly alter hair appearance even over short
periods). There is a variety of situations (e.g., partial face occlusion,
side views, and back views) where face recognition is not viable,
yet hair may provide a useful cue for identification or at least
narrowing possible matches. Moreover, identity verification may
also be improved by evaluation of hair attributes.
We are not aware of any prior work on hair detection,
representation, and use in the computer vision or image processing
communities. Perhaps the exception is the work of [5], which
proposed hair texture analysis using four measures: the gray-level
co-occurrence matrix, the gray-level difference vector, the gray-level
run-length matrix, and the neighboring gray-level dependence matrix.
However, hair has been an important research topic in computer
graphics and animation [6], [9].
An extensive discussion of hair properties and associated
attributes can be found in [1]. Hair can be represented along the
following dimensions: length, volume, surface area, dominant color,
coloring (i.e., color variations), forehead/outer hairline, density, baldness,
symmetry, split location, reflectance/shine, structural alteration (i.e.,
banded, layered, or braided hair), layering arrangement, texture, side-
burns, and facial hair cover. In the rest of the paper, we address
several of these dimensions. Structural alterations, layering,
density, and facial hair are not addressed due to the difficult
challenge of 3D recovery of shape properties or the difficulty of
observing them in typical image resolutions.
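The dimensions above suggest a simple per-subject feature container. As a minimal sketch (the field names, types, and normalizations below are our own illustrative assumptions; the paper does not prescribe a data structure):

```python
from dataclasses import dataclass

# Illustrative container for the hair-appearance dimensions listed above.
# Encodings (face-normalized units, RGB means, etc.) are assumptions.
@dataclass
class HairAppearance:
    length: float          # e.g., normalized by face height
    volume: float          # 2D area proxy of the hair region
    surface_area: float
    dominant_color: tuple  # mean (R, G, B) of detected hair pixels
    coloring: float        # color variation, e.g., RGB std. deviation
    hairline: list         # sampled forehead/outer hairline contour
    baldness: float        # fraction of scalp region classified as skin
    symmetry: float        # left/right hair-mass ratio
    split_location: float  # horizontal position of the part, if any
    shine: float           # reflectance/shine measure
```

Comparing two subjects could then reduce to a weighted distance over whichever of these attributes are observable in both images.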
2 APPROACH
A data set of 524 color images of subjects (1,600 × 1,200 and 768 ×
576 pixels) taken in several locations (hair salons, on campus, and
social events) and over a period of a few months was collected
(multiethnic and balanced numbers of males and females). Out of
these, 126 faces were taken from the Martinez and Benavente
database [10]. This data set is used to evaluate the similarity
between the hair of subjects based on individual attributes of their
hair. A second data set consisting of more than 3,100 images
(126 subjects) taken from the Martinez and Benavente database [10]
is employed to assess the performance of person-identification
from single and aggregate hair attributes and eigenface-hair
information.
2.1 Hair Detection
Hair is perhaps the most variable aspect of human appearance, and
its automatic detection is challenging. We describe an algorithm for
automatic hair detection, assuming that faces are in frontal view. The
detection algorithm consists of the following steps (the first two are
available in the public domain and are not described in detail here):
. Face detection. Face detection has been reported by many
researchers (e.g., [7], [13]). We employ the algorithm based
on a cascade of boosted classifiers (part of Intel’s OpenCV)
to detect face regions in the image [7].
. Eye detection. We also use the cascade of boosted
classifiers to train eye detectors to locate the eyes within
a face region. Face and eye detection allow us to normalize
face sizes so hair representations can be compared.
. Skin color modeling. The subject-specific skin color is
modeled based on the automatic selection of three regions,
two are below the eyes and one at the forehead (see Fig. 1a).
The color model follows [4] and is discussed in Section 2.2.
This skin modeling approach takes into account the
possibility that some nonskin pixels may be present in
the rectangles.
. Head hair color modeling. Hair is assumed to be present
at one or more of three principal locations adjacent to facial
skin, namely, the right, middle, and left sides of the upper
face (thick white rectangles in Fig. 1a). The initial areas are
automatically set based on the location of the detected face
and eyes. The skin color model is used to identify nonskin
pixels in these regions, and these pixels form the seed to
separately model the hair color in each region. If the
distance between the three colors (i.e., the distance
between the means of the RGB values of the colors) is
small, then the overall color is recalculated using the pixels
of the three regions; otherwise, the color is computed at the
forehead rectangle and is assumed to be the seed color. The
seed color is iteratively refined by computing the model of
the color of the rectangles above each of the current
rectangles, and examining if this color is close to the seed
color. If it is close, the current model is recalculated. The
process ends when the color of a rectangle is not close to
the seed color. Standard image processing techniques are
used to fill in holes in the hair region and create a
connected component.
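The iterative seed-color refinement in the last step can be sketched as follows. The Euclidean RGB distance, the threshold value, the rectangle geometry, and the synthetic test image are all illustrative assumptions; the paper applies this on detected face regions in real images.

```python
import numpy as np

def region_mean(image, rect):
    """Mean RGB color of a rectangular region (top, left, height, width)."""
    top, left, h, w = rect
    return image[top:top + h, left:left + w].reshape(-1, 3).mean(axis=0)

def grow_hair_region(image, seed_rect, threshold=40.0):
    """Grow the hair model upward from a seed rectangle, one rectangle
    at a time, stopping when the next rectangle's mean color is no
    longer close to the current hair color. The color model is
    recalculated over all accepted rectangles after each step,
    mirroring the iterative refinement described above."""
    top, left, h, w = seed_rect
    accepted = [seed_rect]
    color = region_mean(image, seed_rect)
    while top - h >= 0:
        top -= h                       # candidate rectangle one step up
        cand = (top, left, h, w)
        if np.linalg.norm(region_mean(image, cand) - color) > threshold:
            break                      # candidate no longer matches seed
        accepted.append(cand)
        pixels = np.concatenate([image[t:t + hh, l:l + ww].reshape(-1, 3)
                                 for t, l, hh, ww in accepted])
        color = pixels.mean(axis=0)    # refined hair color model
    return accepted, color

# Synthetic 12x4 image: background on top, hair in the middle, skin below.
img = np.zeros((12, 4, 3), dtype=float)
img[0:3] = (250, 250, 250)   # background
img[3:9] = (40, 30, 20)      # dark hair
img[9:12] = (200, 160, 140)  # skin
rects, color = grow_hair_region(img, seed_rect=(6, 0, 3, 4))
print(len(rects))  # 2: the seed rectangle and one grown upward
```

The loop terminates either when a candidate rectangle's color diverges from the hair model (here, when it reaches the background) or when the image boundary is reached.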
Fig. 1a shows an example of automatic hair detection. The face
and eyes are detected and shown. The skin-color sampling areas
are shown as three green rectangles and the initial three sampling
areas for the hair are shown as thick white rectangles. The thinner
1164 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 7, JULY 2006
. The authors are with the Computer Vision Laboratory, University of
Maryland, College Park, MD 20742. E-mail: {yaser, lsd}@umiacs.umd.edu.
Manuscript received 4 Aug. 2004; revised 7 Oct. 2005; accepted 9 Jan. 2006;
published online 11 May 2006.
Recommended for acceptance by T. Tan.
For information on obtaining reprints of this article, please send e-mail to:
tpami@computer.org, and reference IEEECS Log Number TPAMI-0403-0804.
0162-8828/06/$20.00 © 2006 IEEE Published by the IEEE Computer Society