2019 International Conference on Energy, Power, Environment and Computer Application (ICEPECA 2019)
ISBN: 978-1-60595-612-1
A Multi-View Texture Fusion Approach for High Quality
3D Face Modelling
Bing-chuan LI 1, Yu-ping YE 1,2, Zhan SONG 1,3,*, Ling-sheng KONG 2 and Su-ming TANG 1,*
1 Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
2 Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
3 The Chinese University of Hong Kong, Hong Kong, China
*Corresponding author
Keywords: Texture mapping, Multi-view reconstruction, Visibility analysis.
Abstract. This paper presents an accurate method for 3D face texture mapping based on a multiple-view 3D scanning system. Given a reconstructed 3D mesh model and a set of calibrated images, the proposed method creates a high-quality texture mosaic of the surface while avoiding noticeable seams, color discontinuities and ghosting artifacts. The method performs two steps: filtering of occluded areas and fusion of the different texture images. The filtering identifies areas that are invisible from a camera viewpoint and leaves them untextured; it combines the geometric features of the object with the visual range of the camera. Experimental results show that texture mapping quality and precision are improved in comparison with conventional methods.
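As a rough illustration of the occlusion-filtering step described above, the following sketch tests whether a mesh facet faces a given camera by combining the facet's geometry (its normal) with the camera position. This is an assumed minimal orientation test only, not the authors' full pipeline; a complete occlusion filter would additionally ray-cast against the mesh to detect facets hidden behind other geometry.

```python
import numpy as np

def facet_faces_camera(vertices, camera_center, eps=1e-6):
    """Back-face test: does a triangular facet face the camera?

    vertices:      (3, 3) array, one row per vertex of the facet.
    camera_center: (3,) camera position in world coordinates.

    A facet whose outward normal points away from the camera cannot be
    textured from that view and is left untextured.
    """
    v0, v1, v2 = vertices
    normal = np.cross(v1 - v0, v2 - v0)      # outward normal (CCW winding)
    centroid = (v0 + v1 + v2) / 3.0
    view_dir = camera_center - centroid      # from facet toward camera
    return float(np.dot(normal, view_dir)) > eps
```

A facet passing this test is still only a candidate; intersection with the rest of the mesh must be checked before its texture is accepted.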
Introduction
3D reconstruction is an important research topic in both computer vision and computer graphics. It studies how to obtain the three-dimensional information of real-world objects via passive or active optical means. 3D shape reconstruction and texture reconstruction from photographs are the two major aspects of modeling real objects. Many existing techniques can generate image-based 3D models with high accuracy [1]. However, existing texture mapping methods usually lack precision, especially for texture images captured from various viewpoints or under changing lighting conditions [2].
In the past decades, many sophisticated texture generation approaches have been proposed. Early works focused on different weighting heuristics to average overlapping textures. According to the weighting function, these approaches can be broadly categorized as Weighted Blending [3], Multi-Band Blending [4] and Super-Resolution Maps [5] methods. The underlying principle of these methods is to attach a color attribute to each vertex and then blend the color intensities from multiple views by computing the weights of all views in which the vertex is visible. Other methods generate a texture map by collecting texture patches together; combining texture patches has much in common with the stitching and texture synthesis of planar images. Graph-cut optimization and gradient-domain techniques [6] are often used to improve performance over plain intensity blending.
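The per-vertex blending principle above can be sketched as follows. The weight function is an assumption for illustration (e.g. the cosine between the vertex normal and each viewing direction is a common choice); the cited methods each define their own.

```python
import numpy as np

def blend_vertex_color(colors, weights):
    """Blend the color samples of one vertex from its visible views.

    colors:  (k, 3) RGB samples of the vertex, one per visible view.
    weights: (k,) non-negative per-view weights.

    Returns the normalized weighted average color.
    """
    c = np.asarray(colors, dtype=float)
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    if total <= 0.0:                 # no reliable view: fall back to plain mean
        return c.mean(axis=0)
    return (w[:, None] * c).sum(axis=0) / total
```

For example, two samples (1, 0, 0) and (0, 0, 1) with weights 3 and 1 blend to (0.75, 0, 0.25); views seen at grazing angles thus contribute little to the final color.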
Instead of blending color intensities from multiple views, some methods introduce a Markov Random Field (MRF) [7] into texture patch registration to optimize texture alignment; they consider both image visibility and color continuity, and compute the texture of each facet from only one viewpoint. After texture patch registration, a Poisson fusion [8] step is applied to deal with the chromatic aberration across patch joints. These methods achieve good results in texture fusion for the objects described in their respective papers. But there are still some problems like