MODEL-BASED 3-D FACE ANIMATION SYSTEM (LIP-SYNCHRONIZED) DESIGN FROM A VIDEO SOURCE

Date: 2002
Author: Choi, Kyung-Yung
Type: Thesis
Degree Level: Masters

Abstract
This thesis proposes a model-based 3-D talking head animation system and constructs
a simple 3-D face model and its animation using Virtual Reality Modeling
Language (VRML) 2.0 in conjunction with a Java Application Programming
Interface (API) to VRML. The system extracts facial feature information from a digital
video source. Face detection and facial feature extraction are prerequisite stages
to track the key facial features throughout the video sequence. Face detection is performed
using face-relevant color information in the normalized YCbCr color space.
An Independent Component Analysis (ICA) approach is applied to the localized facial
images to identify the major facial components. Then, an image processing
approach is deployed to extract and track the key facial features precisely. Streams
of the extracted facial feature parameters are transferred to the animation control points of the designed VRML 3-D face model. Since the face model
is defined in 3-D space while the video source is a 2-D representation,
heuristic rules are embedded to estimate the coordinates of unmeasurable points so that
the 3-D talking head model and its animation remain visually acceptable. The standard
Miss America test sequence (QCIF format, 30 Hz) is used,
and the capability of the proposed system is verified and demonstrated.
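The face detection stage described above locates the face through color information in the YCbCr space. A minimal sketch of that idea is shown below; it assumes the common skin-color formulation with ITU-R BT.601 RGB-to-YCbCr conversion and widely cited chrominance threshold ranges. The conversion constants and the `cb_range`/`cr_range` defaults are illustrative assumptions, not the thesis's actual normalization or thresholds.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H x W x 3) to YCbCr using ITU-R BT.601 constants."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely skin pixels via chrominance thresholding.

    The Cb/Cr ranges are illustrative defaults from the skin-detection
    literature, not the values used in the thesis.
    """
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

In a full pipeline, the largest connected region of the resulting mask would be taken as the face region, which then seeds the ICA-based component identification and feature tracking.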