Within the past 15 years, there has been increasing interest in automated face alignment within the computer vision and machine learning communities. Face alignment – the problem of automatically locating detailed facial landmarks across different subjects, illuminations, and viewpoints – is critical to face analysis applications such as identification and facial expression and action unit analysis, and to many human-computer interaction and multimedia applications.
The most common approach is 2D alignment, which treats the face as a 2D object. This assumption holds as long as the face is frontal and nearly planar. As head orientation departs from frontal, however, the assumption breaks down: annotated 2D points lose correspondence across views, and pose variation produces self-occlusion that confounds landmark annotation.
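As a concrete illustration of why fixed 2D annotations break down under rotation, the sketch below (with made-up landmark coordinates and a simple orthographic camera, both assumptions for illustration only) rotates a few 3D face points about the vertical axis and projects them to 2D. At large yaw the far jaw corner projects inside the face contour and faces away from the camera, i.e. it is self-occluded.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# Hypothetical 3D landmarks (x, y, z) in a head-centered frame,
# z pointing toward the camera: left jaw corner, nose tip, right jaw corner.
landmarks = np.array([[-5.0, 0.0, -2.0],
                      [ 0.0, 1.0,  3.0],
                      [ 5.0, 0.0, -2.0]])

def project(points_3d, yaw):
    """Orthographic projection after a yaw rotation: keep (x, y), return depth too."""
    rotated = points_3d @ yaw_rotation(yaw).T
    return rotated[:, :2], rotated[:, 2]

frontal_2d, _ = project(landmarks, 0.0)
profile_2d, depth = project(landmarks, np.radians(70.0))
# Frontally the landmarks are ordered left-jaw < nose < right-jaw in x.
# At 70 degrees yaw the right jaw corner projects to the left of the nose
# and has negative depth: it is on the far side of the head (self-occluded).
```

The same effect is what makes a fixed 2D landmark scheme ambiguous at large poses: the occluded contour point either disappears or is re-annotated on the visible silhouette, losing 3D correspondence.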
To enable alignment that is robust to head rotation and depth variation, 3D imaging and alignment have been explored. 3D alignment, however, requires special imaging sensors, or multiple images and controlled illumination. When these requirements cannot be met, which is common, 3D alignment from 2D video or images has been proposed as a potential solution.
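A minimal sketch of the 3D-from-2D idea (a generic textbook formulation, not any particular challenge entry): given a known 3D landmark configuration and its observed 2D projections, the head rotation can be recovered by fitting an affine camera in the least-squares sense and snapping it to the nearest scaled-orthographic camera via SVD. The shape coordinates and function names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical mean 3D face shape: a few non-coplanar landmarks (N x 3),
# centered at the origin. A real system would use a learned 3D shape model.
shape_3d = np.array([[-5.0,  0.0, -2.0],   # left jaw corner
                     [ 5.0,  0.0, -2.0],   # right jaw corner
                     [ 0.0,  1.0,  3.0],   # nose tip
                     [ 0.0, -4.0,  1.0]])  # chin

def fit_orthographic_pose(shape_3d, landmarks_2d):
    """Estimate head rotation from 2D landmarks and a known 3D shape.

    Solves for the 2x3 affine camera M minimizing
    ||landmarks_2d - shape_3d @ M.T||, then projects M to the nearest
    matrix with orthonormal rows via SVD. Returns the full 3x3 rotation
    (third row completed by the cross product of the first two).
    """
    M, *_ = np.linalg.lstsq(shape_3d, landmarks_2d, rcond=None)
    M = M.T                        # 2x3 affine camera
    U, _, Vt = np.linalg.svd(M)    # nearest orthonormal rows
    R2 = U @ np.eye(2, 3) @ Vt
    return np.vstack([R2, np.cross(R2[0], R2[1])])
```

With noise-free orthographic projections of a non-coplanar shape, this recovers the true rotation exactly; real systems must additionally handle landmark noise, shape variation, and perspective effects.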
This workshop addresses the increasing interest in 3D alignment from 2D images. This topic is germane to both computer vision and multimedia communities. For computer vision, it is an exciting approach to longstanding limitations of 2D approaches. For multimedia, 3D alignment enables more powerful applications.
3DFAW is intended to bring together computer vision and multimedia researchers whose work is related to 2D or 3D face alignment. We are soliciting original contributions that address a wide range of theoretical and application issues of 3D face alignment for computer vision and multimedia applications, including but not limited to:
The 3DFAW Challenge evaluates 3D face alignment methods on a large, diverse corpus of multi-view face images annotated with 3D information. The corpus includes images obtained under a range of conditions, from highly controlled to in-the-wild:
Figure 1. Top: an example of a MultiPIE recording annotated with 3D ground truth. Bottom: time-slice images used to estimate the 3D shape of the face.
All three sources have been annotated in a consistent way, and 3D meshes with large errors were eliminated. Participants in the 3DFAW Challenge will receive an annotated training set and an unannotated validation set.
To participate in the challenge, please go to the 3DFAW Challenge on CodaLab.
|9:00-9:10||Welcome and opening|

Invited Talk 1: Theo Gevers and Roberto Valenti

Main track:
- 3D Face Alignment without Correspondences (Zsolt Santa, Zoltan Kato)
- Bi-Level Multi-Column Convolutional Neural Networks for Facial Landmark Point Detection (Yanyu Xu, Shenghua Gao)
- Fully Automated and Highly Accurate Dense Correspondence for Facial Surfaces (Carl Martin Grewe, Stefan Zachow)
- Joint Face Detection and Alignment with a Deformable Hough Transform Model (John McDonagh, Georgios Tzimiropoulos)

Invited Talk 2: Probabilistic Morphable Models

3DFAW Challenge data overview

Challenge track:
- 3D Face Alignment in the Wild: A Landmark-Free, Nose-Based Approach (Flavio B. Zavan, Antônio Nascimento, Luan Silva, Olga Bellon, Luciano Silva)
- Shape Augmented Regression for 3D Face Alignment (Chao Gou, Yue Wu, Fei-Yue Wang, Qiang Ji)
- Fast and Precise Face Alignment and 3D Shape Reconstruction from a Single 2D Image (Ruiqi Zhao, Yan Wang, Fabian Benitez-Quiroz, Yaojie Liu, Aleix Martinez)
- Two-stage Convolutional Part Heatmap Regression for the 1st 3D Face Alignment in the Wild (3DFAW) Challenge (Adrian Bulat, Georgios Tzimiropoulos)
Gross, R., Matthews, I., Cohn, J., Kanade, T., and Baker, S. (2010). Multi-PIE. Image and Vision Computing, 28(5), 807-813.
Zhang, X., Yin, L., Cohn, J. F., Canavan, S., Reale, M., Horowitz, A., Liu, P., and Girard, J. M. (2014). BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing, 32(10), 692-706.
Jeni, L. A., Cohn, J. F., and Kanade, T. (2015). Dense 3D face alignment from 2D videos in real-time. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on (Vol. 1, pp. 1-8). IEEE.