1st Workshop on 3D Face Alignment in the Wild (3DFAW) & Challenge

In conjunction with ECCV 2016, Amsterdam, The Netherlands


Over the past 15 years, interest in automated face alignment has grown within the computer vision and machine learning communities. Face alignment – the problem of automatically locating detailed facial landmarks across different subjects, illuminations, and viewpoints – is critical to all face analysis applications, such as identification and facial expression and action unit analysis, and to many human-computer interaction and multimedia applications.

The most common approach is 2D alignment, which treats the face as a 2D object. This assumption holds as long as the face is frontal and approximately planar. As the face rotates away from frontal, however, the assumption breaks down: 2D annotated points lose correspondence, and pose variation produces self-occlusion that confounds landmark annotation.

To enable alignment that is robust to head rotation and depth variation, 3D imaging and alignment have been explored. 3D alignment, however, requires either special imaging sensors or multiple images and controlled illumination. When these requirements cannot be met, which is common, 3D alignment from 2D video or images has been proposed as a potential solution.

This workshop addresses the increasing interest in 3D alignment from 2D images, a topic germane to both the computer vision and multimedia communities. For computer vision, it offers an exciting way past longstanding limitations of 2D approaches. For multimedia, 3D alignment enables more powerful applications.

3DFAW is intended to bring together computer vision and multimedia researchers whose work is related to 2D or 3D face alignment. We are soliciting original contributions that address a wide range of theoretical and application issues of 3D face alignment for computer vision and multimedia, including but not limited to:

  • 3D and 2D face alignment from 2D images
  • Model- and stereo-based 3D face reconstruction
  • Dense and sparse face tracking from 2D and 3D inputs
  • Applications of face alignment
  • Face alignment for embedded and mobile devices
  • Facial expression retargeting (avatar animation)
  • Face alignment-based user interfaces

The 3DFAW Challenge evaluates 3D face alignment methods on a large, diverse corpus of multi-view face images annotated with 3D information. The corpus includes images obtained under a range of conditions, from highly controlled to in-the-wild:

  • Multi-view images from MultiPIE [1]
  • Synthetically rendered images from the BP4D-Spontaneous dataset [2]
  • "In-the-wild" images and videos collected from the Internet, including 3DTV content and time-slice videos captured using camera arrays. The depth information has been recovered using a novel dense model-based Structure from Motion technique [3]

Examples from the 3DFAW corpus

Figure 1. Top: an example of a MultiPIE recording annotated with 3D ground truth. Bottom: time-slice images used to estimate the 3D shape of the face.

All three sources have been annotated in a consistent way, and 3D meshes with large errors were eliminated. Participants in the 3DFAW Challenge will receive an annotated training set and a validation set without annotations.

To participate in the challenge, please go to the 3DFAW Challenge on CodaLab.

To obtain the data, please download, fill out, and sign the data license agreement and send it back to the organizers of the challenge.

Program

9:00-9:10 Welcome and opening
9:10-10:10 Invited Talk 1

Theo Gevers and Roberto Valenti

10:10-10:40 Coffee break
10:40-12:00

Main track

3D Face Alignment without Correspondences

Zsolt Santa, Zoltan Kato

Bi-Level Multi-Column Convolutional Neural Networks for Facial Landmark Point Detection

Yanyu Xu, Shenghua Gao

Fully Automated and Highly Accurate Dense Correspondence for Facial Surfaces

Carl Martin Grewe, Stefan Zachow

Joint Face Detection and Alignment with a Deformable Hough Transform Model

John McDonagh, Georgios Tzimiropoulos

12:00-13:30 Lunch
13:30-14:30

Invited Talk 2

Probabilistic Morphable Models

Thomas Vetter

14:30-14:45 Coffee break
14:45-16:15

Challenge track

3DFAW Challenge data overview

3D Face Alignment in the Wild: A Landmark-Free, Nose-Based Approach

Flavio B Zavan, Antônio Nascimento, Luan Silva, Olga Bellon, Luciano Silva

Shape Augmented Regression for 3D Face Alignment

Chao Gou, Yue Wu, Fei-Yue Wang, Qiang Ji

Fast and Precise Face Alignment and 3D Shape Reconstruction from a Single 2D Image

Ruiqi Zhao, Yan Wang, Fabian Benitez-Quiroz, Yaojie Liu, Aleix Martinez

Two-stage Convolutional Part Heatmap Regression for the 1st 3D Face Alignment in the Wild (3DFAW) Challenge

Adrian Bulat and Georgios Tzimiropoulos

16:15-16:30 Closing remarks

Organizers

  • Simon Lucey, Carnegie Mellon University, USA
  • Sergio Escalera, University of Barcelona, Spain
  • Yoichi Sato, University of Tokyo, Japan
  • Gabor Szirtes, RealEyes Inc
  • Jason Saragih, Oculus Inc
  • Qiang Ji, Rensselaer Polytechnic Institute, USA
  • Michel Valstar, University of Nottingham, UK
  • Abhinav Dhall, Australian National University, Australia
  • Roland Goecke, University of Canberra, Australia
  • Jixu Chen, Magic Leap, USA
  • Enver Sangineto, University of Trento, Italy
  • Xiaoming Liu, Michigan State University, USA
  • Kun Zhou, Zhejiang University, China
  • Hatice Gunes, University of Cambridge, UK
  • Dimitris Metaxas, Rutgers University, USA
  • Volker Blanz, University of Siegen, Germany

[1] Gross, R., Matthews, I., Cohn, J., Kanade, T., and Baker, S. (2010). Multi-PIE. Image and Vision Computing, 28(5), 807-813.

[2] Zhang, X., Yin, L., Cohn, J. F., Canavan, S., Reale, M., Horowitz, A., Liu, P., and Girard, J. M. (2014). BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing, 32(10), 692-706.

[3] Jeni, L. A., Cohn, J. F., and Kanade, T. (2015). Dense 3D face alignment from 2D videos in real-time. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on (Vol. 1, pp. 1-8). IEEE.