Tag Archives: 3D face recognition

2D/3D Face Recognition

In this project, started in collaboration with the IRIS Computer Vision lab, University of Southern California, we address the problem of 2D/3D face recognition with a gallery containing 3D models of enrolled subjects and a probe set composed only of 2D imagery with pose variations. A raw 3D model is available in the gallery for each person, where each model comprises the facial shape as a 3D mesh and a 2D component as a texture registered with the shape; on the other hand, only 2D images are assumed to be available in the probe set.

2D/3D face recognition dataset

Facial shape as a 3D mesh and a 2D component as a texture registered with the shape
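For concreteness, the asymmetry between gallery and probe data can be sketched as follows; this is only a minimal illustration, and the class and field names are our own assumptions rather than the project's actual data format:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GalleryModel:
    """One enrolled subject: a raw 3D shape plus a texture registered to it."""
    subject_id: str
    vertices: np.ndarray   # (N, 3) 3D mesh vertex positions
    faces: np.ndarray      # (M, 3) triangle vertex indices
    texture: np.ndarray    # (H, W, 3) texture image
    uv: np.ndarray         # (N, 2) per-vertex texture coordinates (the registration)

@dataclass
class ProbeImage:
    """One probe sample: only a 2D image, possibly far from frontal."""
    subject_id: str
    image: np.ndarray      # (H, W, 3) 2D image
    yaw: float             # approximate head pose in degrees, e.g. within ±45
```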

This scenario, as defined, is an ill-posed problem, given the gap between the kind of information present in the gallery and that available in the probe set.

In the experimental results we evaluate both the reconstruction of the 3D shape estimated from multiple 2D images and the face recognition pipeline, considering a range of facial poses in the probe set up to ±45 degrees.
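As a rough illustration of how such an evaluation can be scored, the sketch below computes rank-1 identification accuracy per pose bin from a probe-by-gallery similarity matrix; the function name, bin edges, and inputs are hypothetical and not tied to the actual pipeline:

```python
import numpy as np

def rank1_per_pose_bin(similarity, probe_ids, gallery_ids, probe_yaw,
                       bins=((-45, -15), (-15, 15), (15, 45))):
    """Rank-1 identification accuracy grouped by probe yaw (degrees).

    similarity  : (num_probes, num_gallery) matrix, higher = more similar
    probe_ids   : subject label of each probe image
    gallery_ids : subject label of each gallery model
    probe_yaw   : yaw angle of each probe image, in degrees
    """
    probe_ids, gallery_ids = np.asarray(probe_ids), np.asarray(gallery_ids)
    probe_yaw = np.asarray(probe_yaw)
    # Rank-1 match: the gallery entry with the highest similarity for each probe.
    best = gallery_ids[np.argmax(similarity, axis=1)]
    correct = best == probe_ids
    accuracy = {}
    for lo, hi in bins:
        mask = (probe_yaw >= lo) & (probe_yaw < hi)
        if mask.any():
            accuracy[(lo, hi)] = float(correct[mask].mean())
    return accuracy
```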

Future directions include investigating a method able to fuse 3D face modeling with the face recognition technique developed, while accounting for pose variations.

Recognition results

Results: baseline vs. our approach

This work was conducted by Iacopo Masi during his internship in 2012/2013 at the IRIS Computer Vision lab, University of Southern California.

3D Face Recognition

In this research, we present a novel approach to 3D face matching that is highly effective at distinguishing facial differences between distinct individuals from differences induced by non-neutral expressions within the same individual. We present an extensive comparative evaluation of performance on the FRGC v2.0 and SHREC08 datasets.

3D face recognition

The approach takes into account geometrical information of the 3D face and encodes the relevant information into a compact representation in the form of a graph. Nodes of the graph represent equal-width iso-geodesic facial stripes. Arcs between pairs of nodes are labeled with descriptors, referred to as 3D Weighted Walkthroughs (3DWWs), that capture the mutual relative spatial displacement between all the pairs of points of the corresponding stripes. Face partitioning into iso-geodesic stripes and 3DWWs together provide an approximate representation of the local morphology of faces that exhibits smooth variations for changes induced by facial expressions. The graph-based representation permits very efficient matching for face recognition and is also well suited to face identification in very large datasets with the support of appropriate index structures. The method obtained the best ranking at the SHREC 2008 contest for 3D face recognition.
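The 3DWW descriptors themselves integrate relative displacements over all point pairs of two stripes and are beyond a short snippet; the sketch below only illustrates the stripe partitioning and the resulting graph layout, using edge-path shortest distances as an approximation of geodesics and a placeholder edge attribute in place of real 3DWWs. The helper name and the use of scipy/networkx are our own assumptions.

```python
import numpy as np
import networkx as nx
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def isogeodesic_stripe_graph(vertices, faces, nose_tip_idx, n_stripes=6):
    """Partition a face mesh into equal-width iso-geodesic stripes centered on
    the nose tip, and build a complete graph with one node per stripe.
    Assumes a connected mesh in which every stripe ends up non-empty."""
    # Unique mesh edges with Euclidean lengths as weights.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    adj = coo_matrix((lengths, (edges[:, 0], edges[:, 1])),
                     shape=(len(vertices), len(vertices)))

    # Approximate geodesic distance from the nose tip as shortest edge paths.
    geo = dijkstra(adj, directed=False, indices=nose_tip_idx)
    geo = np.where(np.isfinite(geo), geo, np.nanmax(geo[np.isfinite(geo)]))

    # Equal-width stripes: vertex -> stripe index in [0, n_stripes).
    stripe_of = np.minimum((geo / geo.max() * n_stripes).astype(int), n_stripes - 1)

    # One node per stripe; every pair of stripes gets an arc whose label here is
    # just the centroid offset, a stand-in for the real 3DWW descriptor.
    graph = nx.complete_graph(n_stripes)
    centroids = [vertices[stripe_of == s].mean(axis=0) for s in range(n_stripes)]
    for a, b in graph.edges:
        graph.edges[a, b]["descriptor"] = centroids[b] - centroids[a]
    return graph, stripe_of
```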

SIFTPose: local pose estimation from a single scale invariant keypoint

The aim of this project is to develop a new method for estimating the poses of imaged scene surfaces, provided that they can be locally approximated by their tangent planes. Our approach performs an accurate direct estimation by exploiting the robustness of the scale invariant feature transform (SIFT). The results are representative of the state of the art for this challenging task.

Local pose estimation from a single scale invariant keypoint

Retrieving the poses of keypoints, in addition to matching them, is an essential task in many computer-vision applications for transforming unconstrained problems into constrained ones. This project proposes a new method for estimating the poses of regions around keypoints, provided that they can be considered locally planar. While this has previously been attempted by adapting iterative algorithms developed for template matching, no explicit accurate direct estimation has been introduced before. Our approach simultaneously learns the “nuisance residual” structure present in the detection and description steps of the SIFT algorithm, allowing local perspective properties of distinctive features to be recovered through a homography. The system is trained using synthetic images generated from a single reference view of the surface.
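The learning of the nuisance residual is not reproduced here. As a rough sketch of the surrounding machinery only, the snippet below generates synthetic views of a single reference image with random homographies and recovers each warp with conventional multi-keypoint SIFT matching plus RANSAC (OpenCV assumed); the project's method, by contrast, recovers the pose directly from a single keypoint.

```python
import cv2
import numpy as np

def random_homography(h, w, jitter=0.25, rng=None):
    """Sample a homography by randomly perturbing the four image corners."""
    rng = rng or np.random.default_rng()
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = (src + rng.uniform(-jitter, jitter, src.shape) * [w, h]).astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)

def synthetic_view_homographies(reference, n_views=20):
    """Warp a single reference view with random homographies, then recover each
    warp via SIFT matching + RANSAC (a conventional multi-keypoint recovery)."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    if des_ref is None:
        return []
    h, w = reference.shape[:2]
    pairs = []
    for _ in range(n_views):
        H_true = random_homography(h, w)
        view = cv2.warpPerspective(reference, H_true, (w, h))
        kp, des = sift.detectAndCompute(view, None)
        if des is None:
            continue
        good = [m[0] for m in matcher.knnMatch(des_ref, des, k=2)
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # ratio test
        if len(good) < 4:
            continue
        src = np.float32([kp_ref[m.queryIdx].pt for m in good])
        dst = np.float32([kp[m.trainIdx].pt for m in good])
        H_est, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H_est is not None:
            pairs.append((H_true, H_est))
    return pairs
```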

The method produces an accurate, detailed, and fine-grained set of local poses, which can also be applied to non-rigid surfaces. In particular, the accuracy and robustness of the method are representative of the state of the art for this challenging task. At present, we are investigating the application of the estimated homographies to building a pose-invariant descriptor for 3D face recognition.
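One plausible way to use an estimated local homography for a pose-invariant descriptor, sketched here under our own assumptions rather than as the project's actual design, is to warp the locally planar region back to a fronto-parallel view before describing it:

```python
import cv2
import numpy as np

def rectified_descriptor(image, H, size=64):
    """Undo the local perspective warp and describe the rectified patch.

    Assumes H maps fronto-parallel patch coordinates in [0, size) x [0, size)
    to image coordinates, as a homography estimated for that local region.
    """
    # warpPerspective samples the source through the inverse of the given matrix,
    # so passing inv(H) produces the fronto-parallel (reference-frame) patch.
    patch = cv2.warpPerspective(image, np.linalg.inv(H), (size, size))
    sift = cv2.SIFT_create()
    # Single central keypoint, small enough that the SIFT window stays in the patch.
    center = cv2.KeyPoint(size / 2.0, size / 2.0, size / 12.0)
    _, descriptor = sift.compute(patch, [center])
    return patch, descriptor
```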