Florence Superface Dataset (UF-S)
The University of Florence Superface (UF-S) dataset comprises low-resolution and high-resolution 3D scans, collected to support research on 3D face recognition solutions that use scans at different resolutions. The dataset currently includes 20 subjects, and enrollment is ongoing. For each subject, the dataset includes: (i) A 2D/3D video sequence acquired with the Microsoft Kinect. During capture, subjects sit in front of the camera with the face at a distance of approximately 80 cm from the sensor. Subjects are also asked to slightly rotate the head around the yaw axis, up to an angle of about 60-70 degrees, so that both the left and right sides of the face are visible to the sensor. The resulting video sequences last approximately 10 to 15 seconds. Videos are released as sequences of depth (16-bit) and RGB (24-bit) frames in PNG format; (ii) A 3D high-resolution face scan acquired with the 3dMD scanner: a 3D mesh with about 40,000 vertices and 80,000 facets, plus a stereo texture image with a resolution of 3341 x 2027 pixels. The mesh geometry is highly accurate, with an average RMS error of about 0.2 mm or better (VRML format).
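As an illustration, the 16-bit depth frames can be loaded with any standard PNG reader. The following is a minimal Python sketch; the file name is hypothetical, and the millimetre depth unit (typical of the Kinect) is an assumption, not a documented specific of the release:

```python
import numpy as np
from PIL import Image

def load_depth_frame(path):
    """Load a 16-bit single-channel PNG depth frame as floats in metres.

    Assumes Kinect-style conventions: raw values in millimetres (an
    assumption, not documented by the dataset) and 0 marking invalid pixels.
    """
    depth_raw = np.array(Image.open(path), dtype=np.uint16)
    depth_m = depth_raw.astype(np.float32) / 1000.0
    depth_m[depth_raw == 0] = np.nan  # Kinect reports 0 where depth is invalid
    return depth_m

# Round-trip demo with a synthetic frame (no dataset files needed):
frame = np.full((480, 640), 800, dtype=np.uint16)  # ~80 cm, the capture distance
frame[0, 0] = 0                                    # one invalid pixel
Image.fromarray(frame).save("demo_depth.png")
depth = load_depth_frame("demo_depth.png")
```

The invalid-pixel masking matters in practice: treating zero depth as a real measurement would place points at the sensor origin.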
Note: The dataset can be freely downloaded and used for research (non-profit) purposes from the project main page [UF-S]
Works that use this dataset must reference the following paper: S. Berretti, A. Del Bimbo, P. Pala. "Superfaces: A Super-resolution Model for 3D Faces", Fifth Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment (NORDIA’12), in conjunction with the European Conference on Computer Vision (ECCV) 2012, pp.73-82, Florence, October 7, 2012
Florence 3D Actions Dataset (UF-Action 3D)
The dataset was collected at the University of Florence during 2012. It includes video sequences acquired with the Kinect RGB-D camera. The dataset covers 9 different activities: wave, drink from a bottle, answer phone, clap, tight lace, sit down, stand up, read watch, bow. During acquisition, 10 subjects were asked to perform the above actions two or three times each. This resulted in a total of 215 activity samples.
Note: The dataset can be freely downloaded for research (non-profit) applications from the project main page [UF-Action 3D]
Works that use this dataset must reference the following paper: L. Seidenari, V. Varano, S. Berretti, P. Pala, A. Del Bimbo. "Recognizing Actions from Depth Cameras as Weakly Aligned Multi-Part Bag-of-Poses," International Workshop on Human Activity Understanding from 3D Data (HAU3D'13), in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2013, pp.479-485, Portland, Oregon, USA, June 24, 2013
mesh-LBP Matlab code
This code computes LBP-like descriptors on a triangular mesh manifold while keeping the simplicity and elegance of the original LBP concept.
Note: The code is available at MATLAB Central File Exchange [code]
Works that use this code must reference the following paper, where full details about the mesh-LBP concept and method are given: N. Werghi, S. Berretti, A. Del Bimbo, "The Mesh-LBP: A Framework for Extracting Local Binary Patterns From Discrete Manifolds," IEEE Transactions on Image Processing, vol.24, no.1, pp.220-235, January 2015
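For context, the classical 2D LBP that mesh-LBP generalizes to mesh manifolds can be sketched in a few lines. This is a simplified 8-neighbour, radius-1 image variant for illustration only; the mesh formulation in the paper above replaces the pixel neighbourhood with rings of facets around a central facet:

```python
import numpy as np

def lbp_8_1(img):
    """Classical LBP with 8 neighbours at radius 1: each neighbour sets one
    bit of an 8-bit code when it is >= the centre pixel. Border pixels,
    which lack a full neighbourhood, are dropped."""
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbours visited clockwise starting from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << k
    return out

# A flat patch yields code 255 (all bits set); an isolated bright centre yields 0.
flat = lbp_8_1(np.zeros((3, 3), dtype=np.uint8))
peak = lbp_8_1(np.array([[0, 0, 0], [0, 9, 0], [0, 0, 0]], dtype=np.uint8))
```

The appeal of the operator, preserved by the mesh extension, is that the code depends only on sign comparisons with the centre, making it invariant to monotonic intensity changes.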
Florence 2D/3D Face Dataset (UF-3D)
A new face dataset under construction at the Media Integration and Communication Center (MICC) of the University of Florence. The dataset consists of high-resolution 3D face scans of each subject, along with several video sequences of varying resolution and zoom level. Each subject is recorded in a controlled setting in HD video, then in a less-constrained (but still indoor) setting using a standard PTZ surveillance camera, and finally in an unconstrained, outdoor environment with challenging conditions. In each sequence the subject is recorded at three levels of zoom. This dataset is being constructed specifically to support research on techniques that bridge the gap between 2D, appearance-based recognition techniques and fully 3D approaches. It is designed to simulate, in a controlled fashion, realistic surveillance conditions and to probe the efficacy of exploiting 3D models in real scenarios.
Note: The dataset can be downloaded upon request. Please refer to the project main page [UF-3D]