About Lorenzo Seidenari

I’m currently a PhD student at the University of Florence. My research focuses on applications of pattern recognition and machine learning to computer vision, specifically in the field of human activity recognition.

THUMOS large scale action recognition challenge: MICC ranked #2

Thumos ICCV Workshop on Action Recognition with a Large Number of Classes

MICC ranked #2 in the 2013 THUMOS large-scale action recognition challenge. The THUMOS challenge is part of the First International Workshop on Action Recognition with a Large Number of Classes. Its objective is to address, for the first time, the task of large-scale action recognition: 101 action classes appearing in a total of 13,320 video clips extracted from YouTube.

For this competition we built a bag-of-features pipeline based on a variety of features extracted from both video and keyframe modalities. In addition to the quantized, hard-assigned features provided by the organizers, we extracted local HOG and Motion Boundary Histogram (MBH) descriptors aligned with dense trajectories in the video to capture motion, and encoded them as Fisher vectors.
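A minimal sketch of the Fisher vector encoding step, not the contest implementation: local descriptors (here random stand-ins for MBH/HOG descriptors) are encoded as gradients of a diagonal-covariance GMM with respect to its means and variances, then power- and L2-normalized, as is standard for Fisher vectors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors as a Fisher vector
    (gradients w.r.t. GMM means and variances, diagonal covariance)."""
    T = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)          # (T, K) posteriors
    w, mu = gmm.weights_, gmm.means_                # (K,), (K, D)
    sigma = np.sqrt(gmm.covariances_)               # (K, D) std. deviations
    parts = []
    for k in range(len(w)):
        diff = (descriptors - mu[k]) / sigma[k]     # whitened residuals, (T, D)
        g = gamma[:, k:k + 1]
        grad_mu = (g * diff).sum(axis=0) / (T * np.sqrt(w[k]))
        grad_sig = (g * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w[k]))
        parts.extend([grad_mu, grad_sig])
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)        # L2 normalization

# toy usage with synthetic 8-D descriptors and a 4-component GMM
rng = np.random.default_rng(0)
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(rng.normal(size=(500, 8)))
fv = fisher_vector(rng.normal(size=(100, 8)), gmm)  # 2 * K * D = 64 dims
```

One linear classifier per feature channel can then be trained on these fixed-length vectors, whose scores the late-fusion stage combines.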

To represent action-specific scene context, we computed local SIFT pyramids on grayscale (P-SIFT) and opponent-color (P-OSIFT) keyframes, taking the central frame of each clip as the keyframe. All these features feed a bag-of-features pipeline that uses late fusion to combine the scores of the individual classifiers. We further applied two complementary techniques that improve on this late-fusion baseline. First, we improved accuracy by stacking classifier outputs with L1-regularized logistic regression (L1LRS). Second, we showed how a Conditional Random Field (CRF) can perform transductive labeling of the test samples to further improve classification performance. Using our features we improve on those provided by the contest organizers by 8%, and after incorporating L1LRS and the CRF by more than 11%, reaching a final classification accuracy of 85.7%.
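The stacking step can be sketched as follows; this is an illustrative toy, not the contest code. The per-channel classifier scores become the input features of an L1-regularized logistic regression, whose sparsity-inducing penalty shrinks the weights of uninformative channels toward zero (the simulated scores and noise levels below are assumptions for the demo).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_channels = 400, 6
y = rng.integers(0, 2, size=n)

# simulated per-channel confidence scores for the positive class;
# later channels are progressively noisier
scores = np.column_stack([
    y + rng.normal(scale=1.0 + 0.5 * i, size=n) for i in range(n_channels)
])

# L1-regularized logistic regression stacker (the L1LRS idea):
# sparse weights over the individual classifier outputs
stacker = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
stacker.fit(scores[:300], y[:300])
acc = stacker.score(scores[300:], y[300:])
```

Inspecting `stacker.coef_` shows how the regularizer down-weights the noisier channels, which is what makes stacking robust to weak feature channels.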

Kinect hand tracking and pose recognition

In the BSc thesis project of Lorenzo Usai we exploited the OpenNI library together with the NITE middleware to track the hands of multiple users. The depth imagery allowed us to obtain a precise segmentation of the users' hands.

Segmented RGB hand images are normalized with respect to orientation, and a fast descriptor based on an adaptation of SURF features is extracted. We trained an SVM classifier on ~31,000 images of 8 different subjects to recognize hand poses (open/closed).
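The classification step amounts to a standard two-class SVM over per-image descriptors. A minimal sketch, with synthetic 64-D vectors standing in for the SURF-based hand descriptors (the dimensionality and class separation are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic stand-ins for SURF-like descriptors of the two hand poses
open_hand = rng.normal(loc=0.0, size=(200, 64))
closed_hand = rng.normal(loc=0.8, size=(200, 64))
X = np.vstack([open_hand, closed_hand])
y = np.array([0] * 200 + [1] * 200)          # 0 = open, 1 = closed

# RBF-kernel SVM; train on even-indexed samples, test on odd-indexed ones
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

In the real system the held-out subjects, rather than a random split, would determine the test set, so the reported accuracy reflects generalization to unseen users.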

A Kalman filter at the end of our recognition pipeline smooths the predictions, removing the spikes caused by rare, isolated failures of the hand pose classifier. The resulting recognition system runs at 15 frames per second and reaches an accuracy of 97.97% (measured on data independent of the training set).
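The smoothing idea can be sketched with a scalar Kalman filter over the classifier's per-frame confidence; the constant-state model and the noise variances below are assumptions for illustration, not the thesis parameters.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.1):
    """Scalar Kalman filter with a constant-state model x_t = x_{t-1} + w,
    measurement z_t = x_t + v; q and r are process/measurement variances."""
    x, p = z[0], 1.0
    out = [x]
    for zt in z[1:]:
        p = p + q                 # predict: state unchanged, variance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (zt - x)      # update toward the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# per-frame "open hand" score with a single spurious misclassification
score = np.array([1.0] * 10 + [0.0] + [1.0] * 10)
smoothed = kalman_smooth(score)
```

Thresholding `smoothed` at 0.5 keeps the label stable through the isolated failure, while a genuine pose change (a sustained run of low scores) would still pull the estimate across the threshold after a few frames.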

FAST – Sobel real-time low-level feature extraction with a surveillance camera

The growing mobility of people and goods carries a very high societal cost every year, in terms of traffic congestion, fatalities, and injuries. Managing a road network requires efficient assessment methods at minimal cost.

Road monitoring is a key part of road management, especially for safety, optimal traffic flow, and the study of new sustainable transport patterns. Current video-based monitoring systems make suboptimal use of the network and are difficult to extend efficiently.

The ORUSSI project focuses on road monitoring through a network of roadside sensors (mainly cameras) that can be dynamically deployed and added to the surveillance system in an efficient way.

The main objective of the project is to develop an optimized platform offering innovative real-time media (video and data) applications for road monitoring in real scenarios. We exploit efficient low-level image features to enable our distributed system to extract semantic information from the imagery and to adaptively optimize video compression.
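As a sketch of the Sobel part of the low-level feature extraction (the FAST corner detector is omitted here), the gradient magnitude of each frame can be computed with two small convolutions; this is a generic illustration, not the project code, and assumes a grayscale frame as a NumPy array.

```python
import numpy as np
from scipy.ndimage import convolve

# horizontal Sobel kernel; its transpose gives the vertical one
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def sobel_magnitude(frame):
    """Per-pixel gradient magnitude of a grayscale frame via Sobel filters."""
    gx = convolve(frame, SOBEL_X, mode="nearest")    # horizontal gradient
    gy = convolve(frame, SOBEL_X.T, mode="nearest")  # vertical gradient
    return np.hypot(gx, gy)

# toy frame: a vertical step edge between a dark and a bright region
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
mag = sobel_magnitude(frame)   # high response along the edge, zero elsewhere
```

Such gradient maps are cheap to compute per frame, which is what makes them usable both as a cue for semantic analysis and as a saliency signal for adaptive compression.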