Wearable Smart Audio Guide featured on TechCrunch

TechCrunch covered our system while it was being presented as a live demo at ACM MM 2016 in Amsterdam.

Our smart audio guide is backed by a computer vision system capable of working in real time on a mobile device, coupled with audio and motion sensors. We propose the use of a compact Convolutional Neural Network (CNN) that performs object classification and localization. Using the same CNN features computed for these tasks, we also perform robust artwork recognition. To improve the recognition accuracy, we apply additional video processing using shape-based filtering, artwork tracking, and temporal filtering. The system has been deployed on an NVIDIA Jetson TK1 and an NVIDIA Shield Tablet K1, and tested in a real-world environment (the Bargello Museum in Florence).
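As an illustration of the temporal-filtering step, a common approach is to smooth the per-frame recognition output with a sliding-window majority vote, so that a single misclassified frame does not change the artwork announced to the visitor. The sketch below is a minimal, hypothetical example of that idea (the function name, window size, and artwork labels are illustrative, not taken from our implementation):

```python
from collections import Counter, deque

def temporal_filter(frame_predictions, window=5, min_votes=3):
    """Smooth per-frame artwork predictions with a sliding majority vote.

    frame_predictions: iterable of artwork labels (or None when no
    artwork is recognized), one per video frame.
    Yields the filtered label for each frame, or None when no label
    collects at least min_votes inside the window.
    """
    recent = deque(maxlen=window)
    for pred in frame_predictions:
        recent.append(pred)
        label, votes = Counter(recent).most_common(1)[0]
        yield label if label is not None and votes >= min_votes else None

# A spurious single-frame misclassification ("David") is suppressed:
stream = ["Bacchus", "Bacchus", "David", "Bacchus", "Bacchus", "Bacchus"]
print(list(temporal_filter(stream, window=5, min_votes=3)))
# → [None, None, None, 'Bacchus', 'Bacchus', 'Bacchus']
```

The same windowed voting can be applied after artwork tracking, so the vote is taken over predictions for one tracked region rather than over the whole frame.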
