
Mnemosyne: smart environments for cultural heritage

Mnemosyne is a research project carried out by the Media Integration and Communication Center (MICC), University of Florence, together with Thales Italy SpA, and funded by the Tuscany Region. The goal of the project is the study and experimentation of smart environments that adopt natural interaction paradigms to promote artistic and cultural heritage through the analysis of visitors’ behaviors and activities.

Mnemosyne Interactive Table at the Museum of Bargello

The idea behind this project is to use techniques derived from video surveillance to design an automatic profiling system capable of understanding the personal interests of each visitor. The computer vision system monitors and analyzes the movements and behaviors of visitors in the museum (through fixed cameras) in order to extract a profile of interests for each visitor.
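
As a rough illustration of this profiling step, the sketch below accumulates the time each tracked visitor spends near each artwork into an interest profile. The zone layout, track format and attention threshold are hypothetical assumptions made for the example, not details taken from the deployed system.

```python
import numpy as np

# Hypothetical floor-plan positions (meters) of artworks in the hall.
ARTWORK_ZONES = {
    "david": np.array([2.0, 1.5]),
    "marzocco": np.array([5.0, 3.0]),
    "attis": np.array([8.0, 1.0]),
}
ATTENTION_RADIUS = 1.5  # a visitor within this distance is "attending" the artwork

def interest_profile(track, fps=10):
    """Accumulate dwell time (seconds) near each artwork for one visitor track.

    `track` is a sequence of (x, y) floor positions produced by the tracker,
    sampled at `fps` frames per second.
    """
    profile = {name: 0.0 for name in ARTWORK_ZONES}
    for position in track:
        for name, center in ARTWORK_ZONES.items():
            if np.linalg.norm(np.asarray(position) - center) < ATTENTION_RADIUS:
                profile[name] += 1.0 / fps
    return profile

# Example: a visitor who lingers near the first artwork, then moves on.
track = [(2.1, 1.4)] * 300 + [(5.2, 2.9)] * 50
print(interest_profile(track))  # roughly 30 s near "david", 5 s near "marzocco"
```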

This interest profile is then used to personalize the delivery of in-depth multimedia content, enabling an augmented museum experience. Visitors interact with the multimedia content through a large interactive table installed inside the museum. The project also includes the integration of mobile devices (such as smartphones or tablets), offering a take-away summary of the visitor’s experience and suggesting possible theme-related paths in the museum’s collection or in other places of the city.

The system operates in full respect of visitors’ privacy: the cameras and the vision system capture only information about the appearance of each visitor, such as the color and texture of their clothes. The appearance of the visitor is encoded into a feature vector that captures its most distinctive elements; these feature vectors are then compared with each other to re-identify each visitor.
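
A minimal sketch of this kind of appearance-based re-identification is given below: each visitor crop is reduced to a normalized color histogram, and a new observation is matched against the closest stored descriptor. The histogram choice, distance metric and threshold are illustrative assumptions; the actual system uses its own, richer appearance descriptors.

```python
import numpy as np

def appearance_descriptor(crop, bins=8):
    """Encode a person crop (H x W x 3, RGB, uint8) as a normalized color histogram."""
    hist, _ = np.histogramdd(
        crop.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)

def reidentify(descriptor, gallery, threshold=0.5):
    """Return the id of the closest stored visitor, or None if nobody is close enough.

    `gallery` maps visitor ids to previously stored descriptors.
    """
    best_id, best_dist = None, np.inf
    for visitor_id, stored in gallery.items():
        dist = np.linalg.norm(descriptor - stored)
        if dist < best_dist:
            best_id, best_dist = visitor_id, dist
    return best_id if best_dist < threshold else None

# Example: register one visitor, then re-identify a slightly different observation.
rng = np.random.default_rng(0)
first_view = rng.integers(0, 256, size=(128, 64, 3), dtype=np.uint8)
gallery = {"visitor_1": appearance_descriptor(first_view)}
noise = rng.integers(-10, 10, first_view.shape)
second_view = np.clip(first_view.astype(int) + noise, 0, 255).astype(np.uint8)
print(reidentify(appearance_descriptor(second_view), gallery))  # expected: visitor_1
```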

Mnemosyne is the first installation in a museum context of a computer vision system that provides visitors with personalized information based on their individual interests. It is innovative because the visitor is not required to wear or carry any special device, or to take any action in front of the artworks of interest. The system will be installed, on a trial basis until June 2015, in the Hall of Donatello of the National Museum of the Bargello, in collaboration with the management of the Museum.

The project required the work of six researchers (Svebor Karaman, Lea Landucci, Andrea Ferracani, Daniele Pezzatini, Federico Bartoli and Andrew D. Bagdanov) over four years. The installation is the first realization of the regional Competence Centre NEMECH New Media for Cultural Heritage, set up by the Region of Tuscany and the University of Florence with the support of the City of Florence.

RIMSI: Integrated Research of Simulation Models

The RIMSI project, funded by Regione Toscana, comprises the study, experimentation and development of a protocol for the validation of procedures, and the implementation of a prototype multimedia software system to improve protocols and training in emergency medicine through the use of interactive simulation techniques.

RIMSI medical simulation

RIMSI – patient resuscitation scene

Medical simulation software currently on the market can only play very simple scenarios (a single patient) with an equally limited number of actors involved (usually just one doctor and one nurse). In addition, the available “high-fidelity” simulation scenarios are almost exclusively limited to cardio-pulmonary resuscitation and emergency anesthesia. Finally, the user can impersonate only a single role (doctor or nurse), while the actions of the other operators are controlled by the computer.

To overcome these important limitations of the programs currently available on the market, we propose the creation of a software system capable of reproducing realistic scenarios (the inside of an emergency room, the scene of a car accident, etc.) in both a single-user mode (the user controls a single operator while the computer controls the other characters) and a multi-user mode (each user controls one of the actors in the scenario).

Our proposal is to develop a multi-user application that allows users to interact both via mouse and keyboard and through body gestures. For this purpose we are currently developing a 3D training scenario in which learners are able to interact through a Microsoft Kinect.
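
As an illustrative sketch of this style of gesture-based interaction, the snippet below classifies a simple “hands up” pose from skeleton joints such as those returned by a Kinect-style tracker. The joint names, coordinate convention and thresholds are assumptions made for the example, not the actual RIMSI gesture set.

```python
from typing import Dict, Tuple

# A skeleton frame: joint name -> (x, y, z) in meters, with y pointing up,
# as typically provided by Kinect-style skeletal tracking.
Skeleton = Dict[str, Tuple[float, float, float]]

def detect_gesture(skeleton: Skeleton) -> str:
    """Very small rule-based classifier for a few training gestures."""
    head_y = skeleton["head"][1]
    left_hand_y = skeleton["hand_left"][1]
    right_hand_y = skeleton["hand_right"][1]

    if left_hand_y > head_y and right_hand_y > head_y:
        return "both_hands_raised"   # e.g. call for help / attention
    if right_hand_y > head_y:
        return "right_hand_raised"   # e.g. request to take over a role
    return "idle"

# Example frame: right hand raised above the head.
frame: Skeleton = {
    "head": (0.0, 1.70, 2.0),
    "hand_left": (-0.3, 1.10, 2.0),
    "hand_right": (0.3, 1.85, 2.0),
}
print(detect_gesture(frame))  # right_hand_raised
```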

This work in progress will be presented at the Workshop on User Experience in e-Learning and Augmented Technologies in Education (UXeLATE) at ACM Multimedia, which will be held in Nara, Japan.

PointAt system at Palazzo Medici Riccardi

Palazzo Medici Riccardi is one of the most important museums in Florence: in its small chapel, it hosts the famous fresco “La cavalcata dei magi” (“The Journey of the Magi”) by Benozzo Gozzoli (1421–1497).

The PointAt system’s goal is to stimulate visitors to interact with a digital version of the fresco and, at the same time, to make them interact in the same way they would in the chapel, reinforcing their real experience of the fresco. The aim is to use information technology to make museum teaching attractive and effective.

PointAt at Palazzo Medici Riccardi

Visitors are invited to stand in front of the screens and indicate with their hand the part of the painting that interests them. Two digital cameras analyse the visitors’ pointing action and a computer vision algorithm calculates the screen location where they’re pointing. The system then provides audio information about the subject.
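
For intuition, here is a minimal sketch of how a pointing direction recovered from two calibrated cameras can be turned into a screen coordinate: the ray from the shoulder through the hand (both triangulated in 3D) is intersected with the screen plane. The coordinate frame and the use of a shoulder-to-hand ray are simplifying assumptions, not a description of the deployed algorithm.

```python
import numpy as np

def pointed_screen_location(shoulder, hand, screen_origin, screen_normal):
    """Intersect the shoulder->hand ray with the screen plane.

    All points are 3D coordinates (meters) in a common world frame, e.g. as
    obtained by triangulating the two camera views. Returns the 3D
    intersection point, or None if the visitor points away from the screen.
    """
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    direction = hand - shoulder
    denom = np.dot(screen_normal, direction)
    if abs(denom) < 1e-6:
        return None  # pointing parallel to the screen
    t = np.dot(screen_normal, np.asarray(screen_origin, float) - shoulder) / denom
    if t <= 0:
        return None  # screen is behind the pointing direction
    return shoulder + t * direction

# Example: the screen is the plane z = 0, the visitor stands 2 m in front of it.
hit = pointed_screen_location(
    shoulder=(0.0, 1.4, 2.0),
    hand=(0.2, 1.3, 1.5),
    screen_origin=(0.0, 0.0, 0.0),
    screen_normal=np.array([0.0, 0.0, 1.0]),
)
print(hit)  # approximately [0.8, 1.0, 0.0]
```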

In designing the system, we considered the following issues:

  • Easy and simple interaction. Visitors don’t need any instructions, nor do they have to wear any special device.
  • High-resolution display. The fresco is displayed on large screens so that visitors can appreciate even small details (almost invisible in the real chapel).
  • Interactivity for different categories of visitors. Interaction should be satisfactory for visitors who just want a general idea of the fresco, for those who are attracted by particular characters, and for those who want complete information on the whole fresco.
  • Non-intrusive setting. The physical setting must host both active and passive visitors (for example, the relatives of the person who is actually interacting with the system, and those interested in listening but not in being active).
  • Pleasant look & feel. The interactive environment is integrated within the museum and respects the visitors’ whole experience.

PointAt is considered a pioneering experiment in the field of museum didactics, and has been functioning successfully since 2004.

TANGerINE Cities

TANGerINE cities is a research project that investigates collaborative tangible applications. It was developed within the TANGerINE research project, an ongoing line of research on tangible user interfaces (TUIs) that combines previous experience with natural vision-based gestural interaction on augmented surfaces and tabletops with the introduction of smart wireless objects and sensor fusion techniques.

TANGerINE Cities

Unlike the passive recognized objects common in mixed and augmented reality approaches, smart objects provide continuous data about their status through embedded wireless sensors, while an external computer vision module tracks their position and orientation in space. By merging the sensed data, the system is able to detect a richer vocabulary of gestures and manipulations both on the tabletop and in its surroundings, enabling a more expressive interaction language across different scenarios.

Users are able to interact with the system and the objects in different contexts: the active presentation area (like the surface of the table) and the nearby area (around the table).
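
The sketch below illustrates this kind of sensor fusion in a simplified form: accelerometer readings from the smart cube are combined with its vision-tracked position to distinguish, for example, a shake performed on the table from one performed in the nearby area. The sensor format, thresholds and gesture labels are assumptions made for the example.

```python
import numpy as np

TABLE_HEIGHT = 0.75    # meters; height of the tabletop in the tracker's frame
SHAKE_THRESHOLD = 3.0  # m/s^2 of acceleration variation treated as a "shake"

def classify_manipulation(accel_samples, tracked_position):
    """Fuse embedded-sensor and vision data into a simple gesture label.

    `accel_samples` is an (N, 3) array of recent accelerometer readings from
    the cube; `tracked_position` is its (x, y, z) position from the vision module.
    """
    accel_samples = np.asarray(accel_samples, float)
    shaking = accel_samples.std(axis=0).max() > SHAKE_THRESHOLD
    on_table = abs(tracked_position[2] - TABLE_HEIGHT) < 0.05

    if shaking and on_table:
        return "shake_on_table"    # e.g. shuffle the selected sounds
    if shaking:
        return "shake_in_air"      # e.g. discard the current selection
    if on_table:
        return "resting_on_table"  # cube face selects a sound category
    return "carried_nearby"

# Example: a strongly varying signal while the cube sits on the tabletop.
samples = np.array([[0, 0, 9.8], [5, -4, 12.0], [-6, 3, 6.0], [4, -5, 14.0]])
print(classify_manipulation(samples, tracked_position=(0.2, 0.1, 0.76)))  # shake_on_table
```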

Presented at Frontiers of Interaction V (Rome, June 2009).

TANGerINE cities concept

TANGerINE cities lets users choose and manipulate sounds that characterize today’s cities. The TANGerINE cube collects sound fragments of the present and reassembles them to create harmonic sounds for the future. TANGerINE cities is a means of collective sound creation: a glimpse into the sound world of future cities. It imagines a future in which technological development has helped reduce metropolitan noise pollution by transforming all noises into a harmonic soundscape. The collaborative nature of the TANGerINE table lets users compare their ideas face to face as they forecast what the noises of future cities will sound like. TANGerINE cities can also use noises uploaded to the web by users who have recorded the sounds of their own environments. The TANGerINE platform therefore provides a real, tangible location within virtual social networks.

TANGerINE cities concept
