Mnemosyne is a research project carried out by the Media Integration and Communication Center (MICC), University of Florence, together with Thales Italy SpA, and funded by the Tuscany Region. The goal of the project is the study and experimentation of smart environments that adopt natural interaction paradigms for the promotion of artistic and cultural heritage through the analysis of visitors' behaviors and activities.
The idea behind this project is to use techniques derived from video surveillance to design an automatic profiling system capable of understanding the personal interests of each visitor. The computer vision system monitors and analyzes the movements and behaviors of visitors in the museum (through fixed cameras) in order to extract a profile of interests for each visitor.
This profile of interests is then used to personalize the delivery of in-depth multimedia content, enabling an augmented museum experience. Visitors interact with the multimedia content through a large interactive table installed inside the museum. The project also includes the integration of mobile devices (such as smartphones or tablets), offering a take-away summary of the visitor's experience and suggesting possible theme-related paths through the museum's collection or other places in the city.
The system operates with full respect for the privacy of the visitor: the cameras and the vision system capture only information about the visitor's appearance, such as the color and texture of their clothes. The appearance of the visitor is encoded into a feature vector that captures its most distinctive elements. The feature vectors are then compared with each other to re-identify each visitor.
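The encode-and-compare step can be sketched as a small toy in Python. This is purely illustrative: it assumes per-channel color histograms as the appearance descriptor and a Euclidean distance threshold, whereas the actual features and matching metric used by the system are not described here.

```python
import math

def appearance_features(pixels, bins=8):
    """Encode a list of RGB pixels (values 0-255) as a normalized
    per-channel color histogram -- a simple stand-in for the
    appearance descriptor (color and texture of the clothes)."""
    hist = [0.0] * (3 * bins)
    for r, g, b in pixels:
        for channel, value in enumerate((r, g, b)):
            hist[channel * bins + min(value * bins // 256, bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def reidentify(query, gallery, threshold=0.5):
    """Compare a query feature vector against stored visitors and
    return the closest visitor id, or None if no stored vector is
    similar enough (i.e. a new visitor)."""
    best_id, best_dist = None, threshold
    for visitor_id, features in gallery.items():
        dist = math.dist(query, features)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = visitor_id, dist
    return best_id

# Toy gallery: two visitors wearing differently colored clothes.
gallery = {
    "visitor_1": appearance_features([(200, 30, 30)] * 100),  # red clothes
    "visitor_2": appearance_features([(30, 30, 200)] * 100),  # blue clothes
}
# The same red-clothed visitor seen again by another camera:
print(reidentify(appearance_features([(200, 30, 30)] * 100), gallery))
# -> visitor_1
```

In practice the descriptor would be richer and the comparison more robust, but the structure — one feature vector per sighting, nearest-match search over stored vectors — is the same.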
Mnemosyne is the first installation in a museum context of a computer vision system that provides visitors with personalized information based on their individual interests. It is innovative because the visitor is not required to wear or carry special devices, or to take any action in front of the artworks of interest. The system will be installed, on a trial basis until June 2015, in the Hall of Donatello at the National Museum of the Bargello, in collaboration with the museum's management.
A technology transfer project for an international exhibition about the story of Onna, an Italian town near L'Aquila that was struck by the 2009 earthquake.
A natural interaction based system was designed and developed to present a large collection of multimedia content (videos, images and audio) gathered and created by the curators of the exhibition.
Interactive system scenario in the Infobox at Onna
The project involves the study and development of an interactive system that adopts the natural interaction paradigm to allow users to access and consult multimedia content related to different areas of Onna, an Italian town near L'Aquila that was struck by the 2009 earthquake.
The concept proposed for the user interface is inspired by an educational game book published in the seventies about the devastation of the town of Pompeii after the eruption of Vesuvius in 79 A.D. The pages of the book are composed of images of the destroyed Pompeii that can be overlaid with images of the town before the eruption.
Our idea is to recreate a similar mode of interaction, using a background picture of Onna after the earthquake so that the user can interact with selected areas of the image and see them as they were before the earthquake. In addition, for each area it is possible to view multimedia content about the history, architecture and life of the town before the earthquake.
The user interface will be optimized for the exhibition environment so that multiple users can interact independently with the system.
TANGerINE Grape is a collaborative knowledge sharing system that can be used through natural and tangible interfaces. Its final goal is to enable users to enrich their knowledge by obtaining information both from digital libraries and from the knowledge shared by other users involved in the same interaction session.
TANGerINE Grape is a collaborative tangible multi-user interface that allows users to perform semantic-based content retrieval. Multimedia content is organized through knowledge-base management structures (i.e. ontologies), and the interface supports multi-user interaction through different input devices in both co-located and remote environments.
TANGerINE Grape enables users to enrich their knowledge by obtaining information both from an automatic informative system and from the knowledge shared by the other users involved: compared to a web-based interface, our system enables collaborative face-to-face interaction alongside standard remote collaboration. Users can interact with the system through different kinds of input devices, in both co-located and remote situations. In this way users enrich their knowledge through comparison with the other users involved in the same interaction session: they can share choices, results and comments. Face-to-face collaboration also has a 'social' value: co-located people involved in similar tasks improve their reciprocal personal and professional knowledge in terms of skills, culture, nature, interests and so on.
As a use case we initially exploited the VIDI-Video project and then, to provide faster response times and more advanced search possibilities, the IM3I project, enhancing access to video content through its semantic search engine.
This project has been an important case study for applying natural and tangible interaction research to the access of video content organized in semantic-based structures.
This research project exploits new technologies (a multi-touch table and the iPhone) to develop a multi-user, multi-role and multi-modal system for multimedia content search, annotation and organization. As a use case we considered the field of broadcast journalism, where editors and archivists work together to create a film report from archive footage.
Multi-user environment for semantic search of multimedia contents
The idea behind this work-in-progress project is to create a multi-touch system that allows one or more users to search multimedia content, especially video, exploiting an ontology-based structure for knowledge management. The system supports collaborative multi-role, multi-user and multi-modal interaction between two users performing different tasks within the application.
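The advantage of an ontology-based structure over flat tags is that a query can follow the concept hierarchy. A minimal sketch in Python, where the concept hierarchy, video ids and annotations are invented for illustration and do not reflect the project's actual knowledge base:

```python
# Hypothetical concept hierarchy: each concept maps to its subconcepts.
SUBCONCEPTS = {
    "vehicle": ["car", "airplane"],
    "car": [],
    "airplane": [],
    "interview": [],
}

# Hypothetical archive: each video id maps to its annotated concepts.
VIDEOS = {
    "clip_001": {"car"},
    "clip_002": {"airplane"},
    "clip_003": {"interview"},
}

def expand(concept):
    """Return the concept together with all its subconcepts, transitively."""
    found = {concept}
    for child in SUBCONCEPTS.get(concept, []):
        found |= expand(child)
    return found

def semantic_search(keyword):
    """Retrieve every video annotated with the keyword or any subconcept --
    the gain of ontology-based retrieval over plain keyword matching."""
    concepts = expand(keyword)
    return sorted(v for v, tags in VIDEOS.items() if tags & concepts)

print(semantic_search("vehicle"))  # -> ['clip_001', 'clip_002']
```

A plain tag search for "vehicle" would return nothing here, since no clip is annotated with that literal term; the ontology makes the two vehicle clips retrievable.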
The first user plays the role of an archivist: by entering a keyword on the iPhone, he is able to search and select data through an ontology-structured interface designed ad hoc for the multi-touch table. At this stage the user can organize the results into folders and subfolders: the iPhone is therefore used as a device for text input and folder storage.
The other user plays the role of an editor: he receives the results of the search carried out by the archivist through the system or the iPhone. This user examines the retrieved videos and selects those most suitable for the final result, rating how appropriate each video is for his purposes (an assessment for the current work session) and giving his opinion on the general quality of the video (a subjective assessment that can also influence future searches). In addition, the user also plays the role of an annotator: he can add further tags to a video if he considers them necessary to retrieve that content in future searches.
TANGerINE cities is a project that investigates collaborative tangible applications, developed within the TANGerINE research project. It is part of ongoing research on TUIs (tangible user interfaces) that combines previous experience with natural vision-based gestural interaction on augmented surfaces and tabletops with the introduction of smart wireless objects and sensor fusion techniques.
Unlike the passive recognized objects common in mixed and augmented reality approaches, smart objects provide continuous data about their status through embedded wireless sensors, while an external computer vision module tracks their position and orientation in space. By merging the sensing data, the system is able to detect a richer language of gestures and manipulations both on the tabletop and in its surroundings, enabling a more expressive interaction language across different scenarios.
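The fusion idea can be sketched as follows: combining what the vision module knows (is the object on the active surface?) with what the embedded sensors report (is it in motion?) distinguishes manipulations that neither channel could name alone. The state fields and event names below are illustrative, not the project's actual vocabulary.

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    """Fused state of a smart object at one instant (illustrative)."""
    on_table: bool  # from the vision module: object on the active surface?
    moving: bool    # from the embedded accelerometer: object in motion?

def detect_gesture(state: ObjectState) -> str:
    """Name the current manipulation by combining both sensing channels.
    Vision alone cannot tell 'held' from 'shake' once the object leaves
    the table; the accelerometer alone cannot tell 'slide' from 'shake'."""
    if state.on_table:
        return "slide" if state.moving else "placed"
    return "shake" if state.moving else "held"

print(detect_gesture(ObjectState(on_table=False, moving=True)))  # -> shake
```

A real implementation would also weigh sensor noise and timing, but the principle is the same: each channel disambiguates the other.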
Users are able to interact with the system and the objects in different contexts: the active presentation area (the surface of the table) and the nearby area (around the table).
Presented at Frontiers of Interaction V (Rome, June 2009).
TANGerINE cities concept
TANGerINE cities lets users choose and elaborate sounds that characterize today's cities. The TANGerINE cube collects sound fragments of the present and reassembles them to create harmonic sounds for the future. TANGerINE cities is a means of collective sound creation: a glimpse into the sound world of future cities. It imagines a future where technological development has helped reduce metropolitan acoustic pollution by transforming all noise into a harmonic soundscape. The collaborative nature of the TANGerINE table lets users compare their ideas face-to-face as they forecast what the noises of future cities will sound like. TANGerINE cities can use noises uploaded to the web by users who have recorded the sounds of their own sound worlds. The TANGerINE platform therefore provides a real, tangible location within virtual social networks.