Tag Archives: tableTop

Mnemosyne: smart environments for cultural heritage

Mnemosyne is a research project carried out by the Media Integration and Communication Center – MICC, University of Florence, together with Thales Italy SpA and funded by the Tuscany region. The goal of the project is the study and experimentation of smart environments that adopt natural interaction paradigms for the promotion of artistic and cultural heritage, through the analysis of visitors' behaviors and activities.

Mnemosyne Interactive Table at the Museum of Bargello

The idea behind this project is to use techniques derived from video surveillance to design an automatic profiling system capable of understanding the personal interests of each visitor. The computer vision system monitors and analyzes the movements and behaviors of visitors in the museum (through fixed cameras) in order to extract a profile of interests for each visitor.

This interest profile is then used to personalize the delivery of in-depth multimedia content, enabling an augmented museum experience. Visitors interact with the multimedia content through a large interactive table installed inside the museum. The project also includes the integration of mobile devices (such as smartphones or tablets) offering a take-away summary of the visitor experience and suggesting possible theme-related paths in the collection of the museum or in other places of the city.

The system operates in full respect of visitors' privacy: the cameras and the vision system capture only information about a visitor's appearance, such as the color and texture of their clothes. The appearance of each visitor is encoded into a feature vector that captures its most distinctive elements. These feature vectors are then compared with each other to re-identify each visitor.
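The appearance-based re-identification described above can be sketched as follows. The quantized color histogram and the distance threshold are illustrative assumptions for this sketch, not the actual descriptors used by the system.

```python
from collections import Counter
import math

def appearance_feature(pixels, bins=8):
    """Encode a visitor's appearance as a normalized color histogram.

    `pixels` is a list of (r, g, b) tuples from the detected person
    region; the histogram is a stand-in for the real descriptor.
    """
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {cell: n / total for cell, n in counts.items()}

def distance(f1, f2):
    """Euclidean distance between two sparse histogram descriptors."""
    cells = set(f1) | set(f2)
    return math.sqrt(sum((f1.get(c, 0) - f2.get(c, 0)) ** 2 for c in cells))

def reidentify(query, gallery, threshold=0.5):
    """Match a descriptor against stored visitors.

    Returns the index of the closest gallery descriptor, or None if no
    stored visitor is similar enough (i.e. a new visitor).
    """
    if not gallery:
        return None
    dists = [distance(query, g) for g in gallery]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best if dists[best] < threshold else None
```

In the real system the descriptors would come from the camera network rather than raw pixel lists, but the compare-against-a-gallery structure is the same.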

Mnemosyne is the first installation in a museum context of a computer vision system to provide visitors with personalized information on their individual interests. It is innovative because the visitor is not required to wear or carry special devices, or to take any action in front of the artworks of interest. The system will be installed, on a trial basis until June 2015, in the National Museum of the Bargello in the Hall of Donatello, in collaboration with the management of the Museum itself.

The project required the work of six researchers (Svebor Karaman, Lea Landucci, Andrea Ferracani, Daniele Pezzatini, Federico Bartoli and Andrew D. Bagdanov) over four years. The installation is the first realization of the Regional Competence Centre NEMECH New Media for Cultural Heritage, established by the Region of Tuscany and the University of Florence with the support of the City of Florence.


TANGerINE Grape is a collaborative knowledge-sharing system that can be used through natural and tangible interfaces. Its final goal is to enable users to enrich their knowledge by obtaining information both from digital libraries and from the knowledge shared by other users involved in the same interaction session.



TANGerINE Grape is a collaborative tangible multi-user interface that allows users to perform semantic-based content retrieval. Multimedia contents are organized through knowledge-base management structures (i.e. ontologies), and the interface allows multi-user interaction with them through different input devices, both in co-located and remote environments.
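The ontology-driven retrieval mentioned here can be illustrated with a minimal sketch: querying a concept also returns content annotated with its subconcepts. The concept hierarchy and file names below are invented for illustration and are not part of the project's actual knowledge base.

```python
# Hypothetical subclass relations: child concept -> parent concept.
subclass_of = {
    "dog": "animal",
    "cat": "animal",
    "animal": "subject",
    "car": "vehicle",
    "vehicle": "subject",
}

# Hypothetical content items annotated with ontology concepts.
annotations = {
    "clip_01.mp4": {"dog"},
    "clip_02.mp4": {"car"},
    "clip_03.mp4": {"cat", "vehicle"},
}

def expand(concept):
    """All concepts transitively subsumed by `concept`, including itself."""
    result = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in subclass_of.items():
            if parent in result and child not in result:
                result.add(child)
                changed = True
    return result

def semantic_search(concept):
    """Return content items annotated with the concept or any subconcept."""
    matches = expand(concept)
    return sorted(item for item, tags in annotations.items() if tags & matches)
```

A query for "animal" thus also retrieves clips annotated only with "dog" or "cat", which is the essential advantage of semantic retrieval over flat keyword matching.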

TANGerINE Grape enables users to enrich their knowledge by obtaining information both from an automatic informative system and from the knowledge shared by the other users involved: compared to a web-based interface, our system enables collaborative face-to-face interaction alongside standard remote collaboration. Users can interact with the system through different kinds of input devices, in both co-located and remote situations. In this way users enrich their knowledge also through comparison with the other users involved in the same interaction session: they can share choices, results and comments. Face-to-face collaboration also has a ‘social’ value: co-located people involved in similar tasks improve their reciprocal personal and professional knowledge in terms of skills, culture, background, interests and so on.

As a use case we initially exploited the VIDI-Video project and then, to provide faster response times and more advanced search possibilities, the IM3I project, enhancing access to video content through its semantic search engine.

This project has been an important case study for the application of natural and tangible interaction research to accessing video content organized in semantic-based structures.


This project aims to realize a lightweight, flexible and extensible Cocoa framework for creating multitouch and, more generally, tangible apps. It implements basic gesture recognition and lets each user easily define and set up their own gestures. Given its nature, we hope this framework will work well with Quartz and Core Animation to realize fun and useful apps. It also offers many off-the-shelf widgets, ready for quickly building your own NUI app.

CocoNUIT: Cocoa Natural User Interface & Tangible


The growing interest in multitouch technologies, and especially in tangible user interfaces, has been pushed forward by the development of system libraries designed to make it easier to implement graphical NHCI interfaces. More and more commercial frameworks are becoming available, and the open source community is also increasingly interested in this field. Many of these projects present similarities, each with its own limits and strengths: SparshUI, pyMT and the Cocoa Multi-touch Framework are only some examples.

When it comes to evaluating an NHCI framework, several attributes have to be taken into account. One of the major requirements is input device independence; a close second is flexibility towards the underlying technology that interprets the different kinds of interaction, making the framework independent of variations in the computer vision engine. The results of this processing must then be displayed through a user interface offering high graphical throughput, in order to meet the requirements described for an NHCI environment.

None of the available open source frameworks fully met the requirements defined for the project, which led to the development of a complete framework from scratch: CocoNUIT, the Cocoa Natural User Interface & Tangible framework. It is designed to be lightweight, flexible and extensible; based on Cocoa, it supports the development of multitouch and tangible applications. It implements gesture recognition and lets developers define and set up their own sets of new gestures. The framework was built on top of Cocoa in order to take advantage of Mac OS X's accelerated graphics libraries for drawing and animation, such as Quartz 2D and Core Animation.

The CocoNUIT framework is divided into three basic modules:

  • event management
  • multitouch interface
  • gesture recognition

From a high-level point of view, the computer vision engine sends all interaction events performed by users to the framework. These events, or messages, are then dispatched to each graphical object, or layer, present on the interface. Each layer can determine whether a touch is related to itself simply by evaluating whether the touch position coordinates fall within the layer's area: in that case the layer activates the recognition procedures and, if a gesture gives a positive match, the view is updated accordingly. This design clearly favors software modularity: it is easy to replace or add input devices, or to extend the gesture recognition engine simply by adding new ad hoc gesture classes.
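The dispatch-and-hit-test loop described above can be sketched roughly as follows; the class and method names are illustrative and do not reflect the actual CocoNUIT API.

```python
class Layer:
    """A rectangular graphical object that can claim touches inside its area."""

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.touches = []

    def hit_test(self, tx, ty):
        """A touch belongs to this layer if it falls inside the layer's area."""
        return (self.x <= tx < self.x + self.width and
                self.y <= ty < self.y + self.height)

    def handle_touch(self, tx, ty):
        """Record the touch and run the (simplified) gesture recognizers."""
        self.touches.append((tx, ty))
        return self.recognize()

    def recognize(self):
        # Placeholder recognizer: two or more active touches count as a
        # "scale" gesture; a real engine would consult pluggable gesture
        # classes here.
        return "scale" if len(self.touches) >= 2 else None

def dispatch(layers, tx, ty):
    """Offer an incoming touch event from the vision engine to each layer."""
    for layer in layers:
        if layer.hit_test(tx, ty):
            return layer.handle_touch(tx, ty)
    return None  # touch landed on no layer
```

Extending the system then amounts to adding new `Layer` subclasses or new recognizers, which is the modularity the paragraph above describes.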

TANGerINE Cities

TANGerINE Cities is a research project that investigates collaborative tangible applications. It was developed within the TANGerINE research project, an ongoing line of research on TUIs (tangible user interfaces) that combines previous experience with natural vision-based gestural interaction on augmented surfaces and tabletops with the introduction of smart wireless objects and sensor fusion techniques.

TANGerINE Cities


Unlike the passive recognized objects common in mixed and augmented reality approaches, smart objects provide continuous data about their status through embedded wireless sensors, while an external computer vision module tracks their position and orientation in space. By merging the sensing data, the system can detect a richer language of gestures and manipulations, both on the tabletop and in its surroundings, enabling a more expressive interaction language across different scenarios.
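The sensor-fusion idea can be sketched as follows: the vision module supplies the object's position over the table, while the embedded accelerometer reports its motion state, and merging the two distinguishes manipulations that neither source could classify alone. The event names, threshold and table bounds are illustrative assumptions, not values from the actual system.

```python
def fuse(vision, accel, table_bounds=(0, 0, 100, 100)):
    """Classify a manipulation from vision tracking plus embedded sensing.

    vision: dict with 'x', 'y' position of the tracked smart object
    accel:  dict with 'magnitude' of acceleration reported by the object
    """
    x0, y0, x1, y1 = table_bounds
    # Vision alone tells us where the object is...
    on_table = x0 <= vision["x"] <= x1 and y0 <= vision["y"] <= y1
    # ...while the embedded sensor alone tells us how it is moving.
    shaking = accel["magnitude"] > 2.0  # threshold (in g) is illustrative

    # Only the combination yields the richer gesture vocabulary.
    if on_table and shaking:
        return "shake-on-table"
    if on_table:
        return "placed"
    if shaking:
        return "shake-in-air"
    return "idle"
```

A vision-only tracker could not tell "placed" from "shake-on-table", and the accelerometer alone could not tell "shake-on-table" from "shake-in-air"; the fusion is what widens the gesture language.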

Users are able to interact with the system and the objects in different contexts: the active presentation area (like the surface of the table) and the nearby area (around the table).

Presented at Frontiers of Interaction V (Rome, June 2009).

TANGerINE cities concept

TANGerINE Cities lets users choose and elaborate sounds characterizing today's cities. The TANGerINE cube collects sound fragments of the present and reassembles them to create harmonic sounds for the future. TANGerINE Cities is a means of collective sound creation: a glimpse into the sound world of future cities. It imagines a future in which technological development has helped reduce metropolitan acoustic pollution by transforming all noises into a harmonic soundscape. The collaborative nature of the TANGerINE table lets users compare their ideas face-to-face as they forecast what the noises of future cities will sound like. TANGerINE Cities can use noises uploaded to the web by users who have recorded the sounds of their own environments. The TANGerINE platform therefore provides a real, tangible location within virtual social networks.



A technology transfer project realized for the international exhibition From Petra to Shawbak: archeology of a frontier. A multi-touch tabletop was realized for this exhibition, which presents the results of the latest international archaeological investigations and of the research conducted over the past twenty years by the archaeological mission of the University of Florence in Jordan, at the sites of Petra and Shawbak, one of the most important historical areas in the world.

Natural interface realized for the international exhibition "From Petra to Shawbak"


Since 2006, the Shawbak site has been the object of an innovative international Italian-Jordanian agreement of scientific and cultural cooperation between the Department of Antiquities of Jordan and the University of Florence, combining archaeological research, conservative restoration and valorisation.

Planning the exhibition offered the opportunity to experiment with and re-elaborate the latest practices of exhibition communication, defined in Anglo-Saxon countries and, to date, unseen in Italian archaeology exhibitions; the museological design, the approach to exhibition communication, and the strategy conceived for visitor learning are all totally innovative.

The exhibition itinerary has been conceived in three sections: 1) the discovery of an authentic capital that reinterprets the Crusader presence of the Seigniory of Transjordan, and begins a succession that crosses the dynasty of Saladin and reaches us; 2) the documentation of the diverse role performed by the frontier as a historical key of interpretation: from the ancient age (Nabataean, Roman, Byzantine), Arab-Islamic (Umayyad, Abbasid, Fatimid) up to the Crusader-Ayyubid and Mameluke ages, explored through the archaeological observatory of the region and of the sites of Petra and Shawbak; 3) the collection and “publication” of visitors’ comments.

The interface design was built on an initial definition of the information architecture, based on the contents that the archaeological research unit intended to deliver during the exhibition.

It immediately appeared evident that all the available contents were related to two different dimensions: the time period and the level of detail.

The time span over which the fortress was studied is roughly divided into five parts:

  • 2nd crusade, “The coming of the Crusaders”;
  • 3rd crusade, “Rise and fall of the Crusaders”;
  • Ayyubid, “The Ayyubid conquest”;
  • Mamluk, “The rise of Mamluks”;
  • Ottoman, “The Ottoman expansion”.

The levels of resolution, or zoom detail, through which the territory can be explored are likewise five: the “Transjordan” region, the “Shawbak” castle, “The fortified gate”, the “Masonries” elevations, and “Stones”.

Contents consist of videos, pictures and texts that show and explain the archaeological site for each of the described time spans and zoom levels.
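The two-dimensional organization described above can be modeled as a simple lookup keyed by (time period, zoom level). The sample media entry below is invented for illustration and is not part of the exhibition's actual catalogue.

```python
# The five time periods and five zoom levels named in the text.
PERIODS = ["2nd crusade", "3rd crusade", "Ayyubid", "Mamluk", "Ottoman"]
LEVELS = ["Transjordan", "Shawbak", "The fortified gate", "Masonries", "Stones"]

# Each (period, level) cell holds a bundle of videos, pictures and texts.
# The entry below is a hypothetical example, not real exhibition media.
contents = {
    ("Ayyubid", "Shawbak"): {
        "videos": ["ayyubid_castle_tour.mp4"],
        "pictures": ["shawbak_aerial.jpg"],
        "texts": ["The Ayyubid conquest reshaped the castle."],
    },
}

EMPTY = {"videos": [], "pictures": [], "texts": []}

def lookup(period, level):
    """Return the media bundle for a (period, level) cell, empty if unfilled."""
    if period not in PERIODS or level not in LEVELS:
        raise ValueError("unknown period or zoom level")
    return contents.get((period, level), EMPTY)
```

Keying the catalogue on both dimensions lets the tabletop interface move along either axis independently: change the period while staying at the same zoom level, or zoom in and out within one period.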