Category Archives: Projects

Media Integration and Communication Centre projects

TANGerINE Cities

TANGerINE Cities is a research project that investigates collaborative tangible applications, developed within the TANGerINE research project. TANGerINE is ongoing research on tangible user interfaces (TUIs) that combines previous experience with natural, vision-based gestural interaction on augmented surfaces and tabletops with the introduction of smart wireless objects and sensor-fusion techniques.

TANGerINE Cities

Unlike the passive recognized objects common in mixed- and augmented-reality approaches, smart objects provide continuous data about their status through embedded wireless sensors, while an external computer-vision module tracks their position and orientation in space. By merging the sensing data, the system can detect a richer language of gestures and manipulations, both on the tabletop and in its surroundings, enabling a more expressive interaction language across different scenarios.
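The fusion step above can be sketched in a few lines. This is a minimal, hypothetical illustration — the class names, fields, and the complementary-filter weighting are assumptions for the sake of the example, not the project's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class VisionSample:
    """Pose estimate from the external computer-vision module (assumed interface)."""
    x: float          # tabletop position, metres
    y: float
    yaw_deg: float    # orientation estimate from visual tracking

@dataclass
class ImuSample:
    """Reading from the smart object's embedded wireless sensors (assumed interface)."""
    yaw_deg: float    # orientation from the onboard inertial sensors
    on_surface: bool  # e.g. inferred from accelerometer stillness

def fuse(vision: VisionSample, imu: ImuSample, alpha: float = 0.7) -> dict:
    """Complementary-filter style fusion: weight the embedded sensor more for
    orientation, trust the camera for position, and use the object's own
    sensing to tell the tabletop context from the nearby-area context."""
    yaw = alpha * imu.yaw_deg + (1 - alpha) * vision.yaw_deg
    return {
        "x": vision.x,
        "y": vision.y,
        "yaw_deg": yaw % 360.0,
        "context": "tabletop" if imu.on_surface else "nearby",
    }

state = fuse(VisionSample(0.42, 0.18, 92.0), ImuSample(90.0, True))
```

The point of the sketch is the division of labour: the camera cannot see inside the object, and the object cannot know where it sits on the table, so each source covers the other's blind spot.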

Users can interact with the system and the objects in two different contexts: the active presentation area (the surface of the table) and the nearby area (around the table).

Presented at Frontiers of Interaction V (Rome, June 2009).

TANGerINE cities concept

TANGerINE Cities lets users select and manipulate the sounds that characterize today's cities. The TANGerINE cube collects sound fragments of the present and reassembles them into harmonic sounds for the future. TANGerINE Cities is a means of collective sound creation: a glimpse into the sound world of the cities of the future. It imagines a future in which technological development has helped reduce metropolitan acoustic pollution by transforming noise into a harmonic soundscape. The collaborative nature of the TANGerINE table lets users compare their ideas face-to-face as they forecast how the noises of future cities will sound. TANGerINE Cities can also use noises uploaded to the web by users who have recorded their own sound worlds; the TANGerINE platform thus provides a real, tangible location within virtual social networks.

TANGerINE cities concept

Shawbak

A technology-transfer project realized for the international exhibition From Petra to Shawbak: Archaeology of a Frontier. A multi-touch tabletop was built for the exhibition, which presents the results of the latest international archaeological investigations and of the research conducted over the past twenty years by the archaeological mission of the University of Florence in Jordan, at the sites of Petra and Shawbak, one of the most important historical areas in the world.

Natural interface realized for the international exhibition "From Petra to Shawbak"

Since 2006, the Shawbak site has been the object of an innovative international Italian-Jordanian agreement of scientific and cultural cooperation between the Department of Antiquities of Jordan and the University of Florence, combining archaeological research, conservative restoration and valorisation.

Planning the exhibition offered the opportunity to experiment with, and re-elaborate, the latest practices of exhibition communication — practices established in Anglo-Saxon countries but, to date, unprecedented in Italian archaeology exhibitions. The museological design, the approach to exhibition communication, and the strategy conceived for visitor learning are all entirely innovative.

The exhibition itinerary has been conceived in three sections: 1) the discovery of an authentic capital that reinterprets the Crusader presence of the Seigniory of Transjordan, and begins a succession that crosses the dynasty of Saladin and reaches us; 2) the documentation of the diverse role performed by the frontier as a historical key of interpretation: from the ancient age (Nabataean, Roman, Byzantine), Arab-Islamic (Umayyad, Abbasid, Fatimid) up to the Crusader-Ayyubid and Mameluke ages, explored through the archaeological observatory of the region and of the sites of Petra and Shawbak; 3) the collection and “publication” of visitors’ comments.

The interface design was built on an initial definition of the information architecture, based on the contents that the archaeological research unit intended to deliver during the exhibition.

It was immediately evident that all the available contents related to two different dimensions: the time period and the level of detail.

The time span over which the fortress was studied is roughly divided into five parts:

  • 2nd crusade, “The coming of the Crusaders”;
  • 3rd crusade, “Rise and fall of the Crusaders”;
  • Ayyubid, “The Ayyubid conquest”;
  • Mamluk, “The rise of Mamluks”;
  • Ottoman, “The Ottoman expansion”.

The levels of resolution, or zoom detail, at which the territory can be explored are likewise five: the “Transjordan” region, the “Shawbak” castle, “The fortified gate”, the “Masonries” elevations, and “Stones”.

The contents consist of videos, pictures and texts that show and explain the archaeological site for each of the time spans and zoom levels described.
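The resulting information architecture is essentially a 5 × 5 grid: every combination of time period and zoom level addresses one bundle of media. A minimal sketch of that structure (the identifiers and the empty media lists are placeholders, not the exhibition's actual data model):

```python
# Five historical periods x five zoom levels, as defined for the exhibition.
PERIODS = ("2nd Crusade", "3rd Crusade", "Ayyubid", "Mamluk", "Ottoman")
LEVELS = ("Transjordan", "Shawbak", "Fortified gate", "Masonries", "Stones")

# Each (period, level) cell holds the media shown on the tabletop for that
# combination; the lists here are empty placeholders.
content = {
    (period, level): {"videos": [], "pictures": [], "texts": []}
    for period in PERIODS
    for level in LEVELS
}

def media_for(period: str, level: str) -> dict:
    """Return the content bundle for one cell of the time/zoom grid."""
    return content[(period, level)]
```

Keying the contents by the pair (period, level) makes the two navigation axes of the interface — moving through time and zooming through scales — symmetrical and independent.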

Localization and Mapping with a PTZ-Camera

Localization and mapping with a robotic PTZ sensor aims to estimate the camera pose while keeping the map of a wide area up to date. While this has previously been attempted by adapting SLAM algorithms, no explicit estimation of a varying focal length had been introduced before, and other methods do not address the need to remain operational over long periods of time.

Localization and Mapping with a PTZ-Camera

In recent years, pan-tilt-zoom cameras have become increasingly common, especially as surveillance devices for large areas. Despite their widespread use, issues remain regarding their effective exploitation for scene understanding at a distance. A typical operating scenario is abnormal-behavior detection, which requires both the simultaneous analysis of targets' 3D trajectories and image resolution sufficient for biometric recognition of the targets.

This cannot generally be achieved with a single stationary camera, mainly because of its limited field of view and poor resolution with respect to scene depth. Managing the sensor to track, detect and recognize several targets at high resolution in 3D is therefore a crucial and challenging task. Much as the human visual system does, this can be achieved by slewing the video sensor from target to target and zooming in and out as necessary.
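The slew-and-zoom behaviour described above reduces to simple geometry once the camera pose is known: aim the optical axis at the target, then pick a focal length that gives the target the desired pixel footprint. A hedged sketch under a pinhole-camera assumption — the function names and the choice of axes are illustrative, not the project's actual control code:

```python
import math

def pan_tilt_to_target(target, camera=(0.0, 0.0, 0.0)):
    """Pan/tilt angles (degrees) that point the optical axis at a 3D point.
    Convention assumed here: z is the camera's forward axis, x is right,
    y is down; pan rotates about the vertical, tilt about the horizontal."""
    dx = target[0] - camera[0]
    dy = target[1] - camera[1]
    dz = target[2] - camera[2]
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

def zoom_for_resolution(distance_m, target_width_m, desired_px):
    """Focal length (in pixel units) so that a target of the given physical
    width spans desired_px pixels, using the pinhole relation
    width_px = f * width_m / distance_m."""
    return desired_px * distance_m / target_width_m
```

For example, a 0.5 m-wide target at 10 m needs a focal length of 4000 pixel units to cover 200 pixels — exactly the kind of zoom demand that makes explicit focal-length estimation necessary.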

This challenging problem has nevertheless been largely neglected, mostly because of the absence of reliable and robust approaches to PTZ-camera localization and mapping combined with 3D tracking of targets. To this end, we are interested in acquiring and maintaining an estimate of the camera zoom and orientation, relative to some geometric 3D representation of its surroundings, as the sensor performs pan, tilt and zoom operations over time.
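The quantity to be estimated is thus the triple (pan, tilt, focal length) that, under a pinhole model, explains where known 3D landmarks project in the image. The following sketch shows that forward model only — a hypothetical parametrization for illustration, not the estimator itself:

```python
import math

def world_to_camera(point, pan_deg, tilt_deg):
    """Rotate a world point into the camera frame: pan about the vertical
    axis, then tilt about the camera's horizontal axis (assumed convention)."""
    x, y, z = point
    p = math.radians(pan_deg)
    t = math.radians(tilt_deg)
    # pan: rotation about the y (vertical) axis
    x1 = math.cos(p) * x + math.sin(p) * z
    z1 = -math.sin(p) * x + math.cos(p) * z
    # tilt: rotation about the x (horizontal) axis
    y2 = math.cos(t) * y - math.sin(t) * z1
    z2 = math.sin(t) * y + math.cos(t) * z1
    return x1, y2, z2

def project(point, pan_deg, tilt_deg, f_px, cx=960.0, cy=540.0):
    """Pinhole projection in which the zoom-dependent focal length f_px is
    part of the camera state to be estimated, alongside pan and tilt."""
    xc, yc, zc = world_to_camera(point, pan_deg, tilt_deg)
    return f_px * xc / zc + cx, f_px * yc / zc + cy
```

Localization then amounts to finding the (pan, tilt, f_px) that best aligns the predicted projections of mapped landmarks with their observed image positions; making f_px an explicit state variable is precisely what distinguishes this setting from fixed-lens SLAM.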