Tag Archives: natural interfaces

RIMSI: Integrated Research of Simulation Models

The RIMSI project, funded by Regione Toscana, includes the study, experimentation and development of a protocol for the validation of procedures, and the implementation of a prototype multimedia software system to improve protocols and training in emergency medicine through the use of interactive simulation techniques.

RIMSI – patient resuscitation scene

Medical simulation software currently on the market can play only very simple scenarios (a single patient) with an equally limited number of actors involved (usually just one doctor and one nurse). In addition, the available “high-fidelity” simulation scenarios are almost exclusively limited to cardio-pulmonary resuscitation and emergency anesthesia. Finally, the user can impersonate only a single role (doctor or nurse), while the actions of the other operators are controlled by the computer.

To overcome these important limitations of the programs currently available on the market, we propose the creation of software capable of reproducing realistic scenarios (the inside of an emergency room, the scene of a car accident, etc.) in both single-user mode (the user controls a single operator while the computer controls the other characters) and multi-user mode (each user controls one of the actors in the scenario).

Our proposal is to develop a multi-user application that allows users to interact both via mouse and keyboard and with body gestures. For this purpose we are currently developing a 3D training scenario in which learners are able to interact through a Microsoft Kinect.
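As a rough illustration of the gesture-input side, the Python sketch below shows how skeleton data from a Kinect-like sensor could be mapped to a scenario action. Here `read_skeleton_frames` is a hypothetical stand-in for a real Kinect binding, and the joint names and the gesture rule are illustrative assumptions, not the project's actual code.

```python
# A minimal sketch of mapping Kinect-style skeleton data to scenario actions.
# `read_skeleton_frames` is a hypothetical stand-in for a real Kinect SDK
# stream; each frame maps joint names to (x, y, z) coordinates in meters.

from typing import Dict, Iterator, Tuple

Joint = Tuple[float, float, float]

def read_skeleton_frames() -> Iterator[Dict[str, Joint]]:
    """Hypothetical skeleton source; replace with a real Kinect binding."""
    yield {"head": (0.0, 1.7, 2.0), "right_hand": (0.1, 1.9, 2.0)}

def hand_raised(frame: Dict[str, Joint]) -> bool:
    # A gesture counts as "hand raised" when the hand is above the head.
    return frame["right_hand"][1] > frame["head"][1]

for frame in read_skeleton_frames():
    if hand_raised(frame):
        # In the training scenario this would trigger the learner's action,
        # e.g. calling for help or requesting equipment.
        print("gesture detected: hand raised")
```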

This work in progress will be presented at the Workshop on User Experience in e-Learning and Augmented Technologies in Education (UXeLATE) at ACM Multimedia, which will be held in Nara, Japan.

Onna: a natural interface system for virtual reconstruction

A technology transfer project for an international exhibition about the story of Onna, an Italian town near L’Aquila that was struck by the 2009 earthquake.

A natural interaction based system was designed and developed to present a large number of multimedia contents (videos, images and audio) collected and created by the curators of the exhibition.

Interactive system scenario in the Infobox at Onna

The project involves the study and development of an interactive system that adopts the natural interaction paradigm to allow users to access and consult multimedia contents related to different areas of the town of Onna, an Italian town near L’Aquila that was struck by the 2009 earthquake.

The concept proposed for the user interface is inspired by an educational game book published in the seventies about the devastation of the town of Pompeii after the eruption of Vesuvius in 79 A.D. The pages of the book are composed of images of the destroyed Pompeii, which can be overlapped with images of the town before the eruption.

Our idea is to recreate a similar mode of interaction, using a background picture of the town of Onna after the earthquake so that the user can interact with some areas of the image and see them as they were before the earthquake. In addition, for each area it is possible to view multimedia contents about the history, architecture and life of the town before the earthquake.
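A minimal sketch of this before/after overlay mechanism, using Python and Pillow; the file names and the area coordinates are illustrative assumptions:

```python
# A minimal sketch of the before/after interaction, using Pillow.
# File names and area coordinates are illustrative assumptions.

from PIL import Image

after = Image.open("onna_after.jpg")    # background: town after the quake
before = Image.open("onna_before.jpg")  # same view, before the quake

# Interactive areas as (left, upper, right, lower) boxes in image coordinates.
AREAS = {"church": (120, 80, 360, 240)}

def reveal(area_name: str) -> Image.Image:
    """Overlay the 'before' view of one area onto the 'after' background."""
    box = AREAS[area_name]
    composite = after.copy()
    composite.paste(before.crop(box), box)
    return composite

reveal("church").save("onna_church_restored.jpg")
```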

The user interface will be optimized for the exhibition environment in order to allow multiple users to interact independently with the system.

PointAt system at Palazzo Medici Riccardi

Palazzo Medici Riccardi is one of the most important museums in Florence: in its small chapel, it hosts the famous fresco “La cavalcata dei magi” (“The Journey of the Magi”) by Benozzo Gozzoli (1421–1497).

The PointAt system’s goal is to stimulate visitors to interact with a digital version of the fresco and, at the same time, to make them interact in the same way they will in the chapel, reinforcing their real experience of the fresco. That is, it uses information technology to make teaching attractive and effective.

PointAt at Palazzo Medici Riccardi

Visitors are invited to stand in front of the screens and indicate with their hand the part of the painting that interests them. Two digital cameras analyse the visitors’ pointing action and a computer vision algorithm calculates the screen location where they’re pointing. The system then provides audio information about the subject.
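As an illustration of the final geometric step, assuming the two cameras have already triangulated 3D positions for the visitor’s shoulder and hand, the pointed screen location can be found by intersecting the arm ray with the screen plane. A minimal numpy sketch, where modeling the screen as the plane z = 0 and the sample coordinates are assumptions:

```python
# A minimal sketch of the pointing-estimation step, assuming the stereo
# pair has already triangulated 3D positions for the shoulder and hand.
# The screen is modeled as the plane z = 0; numbers are illustrative.

import numpy as np

def pointed_location(shoulder: np.ndarray, hand: np.ndarray) -> np.ndarray:
    """Extend the shoulder->hand ray until it meets the screen plane z = 0."""
    direction = hand - shoulder
    if direction[2] == 0:
        raise ValueError("arm is parallel to the screen plane")
    t = -shoulder[2] / direction[2]        # parameter where the ray hits z = 0
    return (shoulder + t * direction)[:2]  # (x, y) point on the screen

shoulder = np.array([0.2, 1.5, 2.5])  # meters, in camera/world coordinates
hand = np.array([0.3, 1.4, 2.0])
print(pointed_location(shoulder, hand))  # screen point the visitor indicates
```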

In designing the system, we considered the following issues:

  • Easy and simple interaction. Visitors need no instructions and don’t have to wear any special device.
  • High-resolution display. The fresco is displayed on large screens so that visitors can appreciate even small details (almost invisible in the real chapel).
  • Interactivity for different categories of visitors. Interaction should be satisfactory for visitors who just want an idea about the fresco, for those who are attracted by particular characters and for those who want to have complete information on the whole fresco.
  • Non-intrusive setting. The physical setting must host both active and passive visitors (for example, the relatives of the person who’s actually interacting with the system, and those interested in listening but not in being active).
  • Pleasant look & feel. The interactive environment is integrated within the museum and it respects the visitors’ whole experience.

PointAt is considered a successful vanguard experiment in the field of museum didactics, and has been functioning since 2004.

TANGerINE Tales. Multi-role digital storymaking natural interface

TANGerINE Tales is a solution for multi-role digital storymaking based on the TANGerINE platform. The goal is to create a digital interactive system for children that stimulates collaboration between users. The expected results concern educational psychology in terms of respect for roles and the development of literacy and narrative skills.

Testing TANGerINE Tales

TANGerINE Tales lets children create and tell stories by combining landscapes and characters of their own choosing. Initially, children select the elements that will be part of the game and explore the environment within which they will create their story. After that they have the chance to record their voices and the dynamics of the game. Finally, they are able to replay the self-made story on the interactive table.

The interaction between the system and users is performed through the tangible interface TANGerINE, consisting of two smart cubes (one for each child) and an interactive table. Users interact with the system by manipulating the cubes, which send data to the computer via a Bluetooth connection.
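As a rough sketch of the receiving side, assuming each cube is paired as a Bluetooth serial port and emits one comma-separated reading per line (the packet format here is an illustrative assumption, not the actual TANGerINE protocol):

```python
# A minimal sketch of receiving cube sensor data, assuming each smart cube
# is paired as a Bluetooth serial port (RFCOMM) and emits one CSV line per
# reading. The packet format is an illustrative assumption.

import serial  # pyserial

cube = serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=1.0)

while True:
    line = cube.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # Assumed packet layout: cube_id, accel_x, accel_y, accel_z
    cube_id, ax, ay, az = line.split(",")
    print(f"cube {cube_id}: accel=({ax}, {ay}, {az})")
```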

The main assumption is that interaction takes place through collaboration between two children with different roles: one actively interacts to control the actions of the main character of the story, while the other controls the environmental events in response to the character’s movements and actions.

The target users of TANGerINE Tales are 7–8 year olds attending the third year of elementary school. This choice was made following research studies on psychological methods for collaborative learning, on Human Computer Interaction and on tangible interfaces; we exploited guidelines for learning supported by technological tools (computers, cell phones, tablet PCs, etc.) and those drawn from storytelling projects for children.

You can see pictures of the interface on the MICC Flickr account!

Multi-user interactive table for neurocognitive and neuromotor rehabilitation

This project concerns the design and development of a multi-touch system that provides innovative tools for the neurocognitive and neuromotor rehabilitation of senile diseases. The project comes to life thanks to the collaboration between MICC, the Faculty of Psychology (University of Florence) and Montedomini A.S.P., a public agency for self-sufficient and disabled elders that offers welfare and health care services.

A session of rehabilitation at Montedomini

The idea behind this project is to apply high-tech interactive devices to the standard medical procedures used to rehabilitate patients with neurocognitive and neuromotor deficits. This new approach can offer new rehabilitative paths based on digital training activities, which constitutes an advance over the conventional “pen and paper” approach.

Natural surface for neurocognitive and neuromotor rehabilitation

Such digital exercises will focus on:

  • attention
  • memory
  • perceptual disturbances
  • visuospatial disturbances
  • difficulties in executive functions

These new training tools, based on interactive tables, will be able to increase the stimulation of patients’ neuroplastic abilities. Our new rehabilitative paths will in fact provide:

  • audio-visual feedback for performance monitoring;
  • different degrees of difficulty that can be graduated by the medical staff for each individual patient through several parameters (e.g. response speed, exposure time of a stimulus, spatial distribution of stimuli, sensory channels involved, audiovisual tasks, number of stimuli to control and so on).

Innovative interactive surfaces will support the manipulation of digital contents on medium-to-large screens, letting patients and medical trainers interact through natural gestures to select, drag and zoom graphic objects. The interactive system will also be able to measure users’ activities, storing the results of every rehabilitative session: in this way it is possible to build a personal profile for every patient. Moreover, thanks to the collaborative nature of the system, we will introduce new training modalities that involve medical trainers and patients at the same time.
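As an illustration, the Python sketch below models how the graduated difficulty parameters and per-patient session results could be represented; all field names are assumptions, not the actual system’s data model:

```python
# A minimal sketch of exercise difficulty parameters and per-patient
# session logging; every field name here is an illustrative assumption.

from dataclasses import dataclass, field, asdict
import json
from typing import List

@dataclass
class ExerciseSettings:
    response_speed: float = 1.0       # multiplier set by the medical staff
    stimulus_exposure_ms: int = 1500  # how long each stimulus stays visible
    n_stimuli: int = 5                # number of stimuli the patient controls

@dataclass
class SessionResult:
    patient_id: str
    exercise: str
    correct: int
    errors: int
    mean_reaction_ms: float

@dataclass
class PatientProfile:
    patient_id: str
    sessions: List[SessionResult] = field(default_factory=list)

    def record(self, result: SessionResult) -> None:
        self.sessions.append(result)

    def save(self, path: str) -> None:
        # Persist the profile so results accumulate across sessions.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

profile = PatientProfile("patient-042")
profile.record(SessionResult("patient-042", "attention", 18, 2, 910.0))
profile.save("patient-042.json")
```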

TANGerINE Grape

TANGerINE Grape is a collaborative knowledge sharing system that can be used through natural and tangible interfaces. The final goal is to enable users to enrich their knowledge through the attainment of information both from digital libraries and from the knowledge shared by other users involved in the same interaction session.

TANGerINE Grape

TANGerINE Grape is a collaborative tangible multi-user interface that allows users to perform semantic-based content retrieval. Multimedia contents are organized through knowledge management structures (i.e. ontologies), and the interface allows multi-user interaction with them through different input devices, both in co-located and remote environments.

TANGerINE Grape enables users to enrich their knowledge by obtaining information both from an automatic informative system and from the knowledge shared by the other users involved: compared to a web-based interface, our system enables collaborative face-to-face interaction alongside standard remote collaboration. Users are in fact allowed to interact with the system through different kinds of input devices, both in co-located and remote situations. In this way users enrich their knowledge even through comparison with the other users involved in the same interaction session: they can share choices, results and comments. Face-to-face collaboration also has a ‘social’ value: co-located people involved in similar tasks improve their reciprocal personal and professional knowledge in terms of skills, culture, nature, interests and so on.
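As a rough illustration of ontology-backed retrieval, the sketch below runs a SPARQL query over a small video ontology with rdflib; the ontology file, namespace and property names are illustrative assumptions, not the actual VIDI-Video or IM3I schema:

```python
# A minimal sketch of semantic retrieval over an ontology with rdflib.
# The ontology file and the ex: vocabulary are illustrative assumptions.

from rdflib import Graph

g = Graph()
g.parse("video_ontology.ttl", format="turtle")

query = """
PREFIX ex: <http://example.org/video#>
SELECT ?video ?title WHERE {
    ?video a ex:Video ;
           ex:title ?title ;
           ex:depicts ex:Airplane .
}
"""

# Each row pairs a video resource with its title, retrieved by concept
# rather than by keyword matching.
for video, title in g.query(query):
    print(video, title)
```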

As a use case we initially exploited the VIDI-Video project and then, to provide faster response times and more advanced search possibilities, the IM3I project, enhancing access to video contents by using its semantic search engine.

This project has been an important case study for the application of natural and tangible interaction research to the access to video content organized in semantic-based structures.

Multi-user environment for semantic search of multimedia contents

This research project exploits new technologies (a multi-touch table and an iPhone) in order to develop a multi-user, multi-role and multi-modal system for multimedia content search, annotation and organization. As a use case we considered the field of broadcast journalism, where editors and archivists work together in creating a film report using archive footage.

Multi-user environment for semantic search of multimedia contents

The idea behind this work-in-progress project is to create a multi-touch system that allows one or more users to search multimedia content, especially video, exploiting an ontology-based structure for knowledge management. The system exploits a collaborative multi-role, multi-user and multi-modal interaction of two users performing different tasks within the application.

The first user plays the role of an archivist: by inserting a keyword through the iPhone, he is able to search and select data through an ontology-structured interface designed ad hoc for the multi-touch table. At this stage the user can organize the results in folders and subfolders: the iPhone is therefore used as a device for text input and folder storage.

The other user performs the role of an editor: he receives the results of the search carried out by the archivist through the system or the iPhone. This user examines the retrieved videos and selects those that are most suitable for the final result, estimating how appropriate each video is for his purposes (an assessment for the current work session) and giving his opinion on the general quality of the video (a subjective assessment that can also influence future searches). In addition, the user also plays the role of an annotator: he can add more tags to a video if he considers them necessary to retrieve that content in future searches.
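As an illustration of how the two roles could talk to the table, the sketch below sends role-tagged JSON messages over a TCP socket; the message fields, host name and transport choice are assumptions, not the actual system protocol:

```python
# A minimal sketch of the messages the iPhone and the table could exchange;
# the fields and the transport (newline-delimited JSON over TCP) are
# illustrative assumptions.

import json
import socket

def send_message(sock: socket.socket, message: dict) -> None:
    """Serialize one message as a newline-terminated JSON record."""
    sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

# The archivist submits a keyword search from the iPhone...
search = {"role": "archivist", "action": "search", "keyword": "airplane"}

# ...and the editor later rates a result and adds annotation tags.
rating = {"role": "editor", "action": "rate", "video_id": "v-17",
          "fit_for_session": 4, "quality": 3, "tags": ["takeoff", "runway"]}

with socket.create_connection(("tabletop.local", 9000)) as sock:  # assumed host
    send_message(sock, search)
    send_message(sock, rating)
```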

CocoNUIT

This project aims to realize a lightweight, flexible and extensible Cocoa framework for creating multitouch and, more generally, tangible apps. It implements basic gesture recognition and offers each user the possibility to define and set up their own gestures easily. Because of its nature, we hope this framework will work well with Quartz and Core Animation to realize fun and useful apps. It also offers a lot of off-the-shelf widgets, ready for quickly building your own NUI app.

CocoNUIT: Cocoa Natural User Interface & Tangible

The growing interest in multitouch technologies, and moreover in tangible user interfaces, has been pushed forward by the development of system libraries designed with the aim of making it easier to implement graphical NHCI interfaces. More and more commercial frameworks are becoming available, and even the open source community is increasingly interested in this field. Many of these projects present similarities, each one with its own limits and strengths: SparshUI, pyMT and Cocoa Multi-touch Framework are only some examples.

When it comes to the evaluation of an NHCI framework, there are several attributes that have to be taken into account. One of the major requirements is input device independence; a close second is flexibility towards the underlying technology that makes it possible to understand the different kinds of interaction, thus making the framework independent of variations in the computer vision engine. The results of the elaboration must then be displayed through a user interface that offers high graphical throughput in order to meet the requirements described for an NHCI environment.

None of the available open source frameworks fully met the requirements defined for the project, thus leading to the development of a complete framework from scratch: CocoNUIT, the Cocoa Natural User Interface & Tangible framework. The framework is designed to be lightweight, flexible and extensible; based on Cocoa, it helps in the development of multitouch and tangible applications. It implements gesture recognition and lets developers define and set up their own sets of new gestures. The framework was built on top of the Cocoa technology in order to take advantage of Mac OS X accelerated graphical libraries for drawing and animation, such as Quartz 2D and Core Animation.

The CocoNUIT framework is divided in three basic modules:

  • event management
  • multitouch interface
  • gesture recognition

From a high-level point of view, the computer vision engine sends all the interaction events performed by users to the framework. These events, or messages, are then dispatched to each graphical object, or layer, present on the interface. Each layer can understand whether a touch is related to itself simply by evaluating whether the touch position coordinates belong to the layer area: in this case the layer activates the recognition procedures and, if a gesture gives a positive match, the view is updated accordingly. It is clear that such a design takes software modularity into account: it is in fact easy to replace or add new input devices, or to extend the gesture recognition engine simply by adding new ad-hoc gesture classes.
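The dispatch logic just described can be summarized in a short language-agnostic sketch (written here in Python, while the real framework is Objective-C/Cocoa); the class and method names are illustrative, not CocoNUIT’s actual API:

```python
# A sketch of the dispatch described above: each touch event is offered to
# every layer, a layer claims touches that fall inside its area, and
# claimed touches feed that layer's gesture recognizers.

from dataclasses import dataclass
from typing import List

@dataclass
class Touch:
    x: float
    y: float

class GestureRecognizer:
    def match(self, t: Touch) -> bool:
        return False  # concrete recognizers override this test

    def update_view(self, layer: "Layer", t: Touch) -> None:
        pass          # concrete recognizers redraw the layer here

class Layer:
    def __init__(self, x: float, y: float, w: float, h: float):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.recognizers: List[GestureRecognizer] = []

    def hit_test(self, t: Touch) -> bool:
        # The touch belongs to this layer if it falls inside its area.
        return (self.x <= t.x <= self.x + self.w
                and self.y <= t.y <= self.y + self.h)

    def handle(self, t: Touch) -> None:
        for r in self.recognizers:
            if r.match(t):
                r.update_view(self, t)

def dispatch(touches: List[Touch], layers: List[Layer]) -> None:
    # Events from the vision engine are offered to every layer on screen.
    for t in touches:
        for layer in layers:
            if layer.hit_test(t):
                layer.handle(t)
```

Swapping the vision engine, or adding a new gesture, only means feeding `dispatch` from a different source or appending a new recognizer to a layer, which mirrors the modularity claim above.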

TANGerINE Cities

TANGerINE cities is a research project that investigates collaborative tangible applications, carried out within the TANGerINE research project. It is an ongoing research effort on TUIs (tangible user interfaces), combining previous experiences with natural vision-based gestural interaction on augmented surfaces and tabletops with the introduction of smart wireless objects and sensor fusion techniques.

TANGerINE Cities

Unlike the passive recognized objects common in mixed and augmented reality approaches, smart objects provide continuous data about their status through embedded wireless sensors, while an external computer vision module tracks their position and orientation in space. By merging the sensing data, the system is able to detect a richer language of gestures and manipulations, both on the tabletop and in its surroundings, enabling a more expressive interaction language across different scenarios.
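As a rough illustration of this sensor-fusion idea, the sketch below merges a vision-based pose estimate with the cube’s accelerometer reading to distinguish gestures; the fields and the shake threshold are assumptions, not the actual TANGerINE fusion logic:

```python
# A minimal sketch of sensor fusion: vision contributes position and an
# on-table flag, the cube's accelerometer contributes motion intensity,
# and merging both distinguishes otherwise identical gestures.

from dataclasses import dataclass

@dataclass
class VisionSample:
    x: float            # tabletop coordinates from the vision module
    y: float
    on_table: bool      # whether the cube currently rests on the surface

@dataclass
class CubeSample:
    accel_magnitude: float  # from the embedded accelerometer, in g

def classify(vision: VisionSample, cube: CubeSample) -> str:
    shaking = cube.accel_magnitude > 1.8  # assumed shake threshold
    if shaking:
        # The same accelerometer pattern means different gestures
        # depending on where the vision module sees the cube.
        return "shake-on-table" if vision.on_table else "shake-in-air"
    return "idle"

print(classify(VisionSample(0.4, 0.6, True), CubeSample(2.1)))
```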

Users are able to interact with the system and the objects in different contexts: the active presentation area (like the surface of the table) and the nearby area (around the table).

Presented at Frontiers of Interaction V (Rome, June 2009).

TANGerINE cities concept

TANGerINE cities lets users choose and elaborate the sounds that characterize today’s cities. The TANGerINE cube collects sound fragments of the present and reassembles them in order to create harmonic sounds for the future. TANGerINE cities is a means of collective sound creation: a glimpse into the sound world of future cities. It imagines a future where technological development will have helped reduce metropolitan acoustic pollution, transforming all noises into a harmonic soundscape. The collaborative nature of the TANGerINE table lets users compare their ideas face-to-face as they forecast how the noises of future cities will sound. TANGerINE cities can use noises uploaded to the web by users who have recorded the noises of their own sound worlds. The TANGerINE platform therefore provides a real, tangible location within virtual social networks.

TANGerINE cities concept

Shawbak

A technology transfer project realized for the international exhibition From Petra to Shawbak: archeology of a frontier. A multi-touch tabletop was realized for this exhibition, which presents the results of the latest international archaeological investigations and of the research conducted over the past twenty years by the archaeological mission of the University of Florence in Jordan, at the sites of Petra and Shawbak, one of the most important historical areas in the world.

Natural interface realized for the international exhibition "From Petra to Shawbak"

Since 2006, the Shawbak site has been the object of an innovative international Italian-Jordanian agreement of scientific and cultural cooperation between the Department of Antiquities of Jordan and the University of Florence, which combines archaeological research, conservative restoration and valorisation.

Planning the exhibition offered the opportunity to experiment with and rework the latest practices of exhibition communication, established in Anglo-Saxon countries and, to date, unseen in Italian archaeology exhibitions; the museological design, the approach to exhibition communication and the strategy conceived for visitor learning are all totally innovative.

The exhibition itinerary has been conceived in three sections: 1) the discovery of an authentic capital that reinterprets the Crusader presence of the Seigniory of Transjordan and begins a succession that crosses the dynasty of Saladin and reaches the present day; 2) the documentation of the diverse roles performed by the frontier as a historical key of interpretation: from the ancient age (Nabataean, Roman, Byzantine) and the Arab-Islamic age (Umayyad, Abbasid, Fatimid) up to the Crusader-Ayyubid and Mamluk ages, explored through the archaeological observatory of the region and of the sites of Petra and Shawbak; 3) the collection and “publication” of visitors’ comments.

The interface design was built on an initial definition of the information architecture, based on the contents that the archaeological research unit intended to deliver during the exhibition.

It immediately appeared evident that all the available contents were related to two different dimensions: the time period and the level of detail.

The time span across which the fortress was studied is roughly divided into five parts:

  • 2nd crusade, “The coming of the Crusaders”;
  • 3rd crusade, “Rise and fall of the Crusaders”;
  • Ayyubid, “The Ayyubid conquest”;
  • Mamluk, “The rise of Mamluks”;
  • Ottoman, “The Ottoman expansion”.

The levels of resolution, or zoom detail, through which the territory can be explored are five as well: the “Transjordan” region, the “Shawbak” castle, “The fortified gate”, the “Masonries” elevations, and “Stones”.

The contents are made of videos, pictures and texts that show and explain the archaeological site for each of the described time spans and zoom levels.
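As an illustration of this two-dimensional organization, the sketch below indexes content items by a (time period, zoom level) pair; the periods and levels come from the lists above, while the sample media entry is invented for illustration:

```python
# A minimal sketch of the time-period x zoom-level content structure:
# every media item is addressed by a (period, level) pair.

PERIODS = ["2nd crusade", "3rd crusade", "Ayyubid", "Mamluk", "Ottoman"]
LEVELS = ["Transjordan", "Shawbak", "The fortified gate",
          "Masonries", "Stones"]

# content[(period, level)] -> list of media items (videos, pictures, texts);
# the single entry below is an invented example.
content: dict[tuple[str, str], list[str]] = {
    ("Ayyubid", "Shawbak"): ["ayyubid_castle_overview.mp4"],
}

def items_for(period: str, level: str) -> list[str]:
    """Return the media shown when the tabletop is at this cell of the grid."""
    assert period in PERIODS and level in LEVELS
    return content.get((period, level), [])

print(items_for("Ayyubid", "Shawbak"))
```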