Tag Archives: multi-touch

TANGerINE Tales: a multi-role natural interface for digital storymaking

TANGerINE Tales is a solution for multi-role digital storymaking based on the TANGerINE platform. The goal is to create an interactive digital system for children that stimulates collaboration between users. The result concerns educational psychology in terms of respect for roles and the development of literacy and narrative skills.

Testing Tangerine Tales

TANGerINE Tales lets children create and tell stories by combining landscapes and characters of their own choosing. Initially, children select the elements that will be part of the game and explore the environment within which they will create their own story. They then have the chance to record their voice and the dynamics of the game. Finally, they can replay the self-made story on the interactive table.

The interaction between the system and users takes place through the tangible interface TANGerINE, consisting of two smart cubes (one for each child) and an interactive table. Users interact with the system by manipulating the cubes, which send data to the computer over a Bluetooth connection.
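The cube-to-computer link can be pictured as a stream of small messages that the application decodes into events. The following sketch is purely illustrative: the actual TANGerINE message format is not documented here, so the `cube_id;face;gesture` layout and the field names are assumptions.

```python
# Hypothetical sketch of decoding a cube message received over Bluetooth.
# The "cube_id;face;gesture" text format is an assumption for illustration,
# not the real TANGerINE protocol.

def parse_cube_event(message: str) -> dict:
    """Split a 'cube_id;face;gesture' message into an event dictionary."""
    cube_id, face, gesture = message.strip().split(";")
    return {"cube": int(cube_id), "face": int(face), "gesture": gesture}

# Example: cube 1, face 3 up, being shaken by a child.
event = parse_cube_event("1;3;shake")
```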

The main assumption is that interaction takes place through collaboration between two children with different roles: one actively controls the actions of the story's main character, while the other controls the environmental events in response to the character's movements and actions.

The target users of TANGerINE Tales are 7-8 year olds attending the third year of elementary school. This choice was made following research studies on psychological methods for collaborative learning, on human-computer interaction and on tangible interfaces; we exploited the guidelines for learning supported by technological tools (computers, cell phones, tablet PCs, etc.) and those extrapolated from storytelling projects for children.

You can see pictures of the interface on the MICC Flickr account!

Multi-user interactive table for neurocognitive and neuromotor rehabilitation

This project concerns the design and development of a multi-touch system that provides innovative tools for neurocognitive and neuromotor rehabilitation for senile diseases. The project comes to life thanks to the collaboration between MICC, the Faculty of Psychology (University of Florence) and Montedomini A.S.P., a public agency for self-sufficient and disabled elders that offers welfare and health care services.

A session of rehabilitation at Montedomini

The idea behind this project is to apply high-tech interactive devices to the standard medical procedures used to rehabilitate patients with neurocognitive and neuromotor deficits. This new approach can offer new rehabilitative paths based on digital training activities, an advance over the conventional “pen and paper” approach.

Natural surface for neurocognitive and neuromotor rehabilitation

Such digital exercises will focus on:

  • attention
  • memory
  • perceptual disturbances
  • visuospatial disturbances
  • difficulties in executive functions

These new training tools based on interactive tables will be able to increase the stimulation of patients' neuroplastic abilities. Our new rehabilitative paths will in fact provide:

  • audio-visual feedback for performance monitoring;
  • different difficulty degrees that the medical staff can graduate for each individual patient through several parameters (e.g. response speed, exposure time of a stimulus, spatial distribution of stimuli, sensory channels involved, audiovisual tasks, number of stimuli to control, and so on).
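The tunable parameters listed above can be gathered into a per-patient exercise configuration that the medical staff graduates session by session. This is a minimal sketch; the class and field names are assumptions for illustration, not part of the actual system.

```python
# Illustrative sketch (names assumed): the difficulty parameters from the
# list above, bundled so that staff can tune them per patient.
from dataclasses import dataclass

@dataclass
class ExerciseConfig:
    response_speed: float     # maximum allowed response time, in seconds
    stimulus_exposure: float  # how long each stimulus stays on screen, in seconds
    n_stimuli: int            # number of stimuli the patient must control
    spatial_spread: float     # 0 = clustered stimuli, 1 = spread over the table
    audio_enabled: bool       # whether the auditory channel is involved

    def harder(self) -> "ExerciseConfig":
        """Return a slightly harder variant of this configuration."""
        return ExerciseConfig(
            response_speed=self.response_speed * 0.9,
            stimulus_exposure=self.stimulus_exposure * 0.9,
            n_stimuli=self.n_stimuli + 1,
            spatial_spread=min(1.0, self.spatial_spread + 0.1),
            audio_enabled=self.audio_enabled,
        )
```

A therapist could start from a gentle baseline and call `harder()` between sessions as the patient improves.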

Innovative interactive surfaces will support the manipulation of digital contents on medium-to-large screens, letting patients and medical trainers interact through natural gestures to select, drag and zoom graphic objects. The interactive system will also be able to measure users' activities, storing the results of every rehabilitative session: in this way it is possible to build a personal profile for every patient. Moreover, thanks to the collaborative nature of the system, we will introduce new training modalities that involve medical trainers and patients at the same time.
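The per-patient profile described above amounts to accumulating session results and summarizing them. A minimal sketch, assuming a single numeric score per session (the real system would store richer measurements):

```python
# Minimal sketch (structure assumed): storing each rehabilitative session's
# result so that a personal profile can be derived per patient.
from collections import defaultdict
from statistics import mean

class PatientProfiles:
    def __init__(self):
        self._sessions = defaultdict(list)  # patient id -> list of session scores

    def record_session(self, patient_id: str, score: float) -> None:
        self._sessions[patient_id].append(score)

    def profile(self, patient_id: str) -> dict:
        """Summarize a patient's history: session count, average, progress."""
        scores = self._sessions[patient_id]
        return {
            "sessions": len(scores),
            "average": mean(scores) if scores else None,
            "trend": scores[-1] - scores[0] if len(scores) > 1 else 0.0,
        }
```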

Multi-user environment for semantic search of multimedia contents

This research project exploits new technologies (a multi-touch table and the iPhone) to develop a multi-user, multi-role and multi-modal system for multimedia content search, annotation and organization. As a use case we considered the field of broadcast journalism, where editors and archivists work together to create a film report from archive footage.

Multi user environment for semantic search of multimedia contents

The idea behind this work-in-progress project is to create a multi-touch system that allows one or more users to search multimedia content, especially video, exploiting an ontology-based structure for knowledge management. The system exploits a collaborative multi-role, multi-user and multi-modal interaction of two users performing different tasks within the application.

The first user plays the role of an archivist: by inserting a keyword through the iPhone, he is able to search and select data through an ontologically structured interface designed ad hoc for the multi-touch table. At this stage the user can organize the results in folders and subfolders: the iPhone is therefore used as a device for text input and folder storage.

The other user performs the role of an editor: he receives the results of the search carried out by the archivist through the system or the iPhone. This user examines the video search results and selects those that are most suitable for the final result, estimating how appropriate each video is for his purposes (an assessment for the current work session) and giving his opinion on the quality of the video (a subjective assessment that can also influence future searches). In addition, the user also plays the role of an annotator: he can add tags to a video if he considers them necessary for retrieving that content in future searches.
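The editor's two assessments and the annotator's tags can be modelled as a small data structure attached to each search result. This is an illustrative sketch; class and field names are assumptions, not the project's actual data model.

```python
# Illustrative sketch (names assumed): a video search result carrying the
# editor's two assessments and the annotator's free tags.

class VideoResult:
    def __init__(self, video_id: str):
        self.video_id = video_id
        self.session_rating = None   # suitability for the current work session
        self.quality_rating = None   # subjective quality, reusable in future searches
        self.tags = set()            # extra tags added by the annotator role

    def assess(self, session_rating: int, quality_rating: int) -> None:
        """Record both kinds of assessment made by the editor."""
        self.session_rating = session_rating
        self.quality_rating = quality_rating

    def annotate(self, *tags: str) -> None:
        """Add tags that help retrieve this content in future searches."""
        self.tags.update(tags)
```

Separating the session-specific rating from the reusable quality rating is what lets one judgment stay local to the current report while the other feeds back into future searches.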

CocoNUIT

This project aims to realize a lightweight, flexible and extensible Cocoa framework for creating multi-touch and, more generally, tangible apps. It implements basic gesture recognition and lets each developer easily define and set up their own gestures. Because of its nature, we hope this framework will work well with Quartz and Core Animation to realize fun and useful apps. It also offers many off-the-shelf widgets, ready for quickly building your own NUI app.

CocoNUIT: Cocoa Natural User Interface & Tangible

The growing interest in multi-touch technologies and, even more, in tangible user interfaces has been pushed forward by the development of system libraries designed to make it easier to implement graphical NHCI interfaces. More and more commercial frameworks are becoming available, and even the open source community is increasingly interested in this field. Many of these projects present similarities, each one with its own limits and strengths: SparshUI, pyMT and the Cocoa Multi-touch Framework are only some examples.

When it comes to the evaluation of an NHCI framework, several attributes have to be taken into account. One of the major requirements is input device independence; immediately second comes flexibility towards the underlying technology that makes it possible to understand the different kinds of interaction, thus making the framework independent of variations in the computer vision engine. The results of the elaboration must then be displayed through a user interface that offers high graphical performance in order to meet the requirements described for an NHCI environment.

None of the available open source frameworks fully met the requirements defined for the project, thus leading to the development of a complete framework from scratch: CocoNUIT, the Cocoa Natural User Interface & Tangible. The framework is designed to be lightweight, flexible and extensible; based on Cocoa, it helps in the development of multi-touch and tangible applications. It implements gesture recognition and lets developers define and set up their own set of new gestures. The framework was built on top of the Cocoa technology in order to take advantage of Mac OS X accelerated graphical libraries for drawing and animation, such as Quartz 2D and Core Animation.

The CocoNUIT framework is divided in three basic modules:

  • event management
  • multitouch interface
  • gesture recognition

From a high-level point of view, the computer vision engine sends all the interaction events performed by users to the framework. These events, or messages, are then dispatched to each graphical object, or layer, present on the interface. Each layer can determine whether a touch is related to itself simply by evaluating whether the touch position coordinates fall inside the layer area: in this case the layer activates the recognition procedures and, if a gesture gives a positive match, the view is updated accordingly. Such a design clearly takes software modularity into account: it is easy to replace or add new input devices, or to extend the gesture recognition engine simply by adding new ad-hoc gesture classes.
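The dispatch loop described above (hit-test each layer, then hand the touch to the layers that contain it) can be sketched as follows. The class and method names are illustrative, not CocoNUIT's actual Cocoa API, and the "gesture recognition" step is reduced to recording the touch.

```python
# Minimal sketch of touch dispatch: each layer checks whether a touch falls
# inside its rectangular area before running its recognition procedures.
# Names are illustrative, not CocoNUIT's real API.

class Layer:
    def __init__(self, x: float, y: float, width: float, height: float):
        self.x, self.y, self.width, self.height = x, y, width, height
        self.touches = []  # touches this layer accepted

    def hit_test(self, tx: float, ty: float) -> bool:
        """True if the touch coordinates belong to this layer's area."""
        return (self.x <= tx <= self.x + self.width and
                self.y <= ty <= self.y + self.height)

    def handle(self, tx: float, ty: float) -> None:
        # In the real framework, gesture recognition would run here and,
        # on a positive match, the view would be updated.
        self.touches.append((tx, ty))

def dispatch(layers, tx, ty):
    """Send a touch event to every layer whose area contains it."""
    for layer in layers:
        if layer.hit_test(tx, ty):
            layer.handle(tx, ty)
```

Because the dispatcher only depends on the `hit_test`/`handle` interface, swapping the input device or adding new gesture classes does not change this loop, which is the modularity the paragraph above describes.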

Shawbak

A technology transfer project realized for the international exhibition From Petra to Shawbak: archeology of a frontier. A multi-touch tabletop was realized for this exhibition, which presents the results of the latest international archaeological investigations and of the research conducted by the archaeological mission of the University of Florence over the past twenty years in Jordan, at the sites of Petra and Shawbak, one of the most important historical areas in the world.

Natural interface realized for the international exhibition "From Petra to Shawbak"

Since 2006, the Shawbak site has been the object of an innovative international Italian-Jordanian agreement of scientific and cultural cooperation between the Department of Antiquities of Jordan and the University of Florence, which combines archaeological research, conservative restoration and valorisation.

Planning the exhibition offered the opportunity to experiment with and re-elaborate the latest practices of exhibition communication, defined in Anglo-Saxon countries and, to date, unseen in Italian archaeology exhibitions; the museological design, the approach to exhibition communication and the strategy for visitor learning are all totally innovative.

The exhibition itinerary has been conceived in three sections: 1) the discovery of an authentic capital that reinterprets the Crusader presence of the Seigniory of Transjordan, and begins a succession that crosses the dynasty of Saladin and reaches us; 2) the documentation of the diverse role performed by the frontier as a historical key of interpretation: from the ancient age (Nabataean, Roman, Byzantine), Arab-Islamic (Umayyad, Abbasid, Fatimid) up to the Crusader-Ayyubid and Mameluke ages, explored through the archaeological observatory of the region and of the sites of Petra and Shawbak; 3) the collection and “publication” of visitors’ comments.

The interface design was built on the initial definition of the Information Architecture, based on the contents that the archaeological research unit intended to deliver during the exhibition.

It appeared immediately evident that all the available contents were related to two different dimensions: the time period and the level of detail.

The time span over which the fortress was studied is roughly divided into five parts:

  • 2nd crusade, “The coming of the Crusaders”;
  • 3rd crusade, “Rise and fall of the Crusaders”;
  • Ayyubid, “The Ayyubid conquest”;
  • Mamluk, “The rise of Mamluks”;
  • Ottoman, “The Ottoman expansion”.

The levels of resolution, or zoom detail, through which the territory can be explored are likewise five: the “Transjordan” region, the “Shawbak” castle, “The fortified gate”, “Masonries” elevations, and “Stones”.

Contents are made of videos, pictures and texts that show and explain the archaeological site for each of the described time spans and zoom levels.
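The two-dimensional organization above amounts to indexing every media item by a (time period, zoom level) pair, so the tabletop interface can fetch the content for whatever cell the visitor selects. A minimal sketch, with illustrative identifiers taken from the lists above:

```python
# Sketch of the two-dimensional content grid: every item is keyed by a
# (time period, zoom level) pair. Identifiers mirror the lists above.

PERIODS = ["2nd crusade", "3rd crusade", "Ayyubid", "Mamluk", "Ottoman"]
LEVELS = ["Transjordan", "Shawbak", "Fortified gate", "Masonries", "Stones"]

contents = {}  # (period, level) -> list of media items (videos, pictures, texts)

def add_item(period: str, level: str, item: dict) -> None:
    """Register a media item in its cell of the period-by-level grid."""
    if period not in PERIODS or level not in LEVELS:
        raise ValueError("unknown period or zoom level")
    contents.setdefault((period, level), []).append(item)

def items_at(period: str, level: str) -> list:
    """Return the media items for the cell the visitor selected."""
    return contents.get((period, level), [])
```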