TANGerINE Tales is a solution for multi-role digital storymaking based on the TANGerINE platform. The goal is to create a digital interactive system for children that stimulates collaboration between users. The results concern educational psychology in terms of respect for roles and the development of literacy and narrative skills.
Testing Tangerine Tales
TANGerINE Tales lets children create and tell stories by combining landscapes and characters that they choose themselves. Initially, children select the elements that will be part of the game and explore the environment within which they will create their own story. After that, they have the chance to record their voice and the dynamics of the game. Finally, they are able to replay the self-made story on the interactive table.
The interaction between the system and users is performed through the tangible interface TANGerINE, consisting of two smart cubes (one for each child) and an interactive table. Users interact with the system through the manipulation of cubes that send data to the computer via a Bluetooth connection.
The main assumption is that the interaction takes place through the collaboration of two children who have different roles: one actively controls the actions of the main character of the story, while the other controls environmental events in response to the character's movements and actions.
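The two-role scheme described above can be sketched as a small event loop that routes each cube's gestures to its role and records everything for the final replay. This is an illustrative sketch only; the class, event, and gesture names are hypothetical and do not reflect the actual TANGerINE API.

```python
# Hypothetical sketch of the two-role interaction: cube 1 drives the main
# character, cube 2 drives environmental events; every action is logged
# so the self-made story can be replayed on the interactive table.

class StorySession:
    def __init__(self):
        self.log = []  # recorded game dynamics, replayed later

    def on_cube_event(self, cube_id, gesture):
        role = "character" if cube_id == 1 else "environment"
        action = (role, gesture)
        self.log.append(action)  # record for the replay phase
        return action

session = StorySession()
session.on_cube_event(1, "tilt-left")  # child A moves the character
session.on_cube_event(2, "shake")      # child B triggers an environmental event
```

The point of the sketch is that collaboration is enforced structurally: each cube can only affect its own layer of the story, so a complete tale requires both children to act.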
The target users of TANGerINE Tales are 7-8 year olds attending the third year of elementary school. This choice was made following research studies on psychological methods for collaborative learning, on Human Computer Interaction and on tangible interfaces; we exploited the guidelines for learning supported by technological tools (computers, cell phones, tablet PCs, etc.) and those drawn from storytelling projects for children.
This project concerns the design and development of a multi-touch system that provides innovative tools for the neurocognitive and neuromotor rehabilitation of patients with age-related diseases. The project comes to life thanks to the collaboration between MICC, the Faculty of Psychology (University of Florence) and Montedomini A.S.P., a public agency for self-sufficient and disabled elders that offers welfare and health care services.
A session of rehabilitation at Montedomini
The idea behind this project is to apply high-tech interactive devices to the standard medical procedures used to rehabilitate elderly patients with neurocognitive and neuromotor deficits. This new approach can offer new rehabilitative paths based on digital training activities, marking an advance over the conventional “pen and paper” approach.
Natural surface for neurocognitive and neuromotor rehabilitation
Such digital exercises will focus on difficulties in executive functions.
These new training tools, based on interactive tables, will be able to increase the stimulation of the patients' neuroplastic abilities. Our new rehabilitative paths, in fact, will provide:
audio-visual feedback for performance monitoring;
different difficulty degrees that can be tuned by the medical staff for each individual patient through several parameters (e.g. response speed, exposure time of a stimulus, spatial distribution of stimuli, sensory channels involved, audiovisual tasks, number of stimuli to control and so on).
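The parameters listed above can be thought of as a per-patient exercise configuration that the medical staff adjusts between sessions. The following is a minimal sketch under that assumption; the field names and the grading rule are illustrative, not part of the actual system.

```python
# Hypothetical per-patient difficulty configuration; field names and the
# grading step are illustrative assumptions, not the project's actual code.
from dataclasses import dataclass

@dataclass
class ExerciseConfig:
    response_speed_ms: int = 2000      # maximum time allowed to respond
    stimulus_exposure_ms: int = 1500   # how long each stimulus stays visible
    stimulus_count: int = 4            # number of stimuli to control
    spatial_spread: float = 0.5        # 0 = clustered, 1 = whole screen
    channels: tuple = ("visual",)      # sensory channels involved

    def harder(self):
        """Return a slightly harder variant, e.g. after a good session."""
        return ExerciseConfig(
            response_speed_ms=int(self.response_speed_ms * 0.9),
            stimulus_exposure_ms=int(self.stimulus_exposure_ms * 0.9),
            stimulus_count=self.stimulus_count + 1,
            spatial_spread=min(1.0, self.spatial_spread + 0.1),
            channels=self.channels,
        )
```

Keeping the difficulty degrees in one explicit structure makes each session reproducible and lets the staff grade exercises patient by patient rather than globally.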
Innovative interactive surfaces will support the manipulation of digital contents on medium-large screens, letting patients and medical trainers interact through natural gestures to select, drag and zoom graphic objects. The interactive system will also be able to measure the activities of users, storing the results of every rehabilitative session: in this way it is possible to provide a personal profile for every patient. Moreover, thanks to the collaborative nature of the system, we will introduce new training modalities which involve medical trainers and patients at the same time.
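Per-session storage feeding a personal profile could look like the following sketch. All names and the score metric are assumptions made for illustration; the actual system's data model is not described in this text.

```python
# Illustrative sketch (not the project's actual code) of accumulating
# per-session results into a personal profile for each patient.
from collections import defaultdict

class PatientProfiles:
    def __init__(self):
        self._sessions = defaultdict(list)  # patient_id -> list of sessions

    def record_session(self, patient_id, exercise, score, duration_s):
        """Store the measured outcome of one rehabilitative session."""
        self._sessions[patient_id].append(
            {"exercise": exercise, "score": score, "duration_s": duration_s}
        )

    def average_score(self, patient_id, exercise):
        """Average score for one exercise, or None if never performed."""
        scores = [s["score"] for s in self._sessions[patient_id]
                  if s["exercise"] == exercise]
        return sum(scores) / len(scores) if scores else None
```

Storing every session rather than only the latest result is what allows the staff to see a trend per patient and tune the difficulty parameters accordingly.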
This research project exploits new technologies (a multi-touch table and an iPhone) in order to develop a multi-user, multi-role and multi-modal system for multimedia content search, annotation and organization. As a use case we considered the field of broadcast journalism, where editors and archivists work together in creating a film report using archive footage.
Multi user environment for semantic search of multimedia contents
The idea behind this work-in-progress project is to create a multi-touch system that allows one or more users to search multimedia content, especially video, exploiting an ontology-based structure for knowledge management. The system exploits a collaborative multi-role, multi-user and multi-modal interaction of two users performing different tasks within the application.
The first user plays the role of an archivist: by inserting a keyword through the iPhone, he is able to search and select data through an ontologically structured interface designed ad hoc for the multi-touch table. At this stage the user can organize his results in folders and subfolders: the iPhone is therefore used as a device for text input and for folder storage.
The other user performs the role of an editor: he receives the results of the search carried out by the archivist through the system or the iPhone. This user examines the videos returned by the search and selects those that are most suitable for the final result, estimating how appropriate each video is for his purposes (an assessment for the current work session) and giving his opinion on the overall quality of the video (a subjective assessment that can also influence future searches). In addition, the user also plays the role of an annotator: he can add more tags to a video if he considers them necessary to retrieve that content in future searches.
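The editor's workflow above combines two distinct judgments (session relevance and subjective quality) plus free-form tagging. A minimal sketch of that data model follows; the class and attribute names are hypothetical, introduced only to make the separation of the two assessments concrete.

```python
# Hedged sketch of the editor's assessments; names are hypothetical.

class VideoAssessment:
    def __init__(self, video_id):
        self.video_id = video_id
        self.relevance = None  # fit for the current work session
        self.quality = None    # subjective quality, can bias future searches
        self.tags = set()      # extra annotations aiding future retrieval

    def rate(self, relevance, quality):
        self.relevance = relevance
        self.quality = quality

    def annotate(self, *tags):
        self.tags.update(tags)

a = VideoAssessment("clip-042")
a.rate(relevance=4, quality=3)          # per-session vs. subjective score
a.annotate("archive", "interview")      # annotator role: extra tags
```

Keeping the two scores separate matters: the relevance score is consumed only by the current work session, while the quality score and the tags persist and can re-rank or retrieve the same footage in later searches.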