MNEMOSYNE: Personalized Museum Visits Exploiting Interest Profiles


2012-2015
supported by: Regione Toscana
The amount of multimedia data that museums gather in their databases is growing fast, while the capacity to display this information to visitors on site remains limited.

Moreover, the information on display often targets the interests of the average visitor rather than the full spectrum of interests each individual visitor may have.

MNEMOSYNE attempts to address these issues through a new multimedia museum experience. The system builds a profile for each visitor and uses it to drive an interactive table that personalizes the delivery of the available multimedia content.

The MNEMOSYNE system is grounded on state-of-the-art person detection and re-identification technology to locate visitors in multiple camera views in the proximity of the artworks during a museum visit.

This makes it possible to build visitor profiles and to provide each visitor with a personalized selection of information about the artworks that attracted him or her the most.

Detection exploits:

  • a learned model of the scene geometry that provides a limited number of image windows with a high probability of containing a person (a minimal sketch of this window-generation scheme is given below);
  • a classifier that assesses the presence of a person in each of these windows.
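
The sketch below illustrates the general idea only, not the project's actual implementation: a calibration mapping from image row to expected person height (here the hypothetical `expected_height_px` dictionary) restricts the search to a few plausible windows before a person classifier is applied.

```python
def candidate_windows(expected_height_px, image_width, stride=32, aspect=0.4):
    """Generate person-sized windows only where the scene geometry allows.

    expected_height_px: hypothetical calibration mapping each image row
    (foot position) to the expected pixel height of a person standing there.
    """
    windows = []
    for foot_y, h in expected_height_px.items():
        w = int(aspect * h)              # rough pedestrian aspect ratio
        top = max(0, foot_y - h)
        for x in range(0, image_width - w, stride):
            windows.append((x, top, w, h))
    return windows                       # far fewer than exhaustive sliding windows


def detect_people(image, expected_height_px, classify):
    """Keep only the geometry-constrained windows the classifier accepts."""
    wins = candidate_windows(expected_height_px, image.shape[1])
    return [(x, y, w, h) for (x, y, w, h) in wins
            if classify(image[y:y + h, x:x + w]) > 0.5]
```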

Re-identification exploits:

  • an effective target representation that divides the person image into horizontal stripes in order to capture the vertical color distribution and the color correlation between adjacent stripes (a sketch of such a descriptor follows this list);
  • an efficient, high-performing re-identification method that properly ranks the detections against the visitors observed so far.
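
By way of illustration, here is a minimal sketch of a stripe-based descriptor of this kind; the number of stripes, the histogram binning and the use of Pearson correlation between adjacent stripes are assumptions, not necessarily the exact choices made in MNEMOSYNE.

```python
import numpy as np

def stripe_descriptor(person_rgb, n_stripes=8, bins=8):
    """Describe a person image by per-stripe color histograms plus the
    correlation between the histograms of adjacent stripes."""
    h = person_rgb.shape[0]
    edges = np.linspace(0, h, n_stripes + 1, dtype=int)
    hists = []
    for i in range(n_stripes):
        stripe = person_rgb[edges[i]:edges[i + 1]].reshape(-1, 3)
        hist, _ = np.histogramdd(stripe, bins=(bins, bins, bins),
                                 range=((0, 256),) * 3)
        hist = hist.ravel()
        hists.append(hist / (hist.sum() + 1e-8))   # vertical color distribution
    corr = [float(np.corrcoef(hists[i], hists[i + 1])[0, 1])
            for i in range(n_stripes - 1)]          # adjacent-stripe correlation
    return np.concatenate(hists + [np.array(corr)])
```

Descriptors of this kind can then be compared with a simple distance measure to rank candidate detections, which is the role of the re-identification method in the second bullet.
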
The interest profiling displayed by the artworks' green level

The MNEMOSYNE system operates in real time and provides personalized information to each visitor on a tabletop device at the exit of the Donatello hall in the Bargello National Museum in Florence.

The MNEMOSYNE prototype uses two different solutions to provide recommendations to the users: a knowledge-based and an experience-based system. These modules have been developed as web servlets that expose the recommendation web services through a Representational State Transfer (REST) interface.
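
As an illustration of how such services might be consumed, a minimal client sketch follows; the base URL, endpoint paths and JSON fields are hypothetical and do not reflect the actual MNEMOSYNE API.

```python
import requests

BASE = "http://localhost:8080/mnemosyne"   # hypothetical servlet container

def get_recommendations(visitor_id, interest_profile):
    """Query the (hypothetical) knowledge-based and experience-based
    recommendation services over their REST interface."""
    payload = {"visitor": visitor_id, "profile": interest_profile}
    knowledge = requests.post(f"{BASE}/recommend/knowledge",
                              json=payload, timeout=5).json()
    experience = requests.post(f"{BASE}/recommend/experience",
                               json=payload, timeout=5).json()
    return {"knowledge": knowledge, "experience": experience}
```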

The tabletop display is a large touchscreen device (a 55-inch Full HD display) placed horizontally on a customized table-shaped structure. When the passive profiling system detects a visitor approaching the table, it sends the interest profile to the user interface software, which then exchanges data with the recommendation system in order to load all the multimedia content that will be displayed to this user.
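
Schematically, this handoff could look like the sketch below, where the interest-profile format (artwork identifiers mapped to interest scores), the recommender callable and the ui object are all invented for illustration.

```python
# Hypothetical interest profile produced by the passive profiling system:
# artwork identifier -> interest score accumulated during the visit.
interest_profile = {
    "david_bronze": 0.92,
    "marzocco": 0.35,
    "attis": 0.71,
}

def on_visitor_approach(visitor_id, interest_profile, recommender, ui):
    """When the profiling system detects a visitor near the table, hand the
    profile to the UI, which fetches and displays the personalized content."""
    content = recommender(visitor_id, interest_profile)   # e.g. get_recommendations
    ranked = sorted(interest_profile, key=interest_profile.get, reverse=True)
    ui.load(ranked, content)                               # hypothetical UI call
```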

The tabletop device in the Donatello Hall shows the visitor's information

The metaphor proposed for the user interface is based on the idea of a hidden museum waiting to be unveiled, starting from the top (the physical artworks) and moving deeper towards additional resources such as explanations and relations between one artwork and the others.

The proposed metaphor aims to hide the complexity of the data produced by the recommendation and passive profiling systems, letting users choose the content to consume and interact with through a small set of simple actions.

The artworks level visualizes digital representations of the physical artworks for which the visitor has shown the highest interest, based on the data produced by the passive profiling system. When the user touches an artwork item, a vertical animation moves the point of view below the current space and reveals the level containing the resources related to that artwork.

The related resources level is a horizontally arranged space in which the visitor can navigate the multimedia content related to the selected artwork. Related resources are organized in three different spaces (insights, recommendations and social), explained as follows (a schematic sketch of this organization is given after the list):

  • insights: stories directly related to the artwork in the ontology;
  • recommendations: resources related to the artwork and its related stories in the ontology according to the knowledge-based recommendation system;
  • social: similar artworks according to the experience-based recommendation system, which uses the visitor profile.
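
As a rough sketch of how this organization could be represented, assuming illustrative field names rather than the system's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RelatedResources:
    """Content shown for one selected artwork, split into the three spaces."""
    insights: List[str] = field(default_factory=list)         # stories from the ontology
    recommendations: List[str] = field(default_factory=list)  # knowledge-based suggestions
    social: List[str] = field(default_factory=list)           # similar artworks (experience-based)
```
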
PUBLICATIONS:

  • S. Karaman, A. D. Bagdanov, G. D'Amico, L. Landucci, A. Ferracani, D. Pezzatini, and A. Del Bimbo, "Passive Profiling and Natural Interaction Metaphors for Personalized Multimedia Museum Experiences," in MM4CH'13 – New Trends in Image Analysis and Processing – ICIAP 2013, Springer, pp. 247-256, Naples, Italy, 2013. Oral presentation.
  • S. Karaman and A. D. Bagdanov, "Identity Inference: Generalizing Person Re-identification Scenarios," in Computer Vision – ECCV 2012. Workshops and Demonstrations, A. Fusiello, V. Murino, and R. Cucchiara, Eds., Springer Berlin Heidelberg, vol. 7583, pp. 443-452, Firenze, Italy, 2012. Oral presentation. Best Paper Award.
  • S. Karaman, G. Lisanti, A. D. Bagdanov, and A. Del Bimbo, "From Re-identification to Identity Inference: Labeling Consistency by Local Similarity Constraints," in Person Re-Identification, S. Gong, M. Cristani, S. Yan, and C. C. Loy, Eds., Springer London, pp. 287-307, 2014.
  • S. Karaman, G. Lisanti, A. D. Bagdanov, and A. Del Bimbo, "Leveraging local neighborhood topology for large scale person re-identification," Pattern Recognition, vol. 47, iss. 12, pp. 3767-3778, 2014.
  • A. D. Bagdanov, A. Del Bimbo, D. Di Fina, S. Karaman, G. Lisanti, and I. Masi, "Multi-Target Data Association using Sparse Reconstruction," in Proc. of International Conference on Image Analysis and Processing (ICIAP), pp. 239-248, Naples, Italy, 2013. Poster.
  • F. Bartoli, G. Lisanti, S. Karaman, A. D. Bagdanov, and A. Del Bimbo, "Unsupervised scene adaptation for faster multi-scale pedestrian detection," in 22nd International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, 2014. Oral presentation.
  • S. Karaman, A. Bagdanov, L. Landucci, G. D'Amico, A. Ferracani, D. Pezzatini, and A. Del Bimbo, "Personalized multimedia content delivery on an interactive table by passive observation of museum visitors," Multimedia Tools and Applications, pp. 1-25, 2014.
  • F. Bartoli, G. Lisanti, L. Seidenari, S. Karaman, and A. Del Bimbo, "MuseumVisitors: a dataset for pedestrian and group detection, gaze estimation and behavior understanding," in Proc. of CVPR International Workshop on Group and Crowd Behavior Analysis and Understanding, Boston, USA, 2015.

VIDEO: https://vimeo.com/132819029

LOCATION:
The MNEMOSYNE system was installed at the Bargello National Museum in Florence in February 2015.
https://www.google.com/maps/place/Museo+Nazionale+del+Bargello/@43.7703981,11.2580078,15z/data=!4m2!3m1!1s0x0:0x213c9b3a845e25ec?sa=X&ved=0ahUKEwjhv5eK0b_JAhWLWRQKHZNwBu0Q_BIIZjAK