Computer Vision


Participative Assistive AI-powered Tools for Supporting Trustworthy Online Activity of Citizens and Debunking Disinformation

Funded by: Horizon Europe Programme
AI4Debunk is a four-year Innovation Action (IA) project, launched in January 2024 and funded by the European Union (EU) under the Horizon Europe Programme. The project brings together an interdisciplinary consortium of 13 partners from 8 countries around a shared mission: to support trustworthy online activities by providing citizens with human-centered AI-powered tools to fight disinformation. In the project, MICC will work on the definition and development of AI-based solutions for image and video deepfake detection.

View Project


Redefining the Future of Cultural Heritage, through a disruptive model of sustainability

Mar 2021 – Feb 2024
Funded by: the European Union’s Horizon 2020 programme
ReInHerit (Redefining the Future of Cultural Heritage, through a disruptive model of sustainability) aspires to create a model of sustainable heritage management that will foster a dynamic digital European network of heritage stakeholders.

View Project


Innovation for Data Elaboration in Heritage and Arts

Nov 2018 – Apr 2021
Funded by: MIUR – PON | CNR
IDEHA will create an open IT platform for Cultural Heritage that combines digital content from traditional repositories with information generated in real time by users or environmental sensors. The platform will aggregate, process and understand these data using new learning technologies in order to build services for different user profiles through specific multimodal applications.

View Project


Real-time video creation according to your emotions

Funded by: Regione Toscana | CYNNY S.p.A.
MORPHCAST is an innovative application developed by the SME CYNNY S.p.A. that personalises video content in real time based on the emotional state of the viewer. The system runs on both mobile and desktop platforms through a web browser. Profiling is done using information obtained only from the user’s face, such as age, gender, expressed emotions, arousal, valence, head position and 30+ other features. To obtain this information, a face analysis system was implemented, together with a full stack of computer vision and deep learning algorithms to extract the user’s pose and estimate demographic information. The aim of this project is to optimize this stack to run in JavaScript inside a browser.
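To illustrate the idea of driving content from affective signals, here is a minimal, purely hypothetical sketch (not the actual MORPHCAST implementation): per-frame emotion scores from a face analyser are mapped to a valence/arousal estimate, which then selects the next video branch. All names, coordinates, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of emotion-driven video branching.
# Not the MORPHCAST codebase: emotion labels, coordinates and
# thresholds here are illustrative assumptions only.

# Approximate (valence, arousal) coordinates for a few basic emotions,
# loosely inspired by the circumplex model of affect.
EMOTION_COORDS = {
    "happy":     (0.8,  0.5),
    "sad":       (-0.7, -0.4),
    "angry":     (-0.6,  0.7),
    "surprised": (0.3,  0.8),
    "neutral":   (0.0,  0.0),
}

def valence_arousal(scores):
    """Probability-weighted average of emotion coordinates.

    `scores` maps an emotion label to its detected probability
    for the current frame, e.g. {"happy": 0.9, "neutral": 0.1}.
    """
    total = sum(scores.values()) or 1.0
    v = sum(EMOTION_COORDS[e][0] * p for e, p in scores.items()) / total
    a = sum(EMOTION_COORDS[e][1] * p for e, p in scores.items()) / total
    return v, a

def pick_branch(scores):
    """Choose the next video segment from the viewer's affective state."""
    v, _ = valence_arousal(scores)
    if v > 0.2:
        return "upbeat"
    if v < -0.2:
        return "calming"
    return "neutral"
```

In a real deployment this logic would run client-side in JavaScript, fed by the in-browser face analysis stack described above; the Python form is used here only to keep the sketch short and self-contained.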

View Project