Recent News

In collaboration with my colleagues in Barcelona, we will present a paper on self-supervised learning for crowd counting at the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City:

Xialei Liu, Joost van de Weijer, Andrew D. Bagdanov, “Leveraging Unlabeled Data for Crowd Counting by Learning to Rank.” In: Proceedings of CVPR 2018 (to appear).


In collaboration with my colleagues in Barcelona, we will present two papers at the upcoming International Conference on Computer Vision (ICCV) in Venice:

Marc Masana, Joost van de Weijer, Luis Herranz, Andrew D. Bagdanov and Jose M. Alvarez, “Domain-adaptive deep network compression.” In: Proceedings of ICCV 2017 (to appear).

Xialei Liu, Joost van de Weijer and Andrew D. Bagdanov, “RankIQA: Learning from Rankings for No-reference Image Quality Assessment.” In: Proceedings of ICCV 2017 (to appear).


Recent Publications

We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene is guaranteed to contain the same number of persons as, or fewer persons than, the image containing it. This allows us to address the limited size of existing crowd counting datasets.
In CVPR 2018
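The containment observation above can be turned into a pairwise ranking loss: a crop's estimated count should never exceed the estimate for the image that contains it. A minimal NumPy-free sketch of such a hinge loss, with made-up estimator values purely for illustration:

```python
def ranking_hinge_loss(est_sub, est_super, margin=0.0):
    """Hinge loss penalizing a sub-image count estimate that
    exceeds the estimate for its containing super-image."""
    return max(0.0, est_sub - est_super + margin)

# Toy example: nested crops of one scene, outermost image first.
# These numbers are invented; a real estimator would produce them.
estimates = [12.0, 9.5, 10.1]  # super-image, crop, crop-of-crop
loss = sum(ranking_hinge_loss(estimates[i + 1], estimates[i])
           for i in range(len(estimates) - 1))
# Only the last pair violates the ordering (10.1 > 9.5), so loss is 0.6.
```

In the paper a CNN count estimator is trained with a margin ranking loss of this form alongside the supervised counting loss; the sketch shows only the self-supervised ranking term.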

Deep neural networks trained on large datasets can be easily transferred to new domains with far fewer labeled examples by a process called fine-tuning. This has the advantage that representations learned in the large source domain can be exploited in smaller target domains. However, networks designed to be optimal for the source task are often prohibitively large for the target task. This is especially problematic when the network is to be deployed in applications with limited memory and energy budgets. In this work we address the compression of networks after domain transfer.
In ICCV 2017
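To illustrate the kind of compression involved, a fully connected layer's weight matrix can be replaced by a low-rank factorization obtained from a truncated SVD. This generic sketch (not the domain-adaptive, activation-aware scheme of the paper) shows the parameter saving:

```python
import numpy as np

def compress_layer(W, rank):
    """Low-rank factorization of a dense layer's weights:
    W (m x n) is approximated by U_r @ V_r, replacing m*n
    parameters with rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values
    V_r = Vt[:rank]
    return U_r, V_r

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))      # toy stand-in for learned weights
U_r, V_r = compress_layer(W, rank=8)
params_before = W.size             # 64 * 32 = 2048
params_after = U_r.size + V_r.size # 64*8 + 8*32 = 768
```

In practice the layer is then implemented as two smaller dense layers (multiply by V_r, then by U_r), and the rank is chosen per layer to trade accuracy against size.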

We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images.
In ICCV 2017
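The key trick is that applying a distortion at increasing strengths yields image pairs whose relative quality is known for free, with no human labeling. A toy sketch of generating such ranked pairs, where additive noise stands in for the synthetic distortions and the function names are illustrative:

```python
import numpy as np

def distort(image, level, rng):
    """Stand-in distortion: additive Gaussian noise whose strength
    grows with the level, so quality is known to decrease."""
    return image + rng.normal(scale=level * 5.0, size=image.shape)

def ranked_pairs(image, levels, rng):
    """Yield (higher_quality, lower_quality) training pairs from
    increasing distortion levels -- relative quality comes for free."""
    distorted = [distort(image, lv, rng) for lv in sorted(levels)]
    return [(distorted[i], distorted[j])
            for i in range(len(distorted))
            for j in range(i + 1, len(distorted))]

rng = np.random.default_rng(0)
img = np.zeros((8, 8))             # toy stand-in for a real image
pairs = ranked_pairs(img, levels=[1, 2, 3], rng=rng)
# 3 levels give 3 ordered pairs for the Siamese ranking loss.
```

In the paper, a Siamese network is trained on such pairs with a ranking loss, then one branch is fine-tuned on the (small) labeled IQA dataset to predict absolute quality scores.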

To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, direct training of action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, transfer learning is able to adapt an existing detector to propose higher-quality bounding boxes for subsequent action classification.
In IEEE TIP

In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach uses soft and hard re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations.
In IEEE TPAMI
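At a high level, the ranking step scores each gallery identity by how well its descriptors reconstruct the probe. This least-squares sketch conveys the idea but omits the sparsity constraint and the iterative soft/hard re-weighting of the actual method (all names and data here are illustrative):

```python
import numpy as np

def rank_gallery(probe, gallery):
    """Rank gallery identities by how well their descriptors
    reconstruct the probe (smaller residual = better match).
    A plain least-squares stand-in for the sparse expansion."""
    scores = {}
    for pid, descs in gallery.items():
        A = np.stack(descs, axis=1)  # columns = identity's descriptors
        coeffs, *_ = np.linalg.lstsq(A, probe, rcond=None)
        scores[pid] = np.linalg.norm(A @ coeffs - probe)
    return sorted(scores, key=scores.get)

rng = np.random.default_rng(1)
# Toy gallery: 3 identities, 3 random 16-d descriptors each.
gallery = {pid: [rng.normal(size=16) for _ in range(3)] for pid in "ABC"}
probe = gallery["B"][0] + 0.01 * rng.normal(size=16)
ranking = rank_gallery(probe, gallery)  # identity "B" should rank first
```

The paper's iterative variant instead re-solves a sparse expansion, down-weighting already-ranked gallery elements so that a full ranking over many candidates emerges.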

Contact

  • andrew.bagdanov@unifi.it
  • Via di Santa Marta 4, Firenze (Room 540)
  • Wednesdays 11:00 to 13:00 (Santa Marta), Thursdays 15:00 to 16:00 (MICC), or email for an appointment