Automatic Image Annotation via Label Transfer in the Semantic Space

Our work on Label Transfer in the Semantic Space has been accepted for publication in Pattern Recognition. In this work we show how to learn a semantic space using Kernel Canonical Correlation Analysis (KCCA), in which the correlations between visual and textual features are preserved in a semantic embedding. Interestingly, our method works both when the training set is carefully annotated by experts and when it is noisy, as in the case of user-generated tags in social media. Extensive testing with modern features and image labeling algorithms shows its benefit on several benchmarks. At training time, we leverage the set of tags and the visual features to learn an embedding Φ(v;t) in a semantic space.
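As a rough illustration only (not the implementation used in the paper), here is a minimal sketch of learning a joint embedding from two views. It uses scikit-learn's linear CCA as a simple stand-in for KCCA, and the feature matrices V and T are random placeholders for real visual and textual features.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Placeholder training data: n images with d_v-dim visual features and
# d_t-dim textual (tag) features. Replace with real features.
rng = np.random.default_rng(0)
n, d_v, d_t = 500, 128, 64
V = rng.standard_normal((n, d_v))   # visual features
T = rng.standard_normal((n, d_t))   # textual features (e.g. tag vectors)

# Learn a joint embedding that maximizes the correlation between the two
# views. Linear CCA stands in here for the KCCA used in the paper.
cca = CCA(n_components=32)
cca.fit(V, T)

# Project training images into the semantic space via the visual view.
V_sem = cca.transform(V)
```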

Once learned, our embedding is independent of the textual features and can then be computed for any image that has to be tagged. Our method is able to reorganize the feature space to preserve image semantics, as shown in this t-SNE plot, where colors represent image labels.
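Continuing the sketch above (and under the same placeholder assumptions), a new image can be projected using its visual features alone, and the semantic space can be visualized with t-SNE; the labels array here is a synthetic stand-in used only to color the points.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# At test time only visual features are available: no tags are needed.
V_test = rng.standard_normal((100, d_v))
V_test_sem = cca.transform(V_test)

# Visualize how the semantic space groups images; `labels` is a placeholder
# for ground-truth labels, used only to color the scatter plot.
labels = rng.integers(0, 10, size=len(V_sem))
emb_2d = TSNE(n_components=2, init="pca", random_state=0).fit_transform(V_sem)
plt.scatter(emb_2d[:, 0], emb_2d[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE of the learned semantic space (placeholder data)")
plt.show()
```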

Read the full paper for further details!