In this paper, we describe the participation of CNRS TELECOM ParisTech in the ImageCLEF 2013 Scalable Concept Image Annotation challenge. This edition promotes the use of contextual cues attached to visual content: image collections are supplied with visual features as well as tags taken from different sources (web pages, etc.). Our framework trains support vector machines (SVMs) with a class of kernels referred to as context-dependent. These kernels are designed by minimizing objective functions that mix visual features with the contextual cues provided by surrounding tags. The results clearly corroborate the complementarity of tags and visual features and the effectiveness of context-dependent SVMs for image annotation.
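To make the idea concrete, the sketch below shows the simplest form of mixing visual and tag information in an SVM kernel: a fixed convex combination of a visual kernel and a tag-similarity kernel, fed to an SVM as a precomputed Gram matrix. This is only an illustration under assumptions; the paper's context-dependent kernels are *learned* by minimizing an objective function, not fixed a priori, and the data, features, and mixing weight `alpha` here are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, cosine_similarity

# Synthetic stand-ins for the real data (hypothetical, for illustration only):
# X_vis : visual feature vectors, X_tag : bag-of-tags vectors, y : concept labels.
rng = np.random.default_rng(0)
X_vis = rng.normal(size=(60, 16))
X_tag = rng.random((60, 32))
y = rng.integers(0, 2, size=60)

# Fixed convex combination of a visual kernel and a tag-context kernel.
# (The paper instead optimizes the kernel; alpha here is an arbitrary choice.)
alpha = 0.5
K_vis = rbf_kernel(X_vis, gamma=0.1)
K_tag = cosine_similarity(X_tag)
K = alpha * K_vis + (1 - alpha) * K_tag

# Train an SVM directly on the precomputed Gram matrix.
clf = SVC(kernel="precomputed").fit(K, y)
acc = clf.score(K, y)  # training accuracy, just to confirm the pipeline runs
```

A learned context-dependent kernel would replace the fixed `alpha`-mixture with a matrix obtained by minimizing an objective coupling visual similarity and tag co-occurrence, but the SVM training step on the resulting Gram matrix is the same.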