Using knowledge and selecting labels in DL: 2 papers accepted
Two of our papers have been accepted at the International Conference on Image Analysis and Processing.
One paper is about Visual Question Answering (VQA), where an image is interpreted by asking textual questions about it. We extended VQA with taxonomic knowledge, a method we call Guided-VQA, which enables coarse-to-fine questioning. This research is part of the Appl.AI SNOW project.
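As a rough illustration of the coarse-to-fine idea (a minimal sketch, not the actual Guided-VQA method from the paper), the snippet below walks a small, made-up taxonomy and queries a generic VQA model, passed in as a callable: it first asks about coarse categories and only then about their finer subclasses.

```python
# Sketch of coarse-to-fine questioning over a taxonomy (illustrative only).
# The taxonomy and the vqa_answer callable are hypothetical placeholders.
from typing import Callable, Dict, List

# Example taxonomy: each coarse category maps to its finer subclasses.
TAXONOMY: Dict[str, List[str]] = {
    "vehicle": ["car", "truck", "bicycle"],
    "animal": ["dog", "cat", "horse"],
}

def guided_query(image, vqa_answer: Callable[[object, str], str],
                 taxonomy: Dict[str, List[str]]) -> str:
    """Ask coarse questions first, then refine within the confirmed branch."""
    for coarse, fine_classes in taxonomy.items():
        if vqa_answer(image, f"Is there a {coarse} in the image?") == "yes":
            for fine in fine_classes:
                if vqa_answer(image, f"Is the {coarse} a {fine}?") == "yes":
                    return fine
            return coarse  # coarse category confirmed, fine class unknown
    return "unknown"
```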
The other paper stems from the DARPA Learning with Less Labels program. We propose to select which images to label for object detection, and show that this selection improves detection performance when very few labels are available. We consider only 1-10 labels per class, whereas hundreds of labels per class are standard. In many practical applications, only a few labels are available.
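To give a flavour of what selecting images for labeling can look like (a minimal sketch under assumed scoring, not the selection criterion from the paper), the snippet below ranks candidate images by a placeholder informativeness score and keeps only as many as a tiny labeling budget allows.

```python
# Sketch of label-budget-constrained image selection (illustrative only).
# The informativeness scores are dummy values, standing in for e.g. scores
# derived from a pre-trained detector.
from typing import Callable, List

def select_images_to_label(images: List[str],
                           informativeness: Callable[[str], float],
                           budget: int) -> List[str]:
    """Rank candidate images by an informativeness score and keep the top `budget`."""
    ranked = sorted(images, key=informativeness, reverse=True)
    return ranked[:budget]

if __name__ == "__main__":
    # With only ~1-10 labels per class, which images get labeled matters a lot.
    dummy_scores = {"img_001.jpg": 0.9, "img_002.jpg": 0.2, "img_003.jpg": 0.7}
    picked = select_images_to_label(list(dummy_scores),
                                    informativeness=dummy_scores.get,
                                    budget=2)
    print(picked)  # ['img_001.jpg', 'img_003.jpg']
```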
More information will follow soon.