Poster
Acknowledging Focus Ambiguity in Visual Questions
Chongyan Chen · Yu-Yun Tseng · Zhuoheng Li · Anush Venkatesh · Danna Gurari
No existing work on visual question answering explicitly acknowledges that there can be ambiguity regarding where the content described in the question is located in the image. To fill this gap, we introduce VQ-FocusAmbiguity, the first VQA dataset that visually grounds each region described in the question that is necessary to arrive at the answer. We next analyze our dataset and compare it to existing datasets to reveal its unique properties. Finally, we benchmark modern models for two novel tasks related to acknowledging focus ambiguity: recognizing whether a visual question has focus ambiguity and locating all plausible focus regions within the image. Results show that the dataset is challenging for modern models. To facilitate future progress on these tasks, we publicly share the dataset with an evaluation server at https://placeholder.github.io/.
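The abstract does not specify the evaluation protocol, so the following is only a minimal illustrative sketch of how the two benchmark tasks could be scored, assuming axis-aligned bounding boxes for focus regions, binary accuracy for the recognition task, and IoU-based recall for the localization task. The helper names `iou`, `ambiguity_accuracy`, and `region_recall` are hypothetical and are not part of the released dataset or evaluation server.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def ambiguity_accuracy(pred: List[bool], gold: List[bool]) -> float:
    """Task 1 (assumed metric): accuracy of the binary decision
    'does this visual question have focus ambiguity?'."""
    correct = sum(p == g for p, g in zip(pred, gold))
    return correct / len(gold)


def region_recall(pred_boxes: List[Box], gold_boxes: List[Box],
                  thresh: float = 0.5) -> float:
    """Task 2 (assumed metric, one example): fraction of ground-truth
    plausible focus regions matched by some prediction at IoU >= thresh."""
    if not gold_boxes:
        return 1.0
    hits = sum(any(iou(p, g) >= thresh for p in pred_boxes) for g in gold_boxes)
    return hits / len(gold_boxes)


# Toy usage with made-up predictions and ground truth.
print(ambiguity_accuracy([True, False, True], [True, True, True]))          # ~0.67
print(region_recall([(0, 0, 50, 50)], [(5, 5, 45, 45), (60, 60, 90, 90)]))  # 0.5
```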