Losing Visual Needles in Image Haystacks: Vision Language Models are Easily Distracted in Short and Long Contexts

University of California, Santa Barbara
*Equal contribution

The impact of visual context on vision-language models (VLMs) in our modified, multi-image versions of the OK-VQA, MMStar, and MMBench evaluation benchmarks. Distractor images placed around the target image increase the visual context length needed to answer each question. VLM performance decays approximately exponentially as the distractor count grows, in both the single composed (cmp) and multiple interleaved (int) input image configurations.

Abstract

We present LoCoVQA, a dynamic benchmark generator for evaluating long-context reasoning in vision language models (VLMs). LoCoVQA augments test examples for mathematical reasoning, VQA, and character recognition tasks with increasingly long visual contexts composed of both in-distribution and out-of-distribution distractor images.
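To make the construction concrete, below is a minimal sketch (not the authors' released code) of how a LoCoVQA-style example could be assembled from one target image and a sampled set of distractors, in the two configurations named in the figure caption: a single composed image (cmp) and an interleaved multi-image list (int). The function names, square-grid layout, and 224-pixel tile size are illustrative assumptions; the sketch uses the Pillow library.

```python
import math
import random
from PIL import Image

def compose_grid(target, distractors, tile=224):
    """'cmp' configuration: paste the target and distractor images into one composite image."""
    images = distractors + [target]
    random.shuffle(images)  # the target's position in the grid is not fixed
    cols = math.ceil(math.sqrt(len(images)))
    rows = math.ceil(len(images) / cols)
    canvas = Image.new("RGB", (cols * tile, rows * tile))
    for i, img in enumerate(images):
        row, col = divmod(i, cols)
        canvas.paste(img.convert("RGB").resize((tile, tile)), (col * tile, row * tile))
    return canvas

def interleave(target, distractors):
    """'int' configuration: keep the images separate and shuffle their order."""
    images = distractors + [target]
    random.shuffle(images)
    return images  # passed to the VLM as a multi-image (interleaved) prompt
```

Increasing the number of distractors is what lengthens the visual context while the content needed to answer the question remains confined to the single target image.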

Across these tasks, a diverse set of VLMs rapidly lose performance as the visual context length grows, often exhibiting a striking exponential decay trend. This test assesses how well VLMs can ignore irrelevant information when answering queries—a task that is quite easy for language models (LMs) in the text domain—demonstrating that current state-of-the-art VLMs lack this essential capability for many long-context applications.