Object-Guided Visual Tokens: Eliciting Compositional Reasoning
in Multimodal Language Models
M. Nulli, I. Najdenkoska, M. M. Derakhshani, V. Orshulevich, Y. M. Asano
University of Amsterdam, eBay, University of Technology Nuremberg
Links: 📄 Long Paper | 📝 Blogpost | 🧑💻 Code
Motivation
[Figure: OG-Fusion internal process]
Most Multimodal Large Language Models (MLLMs) use contrastively pre-trained vision encoders.
They work well on many tasks, but often struggle when it comes to compositional understanding and reasoning about what’s actually in an image. That’s because these encoders are mainly trained for image–caption retrieval, not for truly breaking down and understanding all parts of a scene. Another issue is efficiency: state-of-the-art vision encoders generate 2–3x more visual tokens, which slows down both training and inference.
To tackle these problems, we introduce OG-LLaVA (Object-Guided LLaVA). With our new connector design, OG-Fusion, the model can reason about visual content more effectively—without adding lots of extra tokens or fine-tuning the vision encoder itself. At the core of OG-Fusion is a simple but powerful idea: combine CLIP representations with segmentation masks. This lets OG-LLaVA leverage the descriptive strength of segmentation models to better capture object relationships and spatial arrangements. The result? OG-LLaVA outperforms existing comparable models on tasks that demand deeper visual reasoning and grounding, all while staying efficient.
Underlying Procedure
As illustrated in Figure 1, we extract visual features from the input image through a Vision Encoder. Concurrently, we pass the input image through OG-Fusion. Here we:
- Use a Segmentation model to retrieve the masks,
- Downsample the segmentations, and
- Apply these masks onto the visual features.
- Concatenate the results and pass them through a Multi-Layer Perceptron to produce Object-Guided Visual Tokens (OGVT).
The OGVT are then given as input to a Large Language Model together with Textual Tokens to produce an output.
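To make the steps above concrete, here is a minimal PyTorch sketch of the fusion idea. It is not the official implementation: the class name `OGFusion`, the mean-pooling of CLIP features inside each mask, and the choice to concatenate the object-pooled signal with the original patch features are assumptions made for illustration only.

```python
# Minimal sketch of the OG-Fusion idea (not the official implementation).
# Assumes CLIP patch features of shape [B, N, D] on a GxG grid and K binary
# segmentation masks per image at full resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OGFusion(nn.Module):
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        # Assumption: masked (object-pooled) features are concatenated with the
        # original patch features before the projection MLP.
        self.mlp = nn.Sequential(
            nn.Linear(2 * vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats, masks):
        # feats: [B, N, D] CLIP patch features, N = G*G
        # masks: [B, K, H, W] binary segmentation masks from a frozen segmenter
        B, N, D = feats.shape
        G = int(N ** 0.5)
        # 1) Downsample the masks to the patch grid.
        masks = F.interpolate(masks.float(), size=(G, G), mode="nearest")
        masks = masks.flatten(2)                                  # [B, K, N]
        # 2) Apply the masks to the visual features: mean-pool the patch
        #    features inside each object mask.
        obj_feats = torch.einsum("bkn,bnd->bkd", masks, feats)    # [B, K, D]
        coverage = masks.sum(-1, keepdim=True).clamp(min=1.0)     # patches per object
        obj_feats = obj_feats / coverage
        # Scatter each object's pooled feature back onto its patches.
        patch_obj = torch.einsum("bkn,bkd->bnd", masks, obj_feats)  # [B, N, D]
        # 3) Concatenate with the original features and project.
        ogvt = self.mlp(torch.cat([feats, patch_obj], dim=-1))    # [B, N, llm_dim]
        return ogvt
```

In this sketch the token count stays at N: the object information is fused into the existing patch tokens rather than appended as extra tokens, consistent with the post's point about not adding lots of extra tokens.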
The ❄️ (snowflake) and 🔥 (fire) symbols in Figure 1 mark modules whose parameters are kept frozen or trained, respectively.
The LoRA label indicates that not all parameters of the LLM are unfrozen: only the LoRA layers are updated.
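For readers who want to replicate this kind of partial fine-tuning, the snippet below shows one way to wire it up with Hugging Face `transformers` and `peft`, using the public LLaVA-1.5 checkpoint as a stand-in (OG-LLaVA adds the OG-Fusion connector on top, which is not shown). The LoRA rank, alpha, and target-module regex are illustrative choices, not the values used to train OG-LLaVA; `multi_modal_projector` stands in for the trainable connector.

```python
# Hedged sketch of the freezing scheme Figure 1 describes, using the public
# LLaVA-1.5 checkpoint as a stand-in. Hyperparameters are illustrative.
from transformers import LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

# ❄️ Keep the vision encoder frozen (get_peft_model below also freezes every
# non-LoRA weight; this loop just makes the snowflake explicit).
for name, p in model.named_parameters():
    if "vision_tower" in name:
        p.requires_grad = False

# 🔥 Train only LoRA adapters inside the language model's attention projections,
# plus the connector (here the stock multi_modal_projector as a stand-in for OG-Fusion).
lora_cfg = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=r".*language_model.*\.(q_proj|k_proj|v_proj|o_proj)",
    modules_to_save=["multi_modal_projector"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only LoRA layers and the connector remain trainable
```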
Visualizations
The images we picked cover all kinds of tricky challenges—spotting tiny details, telling apart subtle colors, reading depth cues, recognizing materials, making sense of spatial layouts, and even detecting small objects. They’re designed to push visual–language reasoning to its limits. What’s key is that these examples are tested at inference time with no extra fine-tuning, so any boost (or drop) in performance comes purely from the Object-Guided priors built into OG-LLaVA.
In Figures 4, 5 and 6 we highlight a range of cases where OG-LLaVA consistently demonstrates sharper perception and more grounded reasoning, from subtle posture cues to tricky color judgments and material recognition.
- Fine-grained human pose: correctly reads a batter’s stance, placing the bat up and behind instead of in front. (Figure 4 - Picture1)
- Precise color & reflection reasoning: rules out red in the umbrellas, confining it to the apples and plate while the baseline gets misled; it also captures realistic color reflections on materials and distinguishes between different hues. (Figure 4 - Picture2), (Figure 5 - Picture8) & (Figure 6 - Picture2)
- Depth-of-field understanding: detects focus fading front-to-back instead of mistakenly left-to-right. (Figure 4 - Picture3)
- Material recognition: identifies a skate-park surface as concrete rather than asphalt, and glass rather than curtains. (Figure 4 - Picture4) & (Figure 5 - Picture1)
- Font recognition: picks up subtle font characteristics, showing solid OCR ability. (Figure 5 - Picture2)
- Spatial reasoning: accurately locates people and objects in complex scenes. (Figure 5 - Picture3) & (Figure 5 - Picture6)
- Object counting & detection: counts giraffes correctly where the baseline stumbles, and spots a distant coffee maker amid clutter without confusing it with a blender. (Figure 5 - Picture4) & (Figure 6 - Picture3)
- Fashion understanding: distinguishes between short sleeves and cap sleeves. (Figure 5 - Picture7)
- Dynamic cues: understands that a distant sail is inflated by a strong breeze, matching the surfing context. (Figure 6 - Picture1)
- Shape recognition: correctly identifies train tracks in the background. (Figure 6 - Picture4)
Together, these examples underline how OG-LLaVA moves beyond surface-level cues. It pays attention to fine details, adapts across diverse tasks, and reasons about entire scenes in a way that more closely reflects human understanding.
Results
Our results on compositional reasoning and vision-centric benchmarks (Table 1) show that OG-LLaVA consistently outperforms its baselines across both LLaVA-1.5 and Cambrian-1 training setups. The improvements are not marginal: they are large and systematic.
- Compositional understanding
  - ARO:
    - +21% on COCO-Order (38.2 → 82.6) and +16% on Flickr-Order (49.1 → 84.0).
    - Visual Genome Attribution: +10% on average across backbones; Visual Genome Relation: +20% across training data and model sizes.
  - ConME: steady +2% gains, peaking at 65.2 in the 8B setting (+3.6 over the strongest baseline).
- Vision-centric reasoning
  - MMVP: about +3 points on average (e.g., 32.0 → 37.0 at 8B, 61.6 → 66.0 with Cambrian-1 data).
  - CVBench: stable performance, with only ±1-point fluctuations.
In Figure 7, we compare OG-LLaVA-8B with SIT-8B and LLaVA-1.5-8B under the same backbone. SIT stands for Subobject-level Image Tokenization, a recent study that employs a comparable segmentation-infusion method. The results are clear: OG-LLaVA consistently outperforms SIT, with more than a 25% advantage on compositional reasoning and a 10% edge in visual grounding.
There’s also a key difference in usability. OG-LLaVA works flexibly both with and without segmentation masks at inference, while SIT requires pre-computed masks every time. This not only adds non-trivial overhead—since a separate segmentation model must run first—but also makes the system less adaptable. In practice, the reduced token count doesn’t outweigh the complexity introduced, whereas OG-LLaVA preserves efficiency without imposing such constraints.
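To make the flexibility point concrete, the sketch below shows one plausible fallback when no masks are available at inference, reusing the hypothetical `OGFusion` module from the earlier sketch. Treating the whole image as a single object is an assumption for illustration, not a description of OG-LLaVA's actual mechanism.

```python
import torch

def encode_image(og_fusion, feats, masks=None):
    """Hypothetical helper: run the OGFusion sketch with or without masks.

    og_fusion: an instance of the OGFusion class sketched earlier.
    feats:     [B, N, D] CLIP patch features.
    masks:     [B, K, H, W] binary masks, or None if no segmenter was run.
    """
    if masks is None:
        # Assumed fallback: treat the whole image as one object, so the
        # object-pooled signal reduces to a global average of the patch features.
        B, N, _ = feats.shape
        side = int(N ** 0.5)
        masks = torch.ones(B, 1, side, side, device=feats.device)
    return og_fusion(feats, masks)
```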
Citation
If you use this work, please cite:
@misc{nulli2025ogllava,
author = {Nulli, M. and Najdenkoska, I. and Derakhshani, M. M. and Dorkenwald, M. and Orshulevich, V. and Asano, Y. M.},
title = {Object-Guided Visual Tokens: Eliciting Compositional Reasoning in Multimodal Language Models},
howpublished = {https://matteonulli.github.io/blog/2025/ogllava/},
year = {2025},
note = {Accessed: 2025-09-05}
}