Object-Guided Visual Tokens:
Eliciting Compositional Reasoning
in Multimodal Language Models

M. Nulli, I. Najdenkoska, M. M. Derakhshani, M. Dorkenwald, V. Orshulevich, Y. M. Asano


Motivation

Most Multimodal Large Language Models (MLLMs) use contrastively pre-trained vision encoders.
They work well on many tasks, but often struggle when it comes to compositional understanding and reasoning about what’s actually in an image.
That’s because these encoders are mainly trained for image–caption retrieval, not for truly breaking down and understanding all parts of a scene.

Another issue is efficiency: state-of-the-art vision encoders generate 2–3x more visual tokens, which slows down both training and inference.

To tackle these problems, we introduce OG-LLaVA (Object-Guided LLaVA).
With our new connector design, OG-Fusion, the model can reason about visual content more effectively—without adding lots of extra tokens or fine-tuning the vision encoder itself.

At the core of OG-Fusion is a simple but powerful idea: combine CLIP representations with segmentation masks.
This lets OG-LLaVA leverage the descriptive strength of segmentation models to better capture object relationships and spatial arrangements.

The result?
OG-LLaVA outperforms existing comparable models on tasks that demand deeper visual reasoning and grounding, all while staying efficient.

Main Process

Figure 1: OG-LLaVA architecture with the OG-Fusion internal process.

We extract visual features from the input image through a Vision Encoder.
Concurrently, we pass the input image through OG-Fusion. Here we:

  1. Use a Segmentation model to retrieve the masks,
  2. Downsample the segmentation masks,
  3. Apply these masks onto the visual features, and
  4. Concatenate the results and pass them through a Multi-Layer Perceptron to produce Object-Guided Visual Tokens (OGVT); see the sketch after this list.
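
To make these steps concrete, below is a minimal PyTorch sketch of an OG-Fusion-style connector. It is only an illustration, not the paper's implementation: the class name OGFusion, the per-object masked pooling used to "apply" the masks, and the choice to concatenate the object-guided features with the original patch features before the MLP are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OGFusion(nn.Module):
    """Illustrative sketch of an OG-Fusion-style connector (names and shapes are assumptions)."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Two-layer MLP projector, as in LLaVA-style connectors.
        self.mlp = nn.Sequential(
            nn.Linear(2 * vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vis_feats: torch.Tensor, seg_masks: torch.Tensor) -> torch.Tensor:
        # vis_feats: (B, N, D) patch features from the frozen vision encoder (N = H*W patches).
        # seg_masks: (B, K, H_img, W_img) binary masks from a frozen segmentation model.
        B, N, D = vis_feats.shape
        H = W = int(N ** 0.5)  # assume a square patch grid

        # Steps 1-2: downsample each mask to the encoder's patch grid.
        masks = F.interpolate(seg_masks.float(), size=(H, W), mode="nearest")
        masks = masks.flatten(2)                                      # (B, K, N)

        # Step 3: apply the masks to the visual features. Here: average-pool features
        # per object, then broadcast each object's pooled feature back to its patches.
        obj_feats = torch.einsum("bkn,bnd->bkd", masks, vis_feats)    # (B, K, D)
        obj_feats = obj_feats / masks.sum(-1, keepdim=True).clamp(min=1.0)
        guided = torch.einsum("bkn,bkd->bnd", masks, obj_feats)       # (B, N, D)

        # Step 4: concatenate original and object-guided features, project with the MLP
        # to obtain Object-Guided Visual Tokens for the LLM.
        fused = torch.cat([vis_feats, guided], dim=-1)                # (B, N, 2D)
        return self.mlp(fused)                                        # (B, N, llm_dim)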

The OGVT are then given as input to a Large Language Model together with Textual Tokens to produce an output.
The ❄️ (snowflake) and 🔥 (fire) icons mark modules whose parameters are kept frozen or trained, respectively.
LoRA indicates that not all of the LLM's parameters are updated: only the LoRA layers are trained.
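
As a rough sketch of this setup, the snippet below freezes a CLIP vision encoder and injects trainable LoRA adapters into a causal LLM with Hugging Face PEFT. The checkpoints and LoRA hyperparameters are illustrative assumptions, not necessarily those used for OG-LLaVA.

from transformers import AutoModelForCausalLM, CLIPVisionModel
from peft import LoraConfig, get_peft_model

# ❄️ Vision encoder: load and freeze (the segmentation model would be frozen the same way).
vision_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
for p in vision_encoder.parameters():
    p.requires_grad = False

# 🔥 LLM: base weights stay frozen, only the injected LoRA layers receive gradients.
llm = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
lora_cfg = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
llm = get_peft_model(llm, lora_cfg)
llm.print_trainable_parameters()

# The OG-Fusion connector (e.g. the OGFusion module sketched above) is also trained (🔥).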

Visualizations

Figure 2: OG-LLaVA vs LLaVA-1.5 on ConMe Replace-Attribute examples.
Figure 3: OG-LLaVA vs LLaVA-1.5 on MMVP examples.
Figure 4: OG-LLaVA vs LLaVA-1.5 on ConMe Replace-Relation examples.
Figure 5: OG-LLaVA vs LLaVA-1.5 on ConMe Replace-Object examples.
Figure 6: OG-LLaVA vs LLaVA-1.5 on ConMe Replace-Relation examples.

Results

Table 1: OG-LLaVA performance on Compositional Reasoning and Vision Centric tasks compared with LLaVA baselines.
Figure 7: OG-LLaVA vs Subobject Level Image Tokenization and LLaVA-1.5 on Compositional Reasoning and Vision Centric tasks.

Citation

If you use this work, please cite:

@online{nulli2025ogllava,
  author  = {Nulli, Matteo and Najdenkoska, I. and Derakhshani, M. M. and Dorkenwald, M. and Asano, Y. M.},
  title   = {Object-Guided Visual Tokens: Eliciting Compositional Reasoning in Multimodal Language Models},
  url     = {https://matteonulli.github.io/blog/2025/ogllava/},
  year    = {2025},
  urldate = {2025-09-05}
}


