Matteo Nulli

Machine Learning Research & Engineering

Currently at eBay — Foundation Models Team, building scalable multimodal models and efficient optimization pipelines for real-world applications.
Formerly at UvA, UTN, Bocconi, USYD.

about

👋 Hi, I am Matteo, an Applied ML Researcher at eBay on the Foundation Models Team. My work focuses on multimodal learning, inference optimisation, and multimodal search reasoning, together with Hadi Hashemi and Shahram Khadivi.

💭 My current research interests lie in Efficient Multimodal Modeling. Recently, I have focused on visual representation learning (here), adaptable VLM architectures (here), and optimisation pipelines (here).

🔙 Previously, I graduated cum laude with ELLIS Honours from the MSc in Artificial Intelligence at the University of Amsterdam. I was lucky to be supervised by Prof. Yuki Asano at FunAI Lab, Dr. Ivona Najdenkoska, and Dr. Mohammad Mahdi Derakhshani, investigating Compositional Reasoning in Multimodal Foundation Models. I also spent time at eBay as a Research Intern, supervised by Prof. Cees Snoek and working with Dr. Hadi Hashemi and Vladimir Orshulevich. Before that, I graduated from Bocconi University with a BSc in Mathematical and Computing Sciences for AI and was an exchange student in Applied Mathematics and Computer Science at the University of Sydney on a full-ride scholarship.

📮 For collaborations reach out at matteo[dot]nulli[at]outlook[dot]com.

news

Feb 10, 2026 📝 Our paper “Adapting Vision-Language Models for E-commerce Understanding at Scale” was accepted as an Oral at the EACL Industry Track, see you in Morocco! 🇲🇦
Jan 29, 2026 📝 MLSys accepted our paper “Meeting SLOs, Slashing Hours: Automated Enterprise LLM Optimization with OptiKIT” for the Industry Track!
Nov 28, 2025 📝 Going to EurIPS! Our paper “Object-Guided Visual Tokens: Eliciting Compositional Reasoning in Multimodal Language Models” was just accepted to the EurIPS Principles of Generative Modelling Workshop, see you in Copenhagen. 🇩🇰
Oct 29, 2025 Happy to share I just graduated cum laude with ELLIS Honours from the Master of Science in Artificial Intelligence at the University of Amsterdam! 🎓
Aug 18, 2025 Just started working as a full-time Applied Researcher on the eBay Foundation Models Team with Hadi and Shahram. Excited to (re-)start. 👾

latest blogpost

Demystifying Multimodal Learning: The Hidden Inefficiency in Vision Language Modelling
A blogpost series on the nuts and bolts of Multimodal Learning

selected publications

  1. Adapting Vision-Language Models for E-commerce Understanding at Scale
    Matteo Nulli, Vladimir Orshulevich, Tala Bazazo, Christian Herold, Michael Kozielski, Marcin Mazur, Szymon Tuzel, Cees G. M. Snoek, Seyyed Hadi Hashemi, Omar Javed, Yannick Versley, and Shahram Khadivi
    In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track), Mar 2026
    Oral
  2. Meeting SLOs, Slashing Hours: Automated Enterprise LLM Optimization with OptiKIT
    Nicholas Santavas, Kareem Eissa, Patrycja Cieplicka, Piotr Florek, Matteo Nulli, Stefan Vasilev, Seyyed Hadi Hashemi, Antonios Gasteratos, and Shahram Khadivi
    In MLSys 2026 Industry Track, Mar 2026
  3. Object-Guided Visual Tokens: Eliciting Compositional Reasoning in Multimodal Language Models
    In EurIPS 2025 Workshop on Principles of Generative Modeling (PriGM), Mar 2025
  4. Dynamic Vocabulary Pruning in Early-Exit LLMs
    Jort Vincenti*, Karim Abdel Sadek*, Joan Velja*, Matteo Nulli*, and Metod Jazbec
    NeurIPS Efficient Natural Language and Speech Processing, Mar 2024
  5. In-Context Learning Improves Compositional Understanding of Vision-Language Models
    Matteo Nulli, Anesa Ibrahimi, Avik Pal, Hoshe Lee, and Ivona Najdenkoska
    In ICML 2024 Workshop on Foundation Models in the Wild , Mar 2024
  6. ‘Explaining RL Decisions with Trajectories’: A Reproducibility Study
    Karim Abdel Sadek*, Matteo Nulli*, Joan Velja*, and Jort Vincenti*
    Transactions on Machine Learning Research, Mar 2024