TTRV: Test-Time Reinforcement Learning for Vision Language Models

¹Independent Researcher, ²IISc Bangalore, ³JKU Linz, ⁴Stanford, ⁵Tübingen AI Center, ⁶MIT-IBM Watson AI Lab, ⁷MIT CSAIL

(Left) Unlike prior methods that require pre-training splits and post-training via Supervised Finetuning (SFT) or Reinforcement Learning (RL), our approach extracts reward signals directly at test time from unlabeled data. The reward combines (1) frequency-based signals and (2) diversity control, allowing the model to adapt online and improve downstream vision performance without any labeled data. (Right) Test accuracy increases while the entropy of the output logits decreases, showing that the model becomes more accurate and less uncertain as test-time RL progresses. Solid lines show the mean and shaded regions the variance over 5 independent runs. The dataset is RESISC45, the task is object recognition, and the model is InternVL-3-2B.

Abstract

Existing methods for extracting reward signals in Reinforcement Learning typically rely on labeled data and dedicated training splits, a setup that contrasts with how humans learn directly from their environment. In this work, we propose TTRV to enhance vision-language understanding by adapting the model on the fly at inference time, without the need for any labeled data. Concretely, we enhance the Group Relative Policy Optimization (GRPO) framework by designing rewards based on the frequency of the base model's outputs when inference is run multiple times on each test sample. We further propose to control the diversity of the model's outputs by simultaneously rewarding the model for achieving low entropy of the empirical output distribution. Our approach delivers consistent gains across both object recognition and visual question answering (VQA), with improvements of up to 52.4% and 29.8%, respectively, and average boosts of 24.6% and 10.0% across 16 datasets. Remarkably, on image recognition, TTRV applied to InternVL-8B surpasses GPT-4o by an average of 2.3% over 8 benchmarks, while remaining highly competitive on VQA, demonstrating that test-time reinforcement learning can match or exceed the strongest proprietary models. Finally, we find several interesting properties of test-time RL for VLMs: for example, even in an extremely data-constrained setting where adaptation is performed on a single randomly chosen unlabeled test example, TTRV still yields non-trivial improvements of up to 5.5% on recognition tasks.

Method

Method Overview

Overview of TTRV. For each prompt x, the VLM generates N candidate responses. These samples induce an empirical distribution over the unique outputs, from which two reward signals are derived: (i) a frequency-based reward, where each response is rewarded in proportion to how often its output occurs among the N responses (i.e., its empirical probability under that distribution), and (ii) a diversity control reward, computed from the distribution to regulate diversity and encourage convergence. The final reward is a weighted combination of these terms, which is used to update the policy via GRPO.
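
To make the reward concrete, here is a minimal Python sketch of how the two signals could be computed for a single prompt and turned into GRPO's group-relative advantages. The weights alpha and beta, the answer-extraction step (here the responses are already reduced to answer strings), and the helper names are illustrative assumptions rather than the paper's exact implementation.

import math
from collections import Counter

def ttrv_rewards(answers, alpha=1.0, beta=0.1):
    # Sketch of the TTRV reward for one prompt, given the final answers
    # extracted from N sampled responses. alpha/beta are assumed weights.
    n = len(answers)
    probs = {a: c / n for a, c in Counter(answers).items()}  # empirical distribution

    # (ii) Diversity control: negative entropy of the empirical distribution,
    # shared by every response in the group, so concentrated (low-entropy)
    # groups receive a higher reward.
    entropy = -sum(p * math.log(p) for p in probs.values())

    # (i) Frequency-based reward: each response is rewarded in proportion to
    # the empirical probability of its answer among the N samples.
    return [alpha * probs[a] - beta * entropy for a in answers]

def group_relative_advantages(rewards, eps=1e-8):
    # GRPO-style advantages: standardize rewards within the group of N samples.
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards)) + eps
    return [(r - mean) / std for r in rewards]

# Example: 8 answers sampled for one unlabeled test image.
answers = ["cat", "cat", "cat", "dog", "cat", "bird", "cat", "dog"]
advantages = group_relative_advantages(ttrv_rewards(answers))

In this toy group, the majority answer receives a positive advantage and the minority answers negative ones, which is the label-free signal GRPO then uses to update the policy.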

Results

Figure: experimental results across the object recognition and VQA benchmarks.

BibTeX

@article{TTRV,
  title={TTRV: Test-Time Reinforcement Learning for Vision Language Models},
  author={First Author and Second Author and Third Author},
  journal={arXiv preprint},
  year={2025},
  url={https://akshit21112002.github.io/ttrvproject/}
}