Volume Transformer: Revisiting Vanilla Transformers for 3D Scene Understanding

¹RWTH Aachen University  ²Eindhoven University of Technology
* Equal contribution
Volt architecture. The input 3D scene is partitioned into non-overlapping volumetric patches, and each patch is embedded into a token with a linear tokenizer. The resulting token sequence is processed by a Transformer encoder with global attention. The latent tokens are then upsampled back to the voxel resolution with a single transposed convolution and mapped to semantic predictions by a linear classification head.
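The volumetric patch partitioning described above can be sketched in a few lines. The snippet below shows only the patch-splitting step on a dense voxel grid; the paper's linear tokenizer would then project each flattened patch to the embedding dimension, and function names and the patch size are illustrative assumptions, not the authors' code.

```python
import numpy as np

def patchify(volume, patch=8):
    """Partition a dense voxel grid into non-overlapping volumetric
    patches and flatten each patch into a token vector.

    volume: (D, H, W, C) array; D, H, W must be divisible by `patch`.
    returns: (N, patch**3 * C) tokens, N = (D//p) * (H//p) * (W//p).
    """
    D, H, W, C = volume.shape
    p = patch
    x = volume.reshape(D // p, p, H // p, p, W // p, p, C)
    # Move the three patch-index axes to the front; patch contents follow.
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    return x.reshape(-1, p ** 3 * C)

# Example: a 16^3 grid with 4 channels yields 2x2x2 = 8 tokens,
# each of length 8^3 * 4 = 2048.
tokens = patchify(np.zeros((16, 16, 16, 4)), patch=8)
```

A learned linear layer applied to each row of `tokens` would complete the tokenizer, mirroring how 2D ViTs embed image patches.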

Abstract

Transformers have become a common foundation across deep learning, yet 3D scene understanding still relies on specialized backbones with strong domain priors. This keeps the field isolated from the broader Transformer ecosystem, limiting the transfer of new advances as well as the benefits of increasingly optimized software and hardware stacks. To bridge this gap, we adapt the vanilla Transformer encoder to 3D scenes with minimal modifications. Given an input 3D scene, we partition it into volumetric patch tokens, process them with full global self-attention, and inject positional information via a 3D extension of rotary positional embeddings. We call the resulting model the Volume Transformer (Volt) and apply it to 3D semantic segmentation. Naively training Volt on standard 3D benchmarks leads to shortcut learning, highlighting the limited scale of current 3D supervision. To overcome this, we introduce a data-efficient training recipe based on strong 3D augmentations, regularization, and distillation from a convolutional teacher, making Volt competitive with state-of-the-art methods. We then scale supervision through joint training on multiple datasets and show that Volt benefits more from increased scale than domain-specific 3D backbones, achieving state-of-the-art results across indoor and outdoor datasets. Finally, when used as a drop-in backbone in a standard 3D instance segmentation pipeline, Volt again sets a new state of the art, highlighting its potential as a simple, scalable, general-purpose backbone for 3D scene understanding.
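The abstract mentions injecting positional information via a 3D extension of rotary positional embeddings. One common way to do this, sketched below, splits the channel dimension into three groups and rotates each group by the token's patch index along one spatial axis; the paper's exact formulation may differ, and all names here are assumptions.

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Standard rotary embedding along one axis.
    x: (N, d) with d even; pos: (N,) integer positions."""
    d = x.shape[-1]
    inv_freq = 1.0 / base ** (np.arange(0, d, 2) / d)  # (d/2,)
    ang = pos[:, None] * inv_freq[None, :]             # (N, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(x, coords):
    """3D extension: split channels into three equal groups and rotate
    each group by the position along one spatial axis.
    x: (N, d) with d divisible by 6; coords: (N, 3) patch indices."""
    g = x.shape[-1] // 3
    parts = [rope_1d(x[:, i * g:(i + 1) * g], coords[:, i])
             for i in range(3)]
    return np.concatenate(parts, axis=-1)
```

As with 1D RoPE, the rotation is norm-preserving, and relative offsets between token coordinates surface as phase differences inside the attention dot product.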

Training Volt

Training Volt table
Full global self-attention gives Transformers the capacity to model long-range interactions across an entire 3D scene, but it also makes them data-hungry. In 3D, supervision is comparatively scarce, and we find that naively training Volt on standard 3D benchmarks leads to pronounced overfitting. To address this, we propose a training recipe inspired by data-efficient training strategies in 2D vision, combining strong data augmentation, model regularization, and distillation to enable effective training of Volt from scratch. Finally, we scale supervision via multi-dataset training to study the scaling behavior of Volt.
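The distillation component of the recipe can be illustrated with a generic per-voxel loss that mixes hard-label cross-entropy with a softened KL term against a frozen teacher, in the spirit of DeiT-style distillation. The temperature, weighting, and the choice of logit-level (rather than feature-level) distillation are assumptions for illustration, not the paper's reported hyperparameters.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix hard-label cross-entropy with KL distillation from a frozen
    teacher. Both logit arrays are (N, num_classes); labels is (N,).
    T and alpha are illustrative defaults, not the paper's settings."""
    eps = 1e-12
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + eps)
    # KL(teacher || student) on softened distributions, scaled by T^2.
    kl = (p_t * (np.log(p_t + eps) - log_p_s)).sum(-1).mean() * T * T
    # Standard cross-entropy on the ground-truth voxel labels.
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]
                 + eps).mean()
    return alpha * kl + (1 - alpha) * ce
```

In this form the convolutional teacher supplies dense soft targets wherever ground-truth labels are sparse or noisy, which is one way such a recipe compensates for limited 3D supervision.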

Scaling Behaviour of 3D Backbones

Scaling Volt table
Under naive training, Volt performs poorly in the low-data regime but improves rapidly as the training set grows, revealing substantial headroom with scale. Applying the data-efficient training recipe significantly improves performance, making Volt-S consistently better than the 5 times larger PTv3-B model across all data scales. At the largest supervision scale, improvements for Volt-S begin to saturate, suggesting performance becomes capacity-limited at 23.7M parameters. Scaling the backbone to Volt-B alleviates this saturation: Volt-B continues to benefit from additional supervision and opens a clear gap under multi-dataset training. Overall, these results suggest that, as supervision increases, hand-crafted architectural priors become less critical and may even introduce unnecessary overhead, while Volt can instead learn domain-specific 3D patterns directly from data.

Volt Quantitative Results

Table 1
Table 2
Table 3

BibTeX

@article{yilmaz2026Volt,
  title     = {{Volume Transformer: Revisiting Vanilla Transformers for 3D Scene Understanding}},
  author    = {Yilmaz, Kadir and Kruse, Adrian and Höfer, Tristan and de Geus, Daan and Leibe, Bastian},
  journal   = {arXiv preprint arXiv:2604.19609},
  year      = {2026}
}