This repository contains model weights for LitePT: Lighter Yet Stronger Point Transformer, a lightweight, high-performance architecture for 3D point cloud understanding.

LitePT follows the simple principle of "convolutions for low-level geometry, attention for high-level relations" and places only the operations that each level of the hierarchy actually needs. It is equipped with PointROPE, a novel, parameter-free 3D positional encoding. The resulting model achieves state-of-the-art performance while being significantly more efficient than previous point transformers.
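
The details of PointROPE are in the paper. As a rough illustration of the general idea of a parameter-free, rotary-style positional encoding for 3D coordinates, the PyTorch sketch below splits the feature channels into three groups and rotates each group by angles derived from one of the x, y, z coordinates. This is an assumption-laden sketch for intuition only, not the authors' PointROPE implementation.

```python
# Hypothetical sketch of a rotary-style (RoPE-like) positional encoding for
# 3D point coordinates. NOT the authors' PointROPE; for intuition only.
import torch


def rope_1d(x: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (N, C) by angles proportional to pos (N,)."""
    n, c = x.shape
    half = c // 2
    # Parameter-free frequencies, as in standard RoPE.
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=x.dtype) / half))
    angles = pos[:, None] * freqs[None, :]            # (N, C/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


def rope_3d(x: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """Apply 1D RoPE per axis: one channel group per spatial coordinate."""
    assert x.shape[-1] % 6 == 0, "needs a channel count divisible by 6"
    groups = x.chunk(3, dim=-1)
    return torch.cat(
        [rope_1d(g, xyz[:, axis]) for axis, g in enumerate(groups)], dim=-1
    )


# Example: rotate query features of 1024 points with 96 channels.
q = torch.randn(1024, 96)
xyz = torch.rand(1024, 3) * 10.0   # made-up point coordinates
q_rot = rope_3d(q, xyz)
print(q_rot.shape)                 # torch.Size([1024, 96])
```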

Paper & Resources

Paper: LitePT: Lighter Yet Stronger Point Transformer, arXiv:2512.13689, 2025.

Models

We release the pretrained model weights for the benchmarks we reported in our paper.

Semantic segmentation

| Model | Params | Benchmark | Val mIoU (%) | Config | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| LitePT-S | 12.7M | nuScenes | 82.2 | link | Download |
| LitePT-S | 12.7M | Waymo | 73.1 | link | Download |
| LitePT-S | 12.7M | ScanNet | 76.5 | link | Download |
| LitePT-S | 12.7M | Structured3D | 83.6 | link | Download |
| LitePT-B | 45.1M | Structured3D | 85.1 | link | Download |
| LitePT-L | 85.9M | Structured3D | 85.4 | link | Download |

Instance segmentation

| Model | Params | Benchmark | mAP@25 | mAP@50 | mAP | Config | Checkpoint |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LitePT-S* | 16.0M | ScanNet | 78.5 | 64.9 | 41.7 | link | Download |
| LitePT-S* | 16.0M | ScanNet200 | 40.3 | 33.1 | 22.2 | link | Download |

Object detection

| Model | Params | Benchmark | mAPH | Config | Checkpoint |
| --- | --- | --- | --- | --- | --- |
| LitePT | 9.0M | Waymo | 70.7 | link | Download |
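
Assuming the released checkpoints are ordinary PyTorch checkpoint files (this card does not specify the format), the hypothetical snippet below shows one way to load and inspect a downloaded file; the file name is made up and should be replaced with the checkpoint you actually downloaded.

```python
# Hypothetical loading sketch; assumes a standard PyTorch checkpoint.
import torch

ckpt = torch.load("litept_s_scannet.pth", map_location="cpu")  # path is made up
# Training frameworks often nest the weights under a "state_dict" key.
weights = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(weights.items())[:5]:
    print(name, tuple(tensor.shape))
```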

Citation

@article{yuelitept2025,
    title={{LitePT: Lighter Yet Stronger Point Transformer}},
    author={Yue, Yuanwen and Robert, Damien and Wang, Jianyuan and Hong, Sunghwan and Wegner, Jan Dirk and Rupprecht, Christian and Schindler, Konrad},
    journal={arXiv preprint arXiv:2512.13689},
    year={2025}
}