Synthetic vs. Real Training Data for Visual Navigation
CoRL 2025 Workshop on Making Sense of Data in Robotics
Lauri Suomela · Sasanka Kuruppu Arachchige · German F. Torres
Harry Edelman · Joni-Kristian Kämäräinen
Tampere University

This paper investigates how the performance of visual navigation policies trained in simulation compares to that of policies trained on real-world data. Policies trained in simulation often suffer significant performance degradation when evaluated in the real world. Despite this well-known sim-to-real gap, we demonstrate that simulator-trained policies can match the performance of their real-world-trained counterparts.
Central to our approach is a navigation policy architecture that bridges the sim-to-real appearance gap by leveraging pretrained visual representations and runs in real time on robot hardware. Evaluations on a wheeled mobile robot show that the proposed policy, when trained in simulation, outperforms its real-world-trained version by 31% and prior state-of-the-art methods by 50% in navigation success rate. Policy generalization is verified by deploying the same model onboard a drone.
Our results highlight the importance of diverse image encoder pretraining for sim-to-real generalization, and identify on-policy learning as a key advantage of simulated training over training with real data.
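As a rough illustration of the kind of policy described above, a frozen pretrained visual encoder feeding a lightweight control head, the following PyTorch sketch uses an ImageNet-pretrained ResNet-18 as a stand-in for the pretrained representation. The encoder choice, head dimensions, and two-dimensional velocity action are assumptions made for illustration only, not the architecture from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights


class NavPolicy(nn.Module):
    """Minimal visual navigation policy: frozen pretrained encoder + small MLP head.

    The ResNet-18 encoder and the 2-D velocity action head are illustrative
    assumptions, not the authors' implementation.
    """

    def __init__(self, action_dim: int = 2):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        # Drop the classification layer; keep the 512-D global image feature.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.encoder.parameters():
            p.requires_grad = False  # pretrained representation stays frozen
        self.head = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g. (linear velocity, angular velocity)
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feat = self.encoder(image).flatten(1)  # (B, 512)
        return self.head(feat)


if __name__ == "__main__":
    policy = NavPolicy()
    rgb = torch.rand(1, 3, 224, 224)  # placeholder for a normalized camera frame
    action = policy(rgb)
    print(action.shape)  # torch.Size([1, 2])

Because only the small head is trainable, such a policy can be optimized cheaply in simulation while the frozen, diversely pretrained encoder carries most of the burden of appearance generalization.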
BibTeX
@InProceedings{suomela2025synthetic,
title={Synthetic vs. Real Training Data for Visual Navigation},
author={Suomela, Lauri and Kuruppu Arachchige, Sasanka and Torres, German F. and Edelman, Harry and Kämäräinen, Joni-Kristian},
journal={arXiv:},
year={2025}
}