Zexiang Guo1*, Hengxiang Chen1*, Xinheng Mai1*, Qiusang Qiu1, Gan Ma2, Zhanat Kappassov3, Qiang Li1†, and Nutan Chen4
1 School of Artificial Intelligence, Shenzhen Technology University, China
2 Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, China
3 Robotics Department, Institute of Smart Systems and Artificial Intelligence (ISSAI), Nazarbayev University, Kazakhstan
4 Foundation Robotics, Germany
*These authors contributed equally to this work. †Corresponding author
Oral & Poster presentation at the IROS 2025 Workshop 🎉
Inferring physical properties can significantly enhance robotic manipulation by enabling robots to handle objects safely and efficiently through adaptive grasping strategies. Previous approaches have typically relied on either tactile or visual data alone, limiting their ability to fully capture physical properties. We introduce a novel cross-modal perception framework that integrates visual observations with tactile representations within a multimodal vision-language model. Our physical reasoning framework, which employs a hierarchical feature alignment mechanism and a refined prompting strategy, enables our model to make property-specific predictions that strongly correlate with ground-truth measurements. Evaluated on 35 diverse objects, our approach outperforms existing baselines and demonstrates strong zero-shot generalization.
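As one way to build intuition for cross-modal feature alignment, the sketch below projects visual and tactile features into a shared embedding space and computes a cosine-similarity matrix, as is common in CLIP-style contrastive alignment. This is a minimal illustrative example, not the paper's implementation: the dimensions, the single linear projection per modality, and the random features are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions -- the actual model's sizes are not specified here.
D_VIS, D_TAC, D_SHARED = 768, 384, 256

# One learnable linear projection per modality into a shared space
# (stand-ins for the hierarchical alignment layers described in the paper).
W_vis = rng.normal(scale=0.02, size=(D_VIS, D_SHARED))
W_tac = rng.normal(scale=0.02, size=(D_TAC, D_SHARED))

def align(vis_feat, tac_feat):
    """Project both modalities into the shared space and L2-normalize."""
    v = vis_feat @ W_vis
    t = tac_feat @ W_tac
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    t = t / np.linalg.norm(t, axis=-1, keepdims=True)
    return v, t

# Toy batch of features, e.g. from a vision encoder and a tactile encoder.
vis = rng.normal(size=(4, D_VIS))
tac = rng.normal(size=(4, D_TAC))
v, t = align(vis, tac)

# Cosine-similarity matrix; a contrastive loss would push its diagonal
# (matching vision/touch pairs) toward 1 relative to the off-diagonal entries.
sim = v @ t.T
print(sim.shape)  # (4, 4)
```

In a full system, the aligned tactile embeddings would then be fed to the vision-language model alongside the visual tokens so that property-specific prompts can attend to both modalities.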
This research is supported by the Natural Science Foundation of Top Talent of SZTU (Grant No. GDRC202411).
@inproceedings{chen2025octopi-x,
title = {Octopi-X: Robotic Perception with a Large Tactile-Vision-Language Model for Physical Property Inference},
author = {Guo, Zexiang and Chen, Hengxiang and Mai, Xinheng and Qiu, Qiusang and Ma, Gan and Kappassov, Zhanat and Li, Qiang and Chen, Nutan},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop},
year = {2025},
url = {https://openreview.net/forum?id=fx00MuY7RW}
}
For inquiries, please contact:
zexiangguo@sztu.edu.cn
hengxiangchen@sztu.edu.cn