Physical Intelligence for Robotics

This research project focuses on developing advanced algorithms and models for physical intelligence. By integrating multimodal perception (via Apple Vision Pro), natural language understanding, and intelligent manipulation on the Mobile ALOHA platform, we aim to achieve autonomous, intelligent execution of manipulation tasks in unstructured real-world environments.

Young Robotics Team

Shengfeng Yang (PI), Jonathan Hickle, Junbin Cui, Adam Mohd Shahrizan, Fabricio Giusti, Gavin Forrest, Junho Yoon

Projects

Title: “Fine-tuning Foundation Models for Robotic Manipulation: Advancing VLA Models through Domain-Specific Adaptation”; Source of Support: NSF ACCESS; Award Number: #CIS251342 (750,000 computing credits); 12/2025–12/2026

Title: “Immersive Teleoperation for Mobile ALOHA: Advancing Vision-Language-Action Models through Spatial Computing”; Source of Support: Spatial Computing Hub at Purdue University; Support: Apple Vision Pro device; 12/2025–12/2026
