Physical Intelligence for Robotics

This research project focuses on developing advanced algorithms and AI models for physical intelligence. By integrating multimodal perception (via Apple Vision Pro), natural language understanding, and intelligent manipulation on the Mobile ALOHA platform, we aim to achieve autonomous, intelligent execution of complex manipulation tasks in unstructured real-world environments.

Young Robotics Team

Shengfeng Yang (PI), Jonathan Hickle, Junbin Cui, Adam Mohd Shahrizan, Fabricio Giusti, Gavin Forrest, Junho Yoon

Projects

Title: “Fine-tuning Foundation Models for Robotic Manipulation: Advancing VLA Models through Domain-Specific Adaptation”; Source of Support: NSF ACCESS; Award Number: #CIS251342 (750,000 GPU Computing Credits)

Title: “Immersive Teleoperation for Mobile ALOHA: Advancing Vision-Language-Action Models through Spatial Computing”; Source of Support: Spatial Computing Hub at Purdue University; Support: Apple Vision Pro device

Title: “Leader-Free Mobile ALOHA: Gamepad-IK Teleoperation and VLA Fine-Tuning for Laboratory Manipulation”

Project Website: https://shengfeng-yang.github.io/aloha-gamepad/
