Train your robot dog in real time with Vision Pro! MIT PhD student's open-source project goes viral

Vision Pro has yet another popular new use, and this time it's tied to embodied intelligence.

Like this: an MIT student used Vision Pro's hand-tracking feature to control a robot dog in real time.


Not only are actions such as opening a door captured accurately:


What's more, there is almost no latency.


As soon as the demo came out, netizens were not the only ones exclaiming "amazing"; embodied-intelligence researchers of all kinds got excited too.

For example, this prospective doctoral student from Tsinghua University:


Others boldly predict that this is how we will interact with the next generation of machines.


As for how it is implemented, the author, Younghyo Park, has open-sourced the project on GitHub. The accompanying app can be downloaded directly from the Vision Pro App Store.

Training robot dogs with Vision Pro

Specifically, let's take a look at the app the author developed: Tracking Streamer.

As the name suggests, the app uses Vision Pro to track human movements and streams that motion data in real time to other robotic devices on the same Wi-Fi network.

The motion tracking part mainly relies on Apple's ARKit library.

Head tracking calls queryDeviceAnchor; the user can reset the head frame to its current position by pressing and holding the Digital Crown.

Wrist and finger tracking are implemented through HandTrackingProvider, which tracks the position and orientation of the left and right wrists relative to the ground frame, as well as the poses of 25 finger joints on each hand relative to the wrist frame.
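Since the finger joints are reported relative to the wrist frame while the wrist itself is reported relative to the ground frame, getting a finger joint's pose in the ground frame amounts to composing the two transforms. A minimal pure-Python sketch of that chain (the 4x4 matrices and values below are made up for illustration, not taken from the project):

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical poses: the wrist sits 0.5 m along x from the ground origin
# at height 1.0 m, and a finger joint sits 0.25 m further along the
# wrist's x-axis (identity rotations to keep the numbers readable).
wrist_in_ground = [[1, 0, 0, 0.5],
                   [0, 1, 0, 0.0],
                   [0, 0, 1, 1.0],
                   [0, 0, 0, 1.0]]
finger_in_wrist = [[1, 0, 0, 0.25],
                   [0, 1, 0, 0.0],
                   [0, 0, 1, 0.0],
                   [0, 0, 0, 1.0]]

# Chain the frames: ground <- wrist <- finger.
finger_in_ground = matmul4(wrist_in_ground, finger_in_wrist)
print([row[3] for row in finger_in_ground[:3]])  # -> [0.75, 0.0, 1.0]
```

The same composition generalizes to rotated frames, which is why SE(3) matrices rather than bare positions are streamed.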

picture

For network communication, the app streams the data over gRPC, which lets a wide range of devices subscribe to it, including Linux, Mac, and Windows machines.
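gRPC's server-streaming model fits this setup well: the Vision Pro side pushes a stream of pose messages, and any client on the network simply iterates over that stream. The actual message schema is defined in the repository; the sketch below only illustrates the client-side consumption pattern, with the stream stood in by a plain generator (all names and fields here are hypothetical):

```python
def fake_pose_stream(n=3):
    """Stand-in for a gRPC server-streaming call; a real client would
    iterate over something like stub.StreamPoses(request) instead."""
    for i in range(n):
        # Each message carries a timestamp and a 4x4 head pose.
        yield {"timestamp": i * 0.01,
               "head": [[1, 0, 0, 0.0],
                        [0, 1, 0, 0.0],
                        [0, 0, 1, 1.0 + i],
                        [0, 0, 0, 1.0]]}

# The consumption loop has the same shape whether the iterable is a
# local generator or a live gRPC response stream.
heights = [msg["head"][2][3] for msg in fake_pose_stream()]
print(heights)  # -> [1.0, 2.0, 3.0]
```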

In addition, to make the data easier to consume, the author provides a Python API that lets developers programmatically subscribe to and receive the tracking data streamed from Vision Pro.

The API returns the data as a dictionary containing the SE(3) poses of the head, wrists, and fingers, that is, their three-dimensional positions and orientations. Developers can process this data directly in Python for further analysis and robot control.
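Assuming each pose arrives as a 4x4 homogeneous SE(3) matrix (a common convention, though the exact array shapes are the project's to define), splitting one entry of that dictionary into a position vector and a rotation matrix is straightforward. The dictionary keys below are illustrative, not necessarily the API's real ones:

```python
def split_se3(pose):
    """Split a 4x4 homogeneous SE(3) matrix (nested lists) into a
    3-vector position and a 3x3 rotation matrix."""
    position = [pose[i][3] for i in range(3)]
    rotation = [row[:3] for row in pose[:3]]
    return position, rotation

# Hypothetical frame of tracking data, shaped like the article describes:
# one SE(3) pose per tracked body part.
frame = {
    "head":        [[1, 0, 0, 0.0], [0, 1, 0, 0.0],
                    [0, 0, 1, 1.6], [0, 0, 0, 1.0]],
    "right_wrist": [[0, -1, 0, 0.2], [1, 0, 0, 0.1],
                    [0, 0, 1, 1.1], [0, 0, 0, 1.0]],
}

pos, rot = split_se3(frame["right_wrist"])
print(pos)  # -> [0.2, 0.1, 1.1]
print(rot)  # a 90-degree rotation about the z-axis
```

From here the position can feed a robot's end-effector target directly, while the rotation can be converted to whatever orientation representation the controller expects.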

As many practitioners have pointed out, although the robot dog's movements are still controlled by a human, when this setup is combined with imitation-learning algorithms, the human acts less like a controller and more like the robot's coach.

By tracking the user's movements, Vision Pro provides an intuitive, simple way to interact, allowing even non-experts to supply accurate training data for robots.

The author himself also wrote in the paper:

In the near future, people may wear devices like the Vision Pro every day, the way they wear glasses. Imagine how much data we could collect in the process!

This is a promising source of data from which robots can learn how humans interact with the real world.

Finally, a reminder: if you want to try this open-source project, besides a Vision Pro you will also need:

  • Apple Developer Account
  • Vision Pro Developer Strap ($299)
  • Mac computer with Xcode installed

Well, it seems Apple gets to profit first (doge).

Project link: https://github.com/Improbable-AI/VisionProTeleop?tab=readme-ov-file