Apple Vision Pro Vulnerability Lets Attackers Steal Information Through Users' Eye Movements
A security vulnerability was recently disclosed in Apple's Vision Pro mixed reality headset. If successfully exploited, it allows attackers to infer the specific text a user enters on the device's virtual keyboard.
The attack technique is named GAZEploit, and the underlying vulnerability is tracked as CVE-2024-40865.
Researchers at the University of Florida described it as a novel attack: an adversary can infer eye-related biometrics from avatar images and reconstruct typed text through gaze tracking. GAZEploit exploits the inherent weakness of gaze-controlled text entry when users share a virtual avatar.
After the vulnerability was disclosed, Apple resolved the issue in visionOS 1.3, which was released on July 29, 2024. According to Apple, the vulnerability affected a component called "Presence".
The company said in a security advisory that virtual keyboard input could be inferred from Persona, and it addressed the issue by "pausing Persona when the virtual keyboard is activated."
The researchers found that an attacker can determine what a headset wearer is typing on the virtual keyboard by analyzing the virtual avatar's eye movements, or "gaze", exposing the user's private data.
Because avatars are commonly shared through video calls, online conferencing apps, and live-streaming platforms, an attacker who captures such footage can perform keystroke inference remotely and use it to extract sensitive information such as passwords typed by the user.
The attack proceeds in two stages. First, supervised learning models trained on Persona recordings, using eye aspect ratio (EAR) and eye gaze estimation, distinguish typing sessions from other VR activities such as watching a movie or playing a game. Then, gaze directions on the virtual keyboard are mapped to specific keys, taking the keyboard's position in virtual space into account, to determine likely keystrokes.
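The two building blocks above can be illustrated with a minimal sketch. The EAR formula is the standard one from facial-landmark research (Soukupová & Čech); the six-point eye layout, the key-center coordinates, and the nearest-key lookup are hypothetical simplifications, not the researchers' actual pipeline:

```python
import math


def eye_aspect_ratio(landmarks):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six (x, y) eye
    landmarks p1..p6; a low EAR indicates a closed or narrowed eye."""
    p1, p2, p3, p4, p5, p6 = landmarks

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def nearest_key(gaze_point, key_centers):
    """Map an estimated on-keyboard gaze point to the closest key center
    (a stand-in for the attack's gaze-to-key mapping step)."""
    return min(
        key_centers,
        key=lambda k: math.hypot(
            key_centers[k][0] - gaze_point[0],
            key_centers[k][1] - gaze_point[1],
        ),
    )


# Hypothetical fragment of a key layout in normalized keyboard coordinates.
keys = {"q": (0.05, 0.2), "w": (0.15, 0.2), "e": (0.25, 0.2)}
print(nearest_key((0.14, 0.22), keys))  # -> w
```

In the real attack, the EAR signal feeds a classifier that detects typing sessions, and the gaze-to-key mapping must first recover the keyboard's pose in virtual space; both are far more involved than this sketch.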
The researchers said that by remotely capturing and analyzing the avatar's video, an attacker can reconstruct the keystrokes the user typed. GAZEploit is the first known attack to exploit leaked gaze data for remote keystroke inference.