Microsoft Research - Towards Safer Augmented Reality: Identifying, Evaluating, and Mitigating Security & Privacy Threats
Kaiming Cheng, a PhD candidate at the University of Washington, discusses his research on the security and privacy of augmented reality (AR) systems. He highlights the rapid adoption of AR technologies, particularly glasses-style devices, and their advanced features like live translation and scene understanding. However, these advancements also introduce new security and privacy risks, such as cognitive attacks and privacy breaches through facial recognition. Cheng's research aims to develop a comprehensive protection framework to address these evolving threats.
He presents findings from his studies on eye tracking and hand tracking permissions in AR devices, revealing differences in privacy practices among major platforms like Oculus, Microsoft HoloLens, and Apple Vision Pro. His research shows that while some platforms offer more privacy-preserving mechanisms, users often lack understanding of these protections. Cheng suggests improvements such as opt-in features for data sharing and clearer communication of privacy measures. Additionally, he explores UI security in AR, identifying vulnerabilities like clickjacking and synthetic input attacks, and recommends strategies to mitigate these risks. His work emphasizes the need for ongoing research and collaboration to address the complex challenges in AR security and privacy.
Key Points:
- AR technologies offer advanced features but pose new security and privacy risks.
- Eye and hand tracking in AR devices have varying privacy practices across platforms.
- Users often misunderstand privacy protections, highlighting the need for clearer communication.
- Vulnerabilities like clickjacking and synthetic input attacks exist in AR interfaces.
- Ongoing research and collaboration are crucial to address AR security and privacy challenges.
Details:
1. 🎓 Introducing Kaiming Cheng and His AR Work
- Kaiming Cheng is a PhD candidate at the University of Washington, focusing on the intersection of augmented reality (AR) and safety.
- His research addresses security, privacy, and safety concerns in AR technologies, highlighting potential risks such as data breaches and user privacy violations.
- He is developing comprehensive strategies to mitigate these risks, aiming to enhance the overall safety and reliability of AR systems.
- Cheng's work represents a significant academic commitment to ensuring safe and secure use of emerging AR technologies, potentially influencing future industry standards.
2. 🌐 Joining Meta and Future Work
- Cheng, an expert in augmented reality safety and security, is joining the Meta PyTorch Edge team.
- The new role will focus on applying his expertise to Meta's Ray-Ban AI glasses and smart wearable devices.
- The expert will enhance security protocols and integrate advanced AI features into Meta's wearable technology.
- This role is strategically significant for Meta's expansion in the smart wearables market, aiming to improve device functionality and user safety.
- The transition represents a commitment to bolstering Meta's technological capabilities and market position through innovative applications.
3. 🔍 Research Focus on AR Security and Privacy
- Kaiming Cheng is a final-year PhD candidate at the University of Washington, advised by Franziska Roesner and Tadayoshi Kohno.
- His research is focused on the security and privacy of augmented reality (AR).
- The presentation is taking place at Microsoft, indicating potential industry collaboration or interest.
- Specific research projects include developing secure AR frameworks to protect user data and privacy.
- Investigating the impact of AR on user perception and potential security vulnerabilities.
- Exploring collaborations with tech companies to implement research findings in real-world applications.
4. 📈 Evolution and Benefits of AR Technology
4.1. Historical Context and Recent Advancements in AR Technology
4.2. Impact of AR Technology on Industries
5. 🚨 Security and Privacy Risks in AR
5.1. Introduction to AR Security Risks
5.2. Research and Cognitive Attacks
6. 🔑 Understanding AR Threats and Permissions
- Two college students successfully demonstrated how off-the-shelf AR glasses could be combined with facial recognition technology to identify individuals in public, highlighting significant privacy issues.
- The demonstration underscores the urgent need for robust security measures and frameworks to address both present and emerging threats in AR technology.
- Current industry mitigations include data encryption and restricted access policies, but these may not be sufficient to counteract the full spectrum of AR-related risks.
- There is a pressing need for the industry to anticipate future threats and develop comprehensive strategies that ensure user privacy and data protection as AR adoption grows.
7. 👁️‍🗨️ Eye and Hand Tracking in AR
7.1. Privacy and Security Implications of Eye and Hand Tracking
7.2. User Interaction and Vulnerabilities in AR
8. 🛡️ User Security and Privacy in AR
- Today's AR headsets are equipped with sophisticated sensors for hand tracking and eye tracking, introducing both exciting opportunities and new privacy threats.
- By combining eye tracking and hand tracking, users can navigate AR environments using eye movements and hand gestures, enhancing immersive experiences.
- Eye tracking data can improve avatar realism in virtual settings and optimize system functions like rendering and power consumption.
- Research highlights privacy concerns associated with eye tracking and hand tracking data, despite their potential benefits.
9. 📋 Methodology and Survey Findings
- AR devices capture data that can reveal sensitive user attributes, such as identity and interest levels.
- It is crucial to understand the permission design space with new sensing technologies as millions of users start using AR devices for the first time.
- Research questions focus on the current technical landscape of eye and hand-tracking permissions in AR platforms, user feelings about permission flows, comprehension of permission details, capabilities, privacy risks, and factors affecting AR technology adoption.
- The study finds that eye- and hand-tracking data is processed on-device rather than in the cloud, though platforms vary in how they process it.
- Methodology includes structured brainstorming to identify relevant properties of eye and hand-tracking, followed by evaluation of each property.
- The survey reveals that understanding and managing permissions is critical to user trust and adoption of AR technologies.
- Participants expressed concerns about privacy risks, emphasizing the need for transparent and user-friendly permission flows.
- Detailed insights into user comprehension and the impact of permission settings on privacy perceptions were gathered.
- The evaluation of eye and hand-tracking technologies highlights the need for standardized permission settings across platforms.
10. 🔍 Analyzing AR Platforms' Design Choices
10.1. Eye-Tracking Permissions and Privacy
10.2. Hand-Tracking Permissions and Data Management
10.3. User Perception and Experience
11. 🤔 Privacy Concerns with AR Data Usage
- HoloLens and Vision Pro allow app access to hand tracking data without explicit user permission due to built-in hand tracking capabilities, unlike Oculus which requires additional permissions. This raises privacy concerns as users may not be aware of data collection.
- An API provides abstracted hand tracking data, representing hand joint movements in different axes, essential for app interaction. However, the abstraction may obscure the amount and type of data being collected.
- Apps can access hand tracking data in the background if user settings allow it, enabling functionalities like gesture recognition (e.g., pinch, point) and metrics such as hand direction, length, and velocity. This background access could lead to unauthorized data usage if not properly managed.
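The abstracted joint data described above is enough to derive gestures and motion metrics in a few lines. The sketch below is illustrative Python, not any platform's real API: the joint names, the coordinate convention, and the 2 cm pinch threshold are all assumptions.

```python
import math

# Hypothetical abstracted hand-tracking sample: joint name -> (x, y, z) in meters.
# Joint names and the pinch threshold are illustrative, not a real platform API.
PINCH_THRESHOLD_M = 0.02  # fingertips closer than 2 cm counts as a pinch

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinch(joints):
    """Detect a pinch gesture from thumb-tip / index-tip proximity."""
    return distance(joints["thumb_tip"], joints["index_tip"]) < PINCH_THRESHOLD_M

def hand_velocity(prev_joints, curr_joints, dt):
    """Speed of the wrist joint (m/s) between two frames captured dt seconds apart."""
    return distance(prev_joints["wrist"], curr_joints["wrist"]) / dt

# Two consecutive frames: fingers apart in the first, pinched in the second.
frame1 = {"wrist": (0.0, 1.0, 0.5), "thumb_tip": (0.10, 1.1, 0.5), "index_tip": (0.15, 1.1, 0.5)}
frame2 = {"wrist": (0.0, 1.0, 0.4), "thumb_tip": (0.10, 1.1, 0.4), "index_tip": (0.11, 1.1, 0.4)}
```

Even this trivially derived signal (pinch timing, hand speed) illustrates why background access to joint data is privacy-sensitive: an app can reconstruct fine-grained behavior without ever seeing raw camera frames.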
12. 📱 Survey on User Perception and Comprehension
12.1. Potential Privacy Risks
12.2. Survey Design and Findings
13. 📝 Recommendations for AR Platforms
13.1. User Preferences for AR Platforms
13.2. Data Privacy and User Comprehension
14. 🔒 Apple's Privacy Measures in AR
- Apple ensures that eye tracking data is not accessible to applications, developers, or even Apple itself, enhancing privacy by keeping data processing local to the device.
- Cheng recommends implementing opt-in and opt-out features for data sharing to give users more control over their personal data.
- He encourages Apple to clearly explain its privacy-preserving mechanisms to improve user understanding and comfort with the technology.
- Oculus and HoloLens protect privacy by abstracting eye-tracking data, but studies indicate even abstracted data can pose privacy risks, suggesting a need for increased privacy guarantees.
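To make the abstraction idea concrete, here is a minimal Python sketch of how a platform might expose only "which object has gaze focus" to apps while keeping raw gaze samples private. The function names and the 2D bounding-box model are hypothetical simplifications, not any platform's actual API.

```python
# Illustrative gaze-data abstraction: the platform processes raw gaze samples,
# but apps only receive the identifier of the currently focused object.

def focused_object(gaze_point, objects):
    """Return the id of the object whose 2D bounds contain the gaze point, if any."""
    x, y = gaze_point
    for obj_id, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj_id
    return None

def abstract_gaze_stream(raw_samples, objects):
    """Collapse a raw sample stream into focus-change events (the app-visible view)."""
    events, last = [], object()  # sentinel so the first sample always emits an event
    for sample in raw_samples:
        target = focused_object(sample, objects)
        if target != last:
            events.append(target)
            last = target
    return events

objects = {"ad_banner": (0.0, 0.0, 1.0, 1.0), "settings_menu": (2.0, 2.0, 3.0, 3.0)}
samples = [(0.5, 0.5), (0.6, 0.4), (2.5, 2.5), (5.0, 5.0)]
print(abstract_gaze_stream(samples, objects))  # ['ad_banner', 'settings_menu', None]
```

Note that even this abstracted stream still reveals what drew the user's attention and for how long, which is exactly why the studies cited above find abstracted data can remain a privacy risk.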
15. 🎭 Balancing Utility and Privacy in AR
- AR technology, while useful for applications like health assessments via eye-tracking, poses significant privacy risks by potentially exposing sensitive user data.
- There is a critical concern regarding the privacy of data collected through AR glasses, as even anonymized data can infer personal interests.
- To mitigate these risks, stronger privacy protections for eye-tracking and other sensitive data are recommended.
- Research suggests that even without direct data access, side-channel attacks can deduce user attention, underscoring persistent privacy challenges.
- Apple's strict privacy protocols can restrict developers' creative freedom, impacting the diversity of AR applications.
- The balance between privacy and utility is complex, requiring innovative permission models and threat analysis to address.
- Different AR platforms implement varied privacy measures, and ongoing studies are examining user responses to these approaches.
- Examples of privacy breaches could include unauthorized data access through AR applications, highlighting the need for robust security measures.
- The implications of privacy measures on user experience could include reduced functionality or personalization in AR apps, necessitating a balance between user security and application utility.
16. 📚 Summary of AR Security Research
16.1. Apple Vision Pro Security Insights
16.2. Microsoft HoloLens Security Insights
16.3. MetaQuest Pro Security Insights
16.4. General Findings and Recommendations
17. 🖼️ Study on UI Security in AR
- The study, a University of Washington collaboration accepted to USENIX Security 2024, investigates UI security properties in AR, aiming to lay a foundation for systematic evaluation of AR platforms and SDKs with an emphasis on UI-level security.
- AR platforms are rapidly growing, each with its own SDK for third-party developers to create immersive experiences, presenting unique security challenges.
- The study identifies key UI-related security and privacy issues, such as attackers inferring information about a user's surroundings and the ability to obscure real-world content.
- User interaction with AR involves perceiving the physical world, engaging with virtual content, and the potential security risks associated with these interactions.
- Recommendations include enhancing SDK security features and developing guidelines for developers to mitigate identified risks.
18. 🕵️‍♂️ Exploring UI-Level Attacks in AR
- The AR threat model involves multiple principals, including third-party embedded code that attempts to interfere with the AR content or interactions of others.
- Five UI-level attacks were explored, with clickjacking used as a motivating example, illustrating how users can be misled into interacting with deceptive elements.
- In AR clickjacking, a malicious third-party app overlays a deceptive UI element (e.g., a blue box) over an ad (e.g., a red box) to capture user interactions intended for the ad.
- A proof of concept was tested on Apple's ARKit, demonstrating how user input on a benign-looking object (blue box) is redirected to a hidden malicious object (red box).
- This attack leverages the 'same space' property, where overlapping virtual objects compete for rendering priority and user input detection.
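The "same space" problem can be modeled in a few lines: clickjacking becomes possible when the rendering decision (what the user sees on top) and the hit-testing decision (what receives the tap) disagree. The following is an illustrative Python simulation of that mismatch, not real ARKit code; the class and attribute names are assumptions.

```python
# Minimal model of two virtual objects occupying the same space. The platform
# must decide (a) which one the user sees and (b) which one receives the tap.
# When those decisions disagree, the visible and interacted objects differ --
# the essence of AR clickjacking.

class VirtualObject:
    def __init__(self, name, distance, render_priority):
        self.name = name
        self.distance = distance                # meters from the user
        self.render_priority = render_priority  # higher draws on top

def visible_object(objects):
    """What the user sees: the object drawn on top."""
    return max(objects, key=lambda o: o.render_priority)

def interacted_object(objects):
    """What receives the tap: the object nearest the user along the input ray."""
    return min(objects, key=lambda o: o.distance)

# A benign-looking blue box is drawn on top, but the hidden red box sits
# fractionally closer to the user and therefore captures the tap.
blue = VirtualObject("blue_box", distance=1.00, render_priority=2)
red = VirtualObject("red_box", distance=0.99, render_priority=1)
scene = [blue, red]

print(visible_object(scene).name)     # blue_box
print(interacted_object(scene).name)  # red_box
```

A platform defends against this class of attack precisely when the two decisions are guaranteed to agree, which is what the consistency metrics in the next section measure.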
19. 🔍 Evaluating UI Security Properties
- A systematic evaluation was conducted on various AR platforms using metrics such as rendering order, interaction order, and consistency of rendering and interaction.
- The evaluation involved implementing test cases with native APIs on AR devices, using a state machine to coordinate event-driven test steps, and structuring the code for future extensions.
- Over 100 trials were conducted per property on each AR platform; each test case was run five times and rerun at different spatial locations to account for nondeterminism.
- The experiment revealed inconsistencies on the Oculus platform, with results varying based on user spatial location, indicating potential new attack vectors in 3D environments.
- Two key metrics related to clickjacking attacks were identified: interaction consistency and rendering consistency. The attack was found possible on Google AR platforms.
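The trial methodology above can be sketched as a simple harness: run each property check repeatedly at several spatial locations and flag the property as inconsistent if outcomes ever disagree. The harness below is a simplified stand-in; on a real device each check would drive the platform's native APIs, and the example check is a hypothetical one constructed to show a location-dependent outcome.

```python
from collections import Counter

def run_trials(property_check, locations, runs_per_location=5):
    """Run a property check repeatedly at each location; tally outcomes."""
    results = {}
    for loc in locations:
        results[loc] = Counter(property_check(loc) for _ in range(runs_per_location))
    return results

def is_consistent(results):
    """A property is consistent if every trial at every location agrees."""
    outcomes = {o for counts in results.values() for o in counts}
    return len(outcomes) == 1

# Hypothetical rendering-order check that flips at one spatial location,
# mimicking the location-dependent inconsistency observed on Oculus.
def rendering_order_check(location):
    return "blue_on_top" if location != (2, 0) else "red_on_top"

results = run_trials(rendering_order_check, locations=[(0, 0), (1, 0), (2, 0)])
print(is_consistent(results))  # False: the outcome depends on spatial location
```

Rerunning at multiple locations matters because, as the Oculus results show, a property can look deterministic from any single vantage point and still vary across the 3D environment.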
20. 🛡️ Defense Strategies for AR UI Security
- All AR platforms, including Apple's ARKit and Microsoft's HoloLens, are vulnerable to invisible virtual object attacks under different conditions.
- Invisible objects can hijack user input, akin to denial of service attacks, by placing them between users and their intended targets.
- Techniques for creating invisible objects include altering alpha values, disabling rendering, and using customized materials.
- Different platforms implement invisibility features differently, which can be exploited for denial of service attacks.
- By wrapping target objects with transparent layers, user interactions can be blocked, preventing selection or input.
- Invisible objects can also be used for creating fake ads, similar to those demonstrated in proof of concept attacks.
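The transparent-wrapper denial of service can be illustrated with a toy hit-testing model: an alpha-zero object still participates in hit testing, so it intercepts the input ray before the intended target ever sees it. A possible defense, also sketched, is to exclude objects the user cannot actually see. Both functions are illustrative assumptions, not any platform's documented behavior.

```python
# Toy model of hit testing along the user's gaze/input ray.

class SceneObject:
    def __init__(self, name, distance, alpha):
        self.name = name
        self.distance = distance  # meters along the user's input ray
        self.alpha = alpha        # 0.0 = fully transparent (invisible)

def first_hit(objects):
    """Naive hit testing by distance alone: transparency does not exclude an object."""
    return min(objects, key=lambda o: o.distance)

def first_visible_hit(objects):
    """Sketch of a defense: only objects the user can see may capture input."""
    visible = [o for o in objects if o.alpha > 0.0]
    return min(visible, key=lambda o: o.distance) if visible else None

button = SceneObject("confirm_button", distance=1.0, alpha=1.0)
wrapper = SceneObject("invisible_wrapper", distance=0.9, alpha=0.0)

print(first_hit([button, wrapper]).name)          # invisible_wrapper (input hijacked)
print(first_visible_hit([button, wrapper]).name)  # confirm_button (defense applied)
```

A real defense is subtler than this filter, since legitimate apps sometimes need invisible colliders; the point is only that input capture and visibility must be reasoned about together.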
21. 🔍 Open Challenges in AR Security
21.1. Input Provenance and Synthetic Input Vulnerabilities
21.2. Defensive Strategies and Design Considerations
21.3. Framework Application and Future Research
22. 🤝 Community Efforts in AR Security
- Designing contextual indicators for bystanders to opt out of being recorded by AR glasses is a key challenge in AR.
- Safety-aware AR content placement is essential while users interact with the real world.
- There is a focus on deploying privacy-aware AR experiences within power-constrained settings.
- Collective community efforts are necessary to tackle these challenges effectively.
- A workshop at the IEEE ISMAR conference in Bellevue highlighted AR design with a focus on security, privacy, and safety.
- The workshop had over 30 participants from 16 different institutions, demonstrating broad interest and commitment.
- Key outcomes included discussions on best practices for privacy indicators and innovative approaches to safety-aware content placement.
23. 🎤 Q&A and Closing Remarks
- Current bystander opt-out methods often involve LED light indicators on devices like Alexa or Google Home, but there's a lack of universal understanding of these indicators. Different colors and states can lead to confusion about whether recording is happening.
- A proposed solution involves giving bystanders control, such as using a hand gesture to disable recording when an AR device is aimed at them.
- Exploration of location-based privacy policies could automatically opt out bystanders in certain areas, although this is still under research.
- Physical measures, like wearing adversarial clothing designed to obscure facial recognition, are similar to actions bystanders might take to protect privacy.
- Research published at CHI indicates that constant recording by wearable technology remains socially unacceptable, leading to discomfort, particularly when minors are present.
- Post-processing techniques, such as automatically blurring faces in recordings, are being explored to enhance privacy.
- There is a vast design space for developing effective bystander opt-out strategies.
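The face-blurring idea mentioned above can be sketched as a simple post-processing step: given a face bounding box from some separate detector, pixelate that region so the face is unrecoverable. The function below is an illustrative pure-Python sketch operating on a grayscale image stored as nested lists; the box format and block size are assumptions, and face detection itself is out of scope.

```python
# Pixelate a detected face region by averaging block x block cells in place.
# box is (x0, y0, x1, y1) with the image indexed as image[y][x].

def pixelate_region(image, box, block=4):
    x0, y0, x1, y1 = box
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            cells = [(y, x)
                     for y in range(by, min(by + block, y1))
                     for x in range(bx, min(bx + block, x1))]
            avg = sum(image[y][x] for y, x in cells) // len(cells)
            for y, x in cells:
                image[y][x] = avg
    return image

# 8x8 synthetic grayscale frame; pixelate a hypothetical 4x4 face region.
face = [[y * 10 + x for x in range(8)] for y in range(8)]
pixelate_region(face, box=(0, 0, 4, 4))
```

In a real pipeline this would run on the device before any recording is stored or shared, so the bystander's face never leaves the headset in recoverable form.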