Microsoft Research: The presentation discusses HQC, a code-based cryptography scheme selected by NIST for standardization, focusing on its design, security, and practical applications.
Microsoft Research: Elizabeth Hermans presents her research on estimating mental workload using fNIRS signals in a simulated flight task, highlighting its potential for adaptive pilot training.
Microsoft Research: The presentation by Kaiming Cheng focuses on identifying and evaluating security, privacy, and safety threats in augmented reality (AR) and proposes solutions to enhance user protection.
Computerphile: Nvidia's CUDA revolutionized computing by transforming GPUs from graphics-focused to versatile computing tools, enabling efficient parallel processing for AI and supercomputing.
Microsoft Research - Hamming Quasi-Cyclic
Edoardo Persichetti presents HQC, a code-based cryptography scheme recently selected by NIST for standardization. HQC stands for Hamming Quasi-Cyclic, and it represents a shift from traditional code-based cryptography by making the decoder public and using a mask as the trapdoor. The scheme combines a public, efficiently decodable code with random quasi-cyclic codes: the public code corrects errors, while the quasi-cyclic structure is used for generating compact ciphertexts. This separation allows for efficient error correction and secure encryption. HQC's security is based on a variant of the syndrome decoding problem, and it is designed to be resistant to known attacks, including those exploiting quasi-cyclic structures. The presentation also covers the parameterization of HQC, emphasizing its efficient decoding process and the rigorous analysis of its decoding failure rate (DFR). HQC's performance is competitive, with a focus on security, making it suitable for general-purpose cryptographic applications. The scheme's design balances security and efficiency, with a strong foundation in well-studied cryptographic problems.
Key Points:
- HQC uses a public decoder and a mask as a trapdoor, separating error correction from encryption.
- The scheme is based on a variant of the syndrome decoding problem, ensuring strong security foundations.
- HQC's parameterization allows for efficient decoding and a negligible decoding failure rate.
- The scheme is designed to be resistant to attacks exploiting quasi-cyclic structures.
- HQC offers a balance between security and efficiency, suitable for general-purpose cryptographic applications.
Details:
1. 📢 Introduction and Opening Remarks
- The talk series is being recorded and will be published on YouTube as per the speaker's request. Participants can ask sensitive questions off the record at the end, ensuring privacy when needed.
- The cryptography talk series is announced through the 'crypto talk' distribution group, allowing interested individuals to stay informed about future sessions and discussions. This enables broader participation and engagement with the cryptography community.
2. 👨🏫 Speaker Introduction: Edoardo Persichetti and HQC Overview
2.1. Introduction to Edoardo Persichetti
2.2. Overview of HQC and Its Significance
3. 🔍 Background: Coding Theory Fundamentals
- HQC (Hamming Quasi-Cyclic) is highlighted by the speaker as a crucial development in code-based cryptography, recently selected by NIST in the fourth round of its post-quantum standardization process.
- The presentation aims to delve into mathematical concepts pertinent to HQC, its design, security measures, and performance evaluation.
- Audience interaction is encouraged, indicating a focus on collaborative understanding and exploration.
- HQC's recent announcement by NIST underscores its growing importance and relevance in the field, marking it as a subject of interest for researchers and practitioners.
4. 📘 Linear Codes and Decoding Challenges
- Linear codes are vector spaces over a finite field, characterized by their length (n) and dimension (k). The code rate r = k/n, often around 1/2, measures how much of each transmitted word carries information and varies with application needs.
- The Hamming metric is used to evaluate linear codes, measuring the number of non-zero positions to define distance and weight.
- The minimum distance, defined by the smallest Hamming distance between code words, is crucial for error detection and correction.
- Decoding challenges often involve finding efficient algorithms to correct errors within the constraints of the code's minimum distance.
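The Hamming weight and distance definitions above, together with the standard bound that a code of minimum distance d corrects up to t = ⌊(d−1)/2⌋ errors, can be sketched in a few lines (an illustrative sketch, not code from the talk):

```python
# Hamming weight and distance over F_2, the metric used to evaluate
# linear codes. Vectors are plain lists of bits.

def hamming_weight(v):
    """Number of non-zero positions in v."""
    return sum(1 for x in v if x != 0)

def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    assert len(u) == len(v)
    return sum(1 for a, b in zip(u, v) if a != b)

def correctable_errors(d_min):
    """A code with minimum distance d corrects t = (d - 1) // 2 errors."""
    return (d_min - 1) // 2
```

For example, a code with minimum distance 7 corrects up to 3 errors.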
5. 🔁 Quasi-Cyclic Codes and Polynomial Structures
- Linear codes, denoted [n, k, d] codes, are fundamental in encoding and decoding; a code is represented as a vector space with a basis written in matrix form, typically of dimensions k by n, where k is the dimension and n is the length.
- Generator matrices serve as a basis for these vector spaces, facilitating the creation of code words through linear combinations of basis vectors that encode messages.
- While the code itself is unique, its basis is not: a change of basis via a square invertible matrix yields another generator matrix, often put into systematic form with an identity matrix on the left.
- Quasi-cyclic codes, a specialized form of linear codes, leverage these polynomial structures for efficient encoding, balancing between error correction and computational complexity.
- Practical applications include enhancing data transmission reliability, where polynomial structures ensure robust error detection and correction capabilities.
- For example, in communications, quasi-cyclic codes are employed to optimize data integrity while maintaining manageable computational demands, improving overall system performance.
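The generator-matrix encoding described above can be sketched as follows, using a toy [4, 2] code in systematic form (the matrix is an assumption for illustration, not a parameter from the talk):

```python
# Encoding a message as a codeword: c = m * G over F_2.
# G is a k x n generator matrix in systematic form [I_k | P], so the
# first k bits of the codeword are the message itself.

def encode(m, G):
    k, n = len(G), len(G[0])
    assert len(m) == k
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

# Toy [4, 2] code with G = [I_2 | P]
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
codeword = encode([1, 1], G)  # -> [1, 1, 1, 0]
```

Because G is systematic, the message [1, 1] is visible as the first two bits of the codeword.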
6. 🧩 HQC Keys: Public and Private Structures
- The parity check matrix is a fundamental component in defining codes, serving as an alternative to the generator matrix. A vector c belongs to the code exactly when it satisfies the equation H · cᵀ = 0.
- With dimensions of (n−k) by n, the parity check matrix allows for the determination of the syndrome, H · cᵀ, which indicates the validity of a word. A zero syndrome confirms a valid code word, while a non-zero syndrome signals an invalid one.
- In practical application, the parity check matrix is essential for error detection and correction within coding theory, providing a systematic method for identifying errors in data transmission.
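The syndrome check can be sketched directly (the parity check matrix below is the one matching the toy systematic generator [I | P]; an illustration, not a parameter from the talk):

```python
# Syndrome check: a vector c is a codeword iff H * c^T = 0 over F_2.

def syndrome(H, c):
    return [sum(row[j] * c[j] for j in range(len(c))) % 2 for row in H]

def is_codeword(H, c):
    return all(s == 0 for s in syndrome(H, c))

# Parity check matrix H = [P^T | I_{n-k}] for a toy [4, 2] code
H = [[1, 0, 1, 0],
     [1, 1, 0, 1]]
```

A single flipped bit produces a non-zero syndrome, which is exactly how errors are detected.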
7. 📤 HQC Encryption and Decryption Explained
- Linear codes are vector spaces composed of code words, each representing a specific linear combination tied to a message.
- These codes are designed to encode information across noisy channels, aiming to correct a certain number of errors.
- Error-correcting codes come with decoding algorithms that rectify up to a specified number of errors, enhancing reliability.
- The effectiveness of error correction varies based on the type of code and its theoretical foundations.
- Without knowing the specific structure of the code, one is effectively faced with decoding a random code, for which no efficient algorithm is known.
- Algorithms such as the Berlekamp-Massey or other decoding algorithms are often used to identify and correct errors in these codes.
8. 🛡️ Security Analysis: Attacks and Defenses
8.1. General Decoding Problem (GDP)
8.2. Syndrome Decoding Problem (SDP)
9. 🔐 Decoding Strategies and Security Metrics
9.1. Error Correction Threshold
9.2. Information Set Decoding
9.3. HQC and Code-Based Cryptography
10. 🛠️ Code Optimization and Parameter Selection
- HQC eliminates the distinction between private and public codes, utilizing a public decoder to enhance error removal efficiency and streamline the cryptographic process.
- The trapdoor mechanism is simplified to a mask on the plaintext, improving efficiency by clearly separating the encryption and decryption tasks.
- Different codes are strategically used for creating ciphertext and error correction, optimizing the cryptographic process and improving overall system performance.
11. 📏 Performance Metrics and Parameterization
- Cyclic codes, which are invariant under cyclic shifts, allow all shifts of a code word to be part of the code, enhancing error correction capabilities.
- Quasi-cyclic codes improve efficiency and compactness, leading to more efficient cipher text generation, which is particularly useful in cryptographic applications.
- Circulant matrices, which represent cyclic codes, contain all rotations of the first row, demonstrating the structural properties of these codes.
- These matrices are isomorphic to the polynomial ring F_q[x]/(x^n − 1), so each code word corresponds to a polynomial of degree less than n.
- The use of cyclic and quasi-cyclic codes is crucial in optimizing performance in data transmission and encryption, as they allow for more compact and efficient data representation.
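The isomorphism between circulant matrices and the ring F_2[x]/(x^n − 1) can be checked concretely: multiplying a vector by a circulant matrix gives the same result as polynomial multiplication modulo x^n − 1 (a small sketch over F_2, not from the talk):

```python
# Circulant matrices over F_2 versus F_2[x]/(x^n - 1).
# Polynomials are coefficient lists of length n (index = degree).

def poly_mul_mod(a, b, n):
    """(a * b) mod (x^n - 1) over F_2."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    out[(i + j) % n] ^= 1
    return out

def circulant(row):
    """Matrix whose rows are all cyclic rotations of the first row."""
    n = len(row)
    return [row[-i:] + row[:-i] for i in range(n)]
```

For instance, with n = 3, multiplying the vector (1, 1, 0) by the circulant matrix of (1, 0, 1) matches the product (1 + x)(1 + x²) mod (x³ − 1).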
12. 💡 HQC's Future and Standardization Prospects
- Cyclic codes allow for any cyclic shift of a code word, making them robust and versatile for error correction.
- Quasi-cyclic codes extend this concept, being invariant under cyclic shifts by a fixed step, and are represented by block-circulant matrices useful in complex coding systems.
- The focus is on binary polynomials, which simplifies calculations using binary arithmetic, crucial for efficient encoding and decoding.
- Two-block quasi-cyclic codes are particularly significant, generating parity check matrices from two polynomials, h0 and h1, enhancing error detection capabilities.
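A two-block quasi-cyclic parity check matrix is simply two circulant blocks side by side, one per polynomial (a structural sketch with tiny made-up polynomials, not HQC's real parameters):

```python
# Two-block quasi-cyclic parity-check matrix: H = [C(h0) | C(h1)],
# where C(h) is the n x n circulant matrix of the polynomial h.

def circulant(row):
    n = len(row)
    return [row[-i:] + row[:-i] for i in range(n)]

def two_block_parity_check(h0, h1):
    assert len(h0) == len(h1)
    C0, C1 = circulant(h0), circulant(h1)
    return [r0 + r1 for r0, r1 in zip(C0, C1)]  # horizontal concatenation
```

The resulting matrix has n rows and 2n columns, but is fully determined by the 2n coefficients of h0 and h1, which is the compactness advantage mentioned above.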
13. 🤔 Technical Q&A: Implementation Details
13.1. Systematic Form and Parity Check Matrix
13.2. Generating Polynomials and Quasi-Cyclic Code
13.3. Public and Private Keys in HQC
13.4. Encryption Process
13.5. Polynomial Representation
13.6. Block Form and Polynomial Ring Operations
14. ❓ Comparative Q&A: HQC vs Other Cryptosystems
- The HQC decryption process utilizes simple polynomial arithmetic, improving efficiency.
- Decryption includes computing v − u·y, where y is part of the private key, to isolate the code word plus an error term.
- Error terms are sparse and designed with weights within a threshold to ensure successful decoding.
- The error term's boundary is critical; it must remain below the decoding radius to prevent decryption failure.
- Without the private key, error correction exceeds the decoding radius, ensuring security against unauthorized decryption.
- The public key s is a high-weight polynomial, which masks the code word and requires successful decoding to remove.
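The masking mechanism described in these bullets can be illustrated with toy polynomial arithmetic over F_2[x]/(x^n − 1). The parameters below are tiny made-up values, not HQC's real ones; variable names follow the usual HQC notation (secret x, y; public h, s; ciphertext u, v):

```python
import random

def pmul(a, b, n):
    """Multiply polynomials in F_2[x]/(x^n - 1) (coefficient lists)."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    out[(i + j) % n] ^= 1
    return out

def padd(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sparse(n, positions):
    v = [0] * n
    for p in positions:
        v[p] = 1
    return v

n = 17
random.seed(0)
h = [random.randint(0, 1) for _ in range(n)]  # public random polynomial
x, y = sparse(n, [0, 5]), sparse(n, [2, 9])   # sparse private key
s = padd(x, pmul(h, y, n))                    # public key part (high weight)

r1, r2, e = sparse(n, [1]), sparse(n, [4]), sparse(n, [7])  # sparse noise
c = [0] * n                                   # toy codeword for the message
u = padd(r1, pmul(h, r2, n))                  # ciphertext part 1
v = padd(c, padd(pmul(s, r2, n), e))          # ciphertext part 2

# Decryption: v - u*y cancels the dense h terms, leaving c plus a
# sparse residual x*r2 + r1*y + e that the public decoder can remove.
residual = padd(v, pmul(u, y, n))
```

Over F_2 the algebra gives v − u·y = c + x·r2 + r1·y + e, and because all the factors are sparse, the residual stays within the decoding radius; without y, the dense mask s·r2 cannot be removed.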
15. 📊 HQC's Competitive Edge and Security Aspects
15.1. Error Distribution in HQC Encoding
15.2. Security Measures and Reliability
16. 🔄 Final Discussions and NIST's Feedback
- The security of the IND-CPA encryption scheme relies on a variant of the syndrome decoding problem, ensuring resilience against distinguishing attacks, which maintains the core challenge of the problem even in a quasi-cyclic code setup.
- The derivation of a Key Encapsulation Mechanism (KEM) from the encryption scheme follows standard practice among NIST candidates and uses the Fujisaki-Okamoto transformation for a tighter reduction, emphasizing the importance of starting from an IND-CPA-secure scheme.
- The HHK (Hofheinz-Hövelmanns-Kiltz) analysis of the Fujisaki-Okamoto transformation is noted for its efficiency in provable security, providing a robust method for achieving strong security guarantees.
- Feedback from NIST highlighted the importance of these transformations and the choice of starting points in maintaining security effectiveness, offering insights into aligning with best practices in cryptographic standards.
17. 👥 Audience Q&A and Closing Remarks
17.1. Modular Formulation and Security Considerations
17.2. Decoding Complexity and Algorithmic Insights
17.3. Parameter Selection and Code Construction
17.4. Implementation and Performance
17.5. Q&A and Comparative Analysis
Microsoft Research - Shining light on the learning brain: Estimating mental workload in a simulated flight task using optical fNIRS signals
Elizabeth Hermans, a PhD student, discusses her research on using fNIRS (functional near-infrared spectroscopy) to estimate mental workload in a simulated flight task. The study aims to optimize pilot training by adjusting task difficulty based on cognitive workload, thus enhancing learning efficiency. fNIRS measures brain blood oxygenation, which correlates with neuronal activity, providing insights into cognitive demands. The research involved collecting data from 13 subjects over multiple sessions, using fNIRS, ECG, and breathing signals. Hermans developed a data processing pipeline to filter and segment signals, extracting features to estimate workload through regression models. The study found that fNIRS, particularly from frontal brain regions, is effective for workload estimation, outperforming traditional EEG methods. Breathing signals also proved crucial, while heart rate was less informative. The research suggests fNIRS could be integrated into real-time adaptive training systems, potentially improving pilot training efficiency and reducing costs.
Key Points:
- fNIRS measures brain blood oxygenation, correlating with cognitive workload.
- Adaptive training systems can use fNIRS data to optimize task difficulty.
- Frontal brain regions provide the most informative fNIRS signals for workload estimation.
- Breathing signals are crucial for accurate workload estimation; heart rate is less so.
- fNIRS outperforms EEG in robustness for workload estimation, suggesting potential for real-time applications.
Details:
1. 🎤 Welcome and Overview of Presentation
- Elizabeth Hermans, a PhD student at KU Leuven, presented her research findings from a three-month internship in the audio and acoustics research group.
- The presentation is titled 'Shining Light on the Learning Brain: Estimating Mental Workload in a Simulated Flight Task Using Optical fNIRS Signals'.
- The research focuses on measuring mental workload using optical functional near-infrared spectroscopy (fNIRS) in a flight simulation task.
- The significance of this research lies in its potential applications in enhancing pilot training and performance by accurately estimating mental workload levels.
- The objective is to improve understanding of cognitive load management in high-stakes environments.
- This study could lead to advancements in adaptive training systems that respond to a pilot's mental workload in real-time.
2. 🧠 Decoding Workload Estimation with fNIRS
- Workload estimation is crucial for adaptive training as it measures task demands on individuals, ensuring tasks are challenging enough to optimize learning without leading to disengagement.
- In pilot training, particularly with VR simulators, optimizing learning speed is essential to reducing training costs and improving training efficiency.
- Signals from pilots in VR simulators are captured to estimate workload, providing a cognitive score that informs training adjustments.
- fNIRS plays a vital role in capturing and analyzing these signals to provide real-time feedback on cognitive workload, ensuring that training remains effective and efficient.
3. ✈️ fNIRS in Pilot Training: A Deep Dive
- Adaptive training systems leverage fNIRS data to tailor pilot training tasks based on brain activity, optimizing difficulty and enhancing learning rates.
- VR simulators utilize fNIRS by adjusting environmental variables like mist or wind to challenge pilots, guided by real-time brain activity data.
- Experiments show fNIRS measures brain blood oxygenation, providing insights into task difficulty and pilot workload by indicating increased cerebral activity during challenging tasks.
- Increased cerebral blood oxygenation detected by fNIRS indicates higher brain activity, helping adjust training tasks to match pilot capabilities more accurately.
4. 🔬 fNIRS Signal Processing Explained
- fNIRS employs optical methods using sources and detectors to measure neuronal activity through two wavelengths of light absorbed by oxygenated and deoxygenated hemoglobin.
- The absorption estimation of these wavelengths is achieved by analyzing scattered light captured by detectors, providing insights into cerebral blood oxygenation.
- Signals composed of low-frequency components and high-frequency cardiac noise offer data on respiration and cardiac patterns.
- Data is gathered from multiple channels using different optodes and detectors, revealing distinct oxygenated (red) and deoxygenated (blue) hemoglobin signals alongside heart rate patterns.
- A comprehensive study involving 13 subjects measured data 22 times daily over five days, aiming to estimate workload through fNIRS data analysis.
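The two-wavelength absorption step amounts to solving a small linear system via the modified Beer-Lambert law. A minimal sketch with placeholder extinction coefficients (real values come from published tables and include a differential pathlength factor; nothing below is taken from the talk):

```python
# Modified Beer-Lambert law: for each wavelength,
#   dOD = (eps_HbO * dHbO + eps_HbR * dHbR) * L
# Two wavelengths give a 2x2 linear system in (dHbO, dHbR).

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

def hemoglobin_changes(dOD1, dOD2, eps1_hbo, eps1_hbr,
                       eps2_hbo, eps2_hbr, L):
    """Recover (dHbO, dHbR) from optical-density changes at two wavelengths."""
    return solve_2x2(eps1_hbo * L, eps1_hbr * L,
                     eps2_hbo * L, eps2_hbr * L,
                     dOD1, dOD2)
```

This is why two wavelengths are needed: one equation per wavelength, two unknown concentration changes.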
5. 📊 Data Processing and Filtering Techniques
- Physiological signals, including fNIRS, ECG, and breathing, are collected and filtered for feature extraction, segmenting them into smaller pieces for improved analysis.
- A regression approach estimates a workload score between 0 and 100, indicating task difficulty and individual workload levels.
- The adaptive training system (ATS) provides the ground truth for these estimates, improving the accuracy of workload predictions.
- Correlation between predicted workload and ATS score functions as the evaluation metric, ensuring precise assessment of model performance.
- Research questions investigate the added value of ECG and breathing data when fNIRS data is available, and the effectiveness of signal segmentation versus full recordings.
- Feature selection and filtering physiological noise from fNIRS signals are emphasized to enhance prediction quality.
- Data collection occurs in realistic VR settings, with subjects in VR chairs, ensuring the findings are applicable to real-world scenarios.
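The evaluation metric described above, correlation between predicted workload and the ATS ground-truth score, is just the Pearson coefficient (a sketch; the study's actual implementation is not shown in the talk summary):

```python
# Pearson correlation between predicted workload scores and the
# adaptive training system (ATS) ground-truth scores (both 0-100).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

A value near 1 means the regression model tracks the ATS difficulty score closely; near 0 means the predictions carry no information about it.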
6. 🔍 Preprocessing and Feature Extraction for fNIRS
- Implemented data quality metrics per 10-second segments to estimate data quality and discard bad data, ensuring higher accuracy in fNIRS readings.
- Correlation analysis with ECG and heart rate power identifies and discards fNIRS signals that are poorly correlated, enhancing data reliability.
- Signals with high correlation to heart rate are sometimes noise, necessitating their removal to maintain data integrity.
- Initial retention strategy required all channels valid simultaneously, retaining only 20% of recordings, indicating a need for improvement in data utilization.
- Revised retention strategy permits retention with just one valid channel per group (frontal, central, occipital, left, right), significantly increasing data retention rates.
- The new strategy achieves 85% data retention, providing ample data for feature extraction, compared to the initial 20%.
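The revised retention rule, keep a recording if every channel group has at least one valid channel, can be sketched as a predicate (group names follow the ones listed above; the helper itself is an assumption, not the study's code):

```python
# Channel retention rule: a recording is kept when each channel group
# (frontal, central, occipital, left, right) has >= 1 valid channel,
# instead of requiring every individual channel to be valid.

def retain(valid_channels, groups):
    """valid_channels: set of channel names passing quality checks;
    groups: dict mapping group name -> list of its channel names."""
    return all(any(ch in valid_channels for ch in channels)
               for channels in groups.values())
```

Relaxing the all-channels requirement to one-per-group is what moved retention from 20% to 85% of recordings.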
7. 🧪 Advanced Techniques for Feature Extraction
- ECG signals were filtered between 0.5 and 40 Hz to clearly isolate heartbeats, while respiration signals were filtered between 0.1 and 0.5 Hz due to their low frequency nature.
- Feature extraction for ECG and respiration included heart rate and heart rate variability, as well as respiration rate and respiration variability, which are indicators of stress or workload.
- For fNIRS (functional near-infrared spectroscopy), a variety of features were extracted, focusing mainly on low-frequency signals and statistical features from the time domain.
- Correlations and time lags between fNIRS signals and other physiological signals like ECG and respiration were analyzed to understand interactions, including heart rate power in fNIRS.
- Correlations between oxygenated and deoxygenated fNIRS signals were quantified to provide additional insights.
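The band-pass filtering mentioned above (0.5-40 Hz for ECG, 0.1-0.5 Hz for respiration) can be illustrated with a naive DFT-based filter. Real pipelines would use a proper IIR/FIR design; this pure-Python O(n²) version only shows the idea of keeping the frequency bins inside the band:

```python
# Naive band-pass filter: DFT, zero out bins outside [lo, hi] Hz
# (and their mirrored counterparts), inverse DFT.

import cmath

def bandpass(x, fs, lo, hi):
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        f = min(k, n - k) * fs / n  # frequency of bin k (with mirror)
        if not (lo <= f <= hi):
            X[k] = 0
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]
```

Feeding in a signal with a DC offset plus a 1 Hz component and filtering to 0.5-4 Hz removes the offset and keeps the oscillation.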
8. 📈 Grouping Channels and Regression Techniques
- Heart rate segmentation is used to synchronize HbO signals, enabling the averaging of fNIRS signals per heartbeat for clearer data.
- fNIRS, being sampled at a lower frequency than ECG, requires careful segmentation and averaging to ensure reliable readings, typically producing 4-5 data points per heartbeat.
- Overlaying heartbeat segments results in a smooth fNIRS signal, revealing oxygenation spikes and corresponding deoxygenated hemoglobin patterns.
- Key features analyzed include peak-to-peak amplitude and signal slopes, which are modeled using both linear and exponential functions to capture variations.
- Analyzing the delay between HbO and HbR signals provides insights into the timing of oxygenation changes in relation to the heartbeat, potentially indicating cardiovascular health or workload.
- Hypothesized delays, such as the time for blood to reach the brain post-heartbeat, are also examined for their correlation with health metrics.
- The standard deviation of fNIRS signals between heartbeats is measured to assess signal variation.
- Averaging features across groups of channels enhances data quality and reduces variability, improving the reliability of the analysis.
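The heartbeat-locked averaging described above can be sketched as: cut the fNIRS signal at R-peak sample indices, resample each beat to a fixed length, and average. The fixed length of 5 points reflects the 4-5 samples per beat mentioned in the text; the function itself is an illustrative assumption, not the study's code:

```python
# Segment an fNIRS signal at heartbeat (R-peak) indices, resample each
# beat to a common length, and average the segments.

def segment_and_average(signal, peak_indices, points_per_beat=5):
    segments = []
    for start, end in zip(peak_indices, peak_indices[1:]):
        beat = signal[start:end]
        if len(beat) < 2:
            continue
        # nearest-neighbour resampling to a common length
        resampled = [beat[int(i * (len(beat) - 1) / (points_per_beat - 1))]
                     for i in range(points_per_beat)]
        segments.append(resampled)
    n = len(segments)
    return [sum(seg[i] for seg in segments) / n
            for i in range(points_per_beat)]
```

Averaging many beat-aligned segments suppresses noise that is not locked to the cardiac cycle, which is what produces the smooth per-beat waveform mentioned above.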
9. 🤖 Machine Learning Models and Results
9.1. Model Descriptions and Ablation Studies
9.2. Results and Fusion Strategies
10. 🔍 Evaluating fNIRS and Future Directions
10.1. Current Insights on fNIRS
10.2. Future Directions for fNIRS
11. 💬 Interactive Q&A and Closing Remarks
11.1. fNIRS vs EEG Insights
11.2. Heart Rate Impact and Estimation
11.3. Signal Smoothing and Heartbeat Estimation
Microsoft Research - Towards Safer Augmented Reality: Identifying, Evaluating, and Mitigating Security & Privacy Threats
Kaiming Cheng, a PhD candidate from the University of Washington, discusses his research on the security and privacy of augmented reality (AR) systems. He highlights the rapid adoption of AR technologies, particularly glasses-style devices, and their advanced features like live translation and scene understanding. However, these advancements also introduce new security and privacy risks, such as cognitive attacks and privacy breaches through facial recognition. Cheng's research aims to develop a comprehensive protection framework to address these evolving threats.
He presents findings from his studies on eye tracking and hand tracking permissions in AR devices, revealing differences in privacy practices among major platforms like Oculus, Microsoft HoloLens, and Apple Vision Pro. His research shows that while some platforms offer more privacy-preserving mechanisms, users often lack understanding of these protections. Cheng suggests improvements such as opt-in features for data sharing and clearer communication of privacy measures. Additionally, he explores UI security in AR, identifying vulnerabilities like clickjacking and synthetic input attacks, and recommends strategies to mitigate these risks. His work emphasizes the need for ongoing research and collaboration to address the complex challenges in AR security and privacy.
Key Points:
- AR technologies offer advanced features but pose new security and privacy risks.
- Eye and hand tracking in AR devices have varying privacy practices across platforms.
- Users often misunderstand privacy protections, highlighting the need for clearer communication.
- Vulnerabilities like clickjacking and synthetic input attacks exist in AR interfaces.
- Ongoing research and collaboration are crucial to address AR security and privacy challenges.
Details:
1. 🎓 Introducing Kaiming Cheng and His AR Work
- Kaiming Cheng is a PhD student at the University of Washington, focusing on the intersection of augmented reality (AR) and safety.
- His research specifically addresses security, privacy, and safety concerns in AR technologies, highlighting potential risks such as data breaches and user privacy violations.
- He is working on developing comprehensive strategies to mitigate these risks, aiming to enhance the overall safety and reliability of AR systems.
- Kaiming Cheng's work represents a significant academic commitment to ensuring safe and secure use of emerging AR technologies, potentially influencing future industry standards.
2. 🌐 Joining Meta and Future Work
- An expert in augmented reality safety and security is joining the Meta PyTorch Edge team.
- The new role will focus on applying expertise to Meta's Ray-Ban AI glasses and smart wearable devices.
- The expert will enhance security protocols and integrate advanced AI features into Meta's wearable technology.
- This role is strategically significant for Meta's expansion in the smart wearables market, aiming to improve device functionality and user safety.
- The transition represents a commitment to bolstering Meta's technological capabilities and market position through innovative applications.
3. 🔍 Research Focus on AR Security and Privacy
- Kaiming Cheng is a final-year PhD candidate at the University of Washington, advised by Franziska Roesner and Tadayoshi Kohno.
- His research is focused on the security and privacy of augmented reality (AR).
- The presentation is taking place at Microsoft, indicating potential industry collaboration or interest.
- Specific research projects include developing secure AR frameworks to protect user data and privacy.
- Investigating the impact of AR on user perception and potential security vulnerabilities.
- Exploring collaborations with tech companies to implement research findings in real-world applications.
4. 📈 Evolution and Benefits of AR Technology
4.1. Historical Context and Recent Advancements in AR Technology
4.2. Impact of AR Technology on Industries
5. 🚨 Security and Privacy Risks in AR
5.1. Introduction to AR Security Risks
5.2. Research and Cognitive Attacks
6. 🔑 Understanding AR Threats and Permissions
- Two college students successfully demonstrated how off-the-shelf AR glasses could be combined with facial recognition technology to identify individuals in public, highlighting significant privacy issues.
- The demonstration underscores the urgent need for robust security measures and frameworks to address both present and emerging threats in AR technology.
- Current industry mitigations include data encryption and restricted access policies, but these may not be sufficient to counteract the full spectrum of AR-related risks.
- There is a pressing need for the industry to anticipate future threats and develop comprehensive strategies that ensure user privacy and data protection as AR adoption grows.
7. 👁️🗨️ Eye and Hand Tracking in AR
7.1. Privacy and Security Implications of Eye and Hand Tracking
7.2. User Interaction and Vulnerabilities in AR
8. 🛡️ User Security and Privacy in AR
- Today's AR headsets are equipped with sophisticated sensors for hand tracking and eye tracking, introducing both exciting opportunities and new privacy threats.
- By combining eye tracking and hand tracking, users can navigate AR environments using eye movements and hand gestures, enhancing immersive experiences.
- Eye tracking data can improve avatar realism in virtual settings and optimize system functions like rendering and power consumption.
- Research highlights privacy concerns associated with eye tracking and hand tracking data, despite their potential benefits.
9. 📋 Methodology and Survey Findings
- AR devices capture data that can reveal sensitive user attributes, such as identity and interest levels.
- It is crucial to understand the permission design space with new sensing technologies as millions of users start using AR devices for the first time.
- Research questions focus on the current technical landscape of eye and hand-tracking permissions in AR platforms, user feelings about permission flows, comprehension of permission details, capabilities, privacy risks, and factors affecting AR technology adoption.
- The study finds that everything is processed on the device without using the cloud, with different platforms having varying processing methods.
- Methodology includes structured brainstorming to identify relevant properties of eye and hand-tracking, followed by evaluation of each property.
- The survey reveals that understanding and managing permissions is critical to user trust and adoption of AR technologies.
- Participants expressed concerns about privacy risks, emphasizing the need for transparent and user-friendly permission flows.
- Detailed insights into user comprehension and the impact of permission settings on privacy perceptions were gathered.
- The evaluation of eye and hand-tracking technologies highlights the need for standardized permission settings across platforms.
10. 🔍 Analyzing AR Platforms' Design Choices
10.1. Eyetracking Permissions and Privacy
10.2. Handtracking Permissions and Data Management
10.3. User Perception and Experience
11. 🤔 Privacy Concerns with AR Data Usage
- HoloLens and Vision Pro allow app access to hand tracking data without explicit user permission due to built-in hand tracking capabilities, unlike Oculus which requires additional permissions. This raises privacy concerns as users may not be aware of data collection.
- An API provides abstracted hand tracking data, representing hand joint movements in different axes, essential for app interaction. However, the abstraction may obscure the amount and type of data being collected.
- Apps can access hand tracking data in the background if user settings allow it, enabling functionalities like gesture recognition (e.g., pinch, point) and metrics such as hand direction, length, and velocity. This background access could lead to unauthorized data usage if not properly managed.
12. 📱 Survey on User Perception and Comprehension
12.1. Potential Privacy Risks
12.2. Survey Design and Findings
13. 📝 Recommendations for AR Platforms
13.1. User Preferences for AR Platforms
13.2. Data Privacy and User Comprehension
14. 🔒 Apple's Privacy Measures in AR
- Apple ensures that eye tracking data is not accessible to applications, developers, or even Apple itself, enhancing privacy by keeping data processing local to the device.
- Recommends implementing opt-in and opt-out features for user data sharing preferences to increase user control over personal data.
- Encourages Apple to clearly explain their privacy-preserving mechanisms to improve user understanding and comfort with the technology.
- Oculus and HoloLens protect privacy by abstracting eye-tracking data, but studies indicate even abstracted data can pose privacy risks, suggesting a need for increased privacy guarantees.
15. 🎭 Balancing Utility and Privacy in AR
- AR technology, while useful for applications like health assessments via eye-tracking, poses significant privacy risks by potentially exposing sensitive user data.
- There is a critical concern regarding the privacy of data collected through AR glasses, as even anonymized data can infer personal interests.
- To mitigate these risks, stronger privacy protections for eye-tracking and other sensitive data are recommended.
- Research suggests that even without direct data access, side-channel attacks can deduce user attention, underscoring persistent privacy challenges.
- Apple's strict privacy protocols can restrict developers' creative freedom, impacting the diversity of AR applications.
- The balance between privacy and utility is complex, requiring innovative permission models and threat analysis to address.
- Different AR platforms implement varied privacy measures, and ongoing studies are examining user responses to these approaches.
- Examples of privacy breaches could include unauthorized data access through AR applications, highlighting the need for robust security measures.
- The implications of privacy measures on user experience could include reduced functionality or personalization in AR apps, necessitating a balance between user security and application utility.
16. 📚 Summary of AR Security Research
16.1. Apple Vision Pro Security Insights
16.2. Microsoft HoloLens Security Insights
16.3. MetaQuest Pro Security Insights
16.4. General Findings and Recommendations
17. 🖼️ Study on UI Security in AR
- The study, a collaboration with the University of Washington, accepted at USENIX Security 2024, investigates UI security properties in AR, aiming to set a foundation for systematic evaluation of AR platforms and SDKs, emphasizing UI-level security.
- AR platforms are rapidly growing, each with its own SDK for third-party developers to create immersive experiences, presenting unique security challenges.
- The study identifies key UI-related security and privacy issues, such as attackers inferring information about a user's surroundings and the ability to obscure real-world content.
- User interaction with AR involves perceiving the physical world, engaging with virtual content, and the potential security risks associated with these interactions.
- Recommendations include enhancing SDK security features and developing guidelines for developers to mitigate identified risks.
18. 🕵️♂️ Exploring UI Level Attacks in AR
- The AR threat model involves multiple principals, including third-party embedded code that attempts to interfere with the AR content or interactions of others.
- Five UI level attacks were explored, with clickjacking used as a motivating example, illustrating how users can be misled into interacting with deceptive elements.
- In AR clickjacking, a malicious third-party app overlays a deceptive UI element (e.g., a blue box) over an ad (e.g., a red box) to capture user interactions intended for the ad.
- A proof of concept was tested on Apple's ARKit, demonstrating how user input on a benign-looking object (blue box) is redirected to a hidden malicious object (red box).
- This attack leverages the 'same space' property, where overlapping virtual objects compete for rendering priority and user input detection.
19. 🔍 Evaluating UI Security Properties
- A systematic evaluation was conducted on various AR platforms using metrics such as rendering order, interaction order, and consistency of rendering and interaction.
- The evaluation involved implementing test cases with native APIs on AR devices, using a state machine to coordinate event-driven test steps, and structuring the code for future extensions.
- Over 100 trials were conducted per property on each AR platform, with every trial run five times and rerun at different spatial locations to account for nondeterminism.
- The experiment revealed inconsistencies on the Oculus platform, with results varying based on user spatial location, indicating potential new attack vectors in 3D environments.
- Two key metrics related to clickjacking attacks were identified: interaction consistency and rendering consistency. The attack was found possible on Google AR platforms.
20. 🛡️ Defense Strategies for AR UI Security
- All AR platforms, including Apple's ARKit and Microsoft's HoloLens, are vulnerable to invisible virtual object attacks under different conditions.
- Invisible objects can hijack user input, akin to denial of service attacks, by placing them between users and their intended targets.
- Techniques for creating invisible objects include altering alpha values, disabling rendering, and using customized materials.
- Different platforms implement invisibility features differently, which can be exploited for denial of service attacks.
- By wrapping target objects with transparent layers, user interactions can be blocked, preventing selection or input.
- Invisible objects can also be used for creating fake ads, similar to those demonstrated in proof of concept attacks.
21. 🔍 Open Challenges in AR Security
21.1. Input Provenance and Synthetic Input Vulnerabilities
21.2. Defensive Strategies and Design Considerations
21.3. Framework Application and Future Research
22. 🤝 Community Efforts in AR Security
- Designing contextual indicators for bystanders to opt out of being recorded by AR glasses is a key challenge in AR.
- Safety-aware AR content placement is essential while users interact with the real world.
- There is a focus on deploying privacy-aware AR experiences within power-constrained settings.
- Collective community efforts are necessary to tackle these challenges effectively.
- A workshop at the ISMAR conference in Bellevue highlighted AR design with a focus on security, privacy, and safety.
- The workshop had over 30 participants from 16 different institutions, demonstrating broad interest and commitment.
- Key outcomes included discussions on best practices for privacy indicators and innovative approaches to safety-aware content placement.
23. 🎤 Q&A and Closing Remarks
- Current bystander opt-out methods often involve LED light indicators on devices like Alexa or Google Home, but there's a lack of universal understanding of these indicators. Different colors and states can lead to confusion about whether recording is happening.
- A proposed solution involves giving bystanders control, such as using a hand gesture to disable recording when an AR device is aimed at them.
- Exploration of location-based privacy policies could automatically opt out bystanders in certain areas, although this is still under research.
- Physical measures, like wearing adversarial clothing designed to obscure facial recognition, are similar to actions bystanders might take to protect privacy.
- Research published at CHI indicates that constant recording by wearable technology remains socially unacceptable, leading to discomfort, particularly when minors are present.
- Post-processing techniques, such as automatically blurring faces in recordings, are being explored to enhance privacy.
- There is a vast design space for developing effective bystander opt-out strategies.
Computerphile - What is CUDA? - Computerphile
Nvidia's CUDA technology originated from the idea of using GPUs, initially designed for rendering graphics, for general-purpose computing. Ian Buck's PhD work led to the development of CUDA, which allows for heterogeneous computing by efficiently distributing tasks between CPUs and GPUs. This approach is particularly beneficial for tasks requiring parallel processing, such as image processing and AI computations. Over the years, CUDA has evolved from a simple language and compiler to a comprehensive suite of tools and libraries, supporting a wide range of applications from AI to supercomputing. The technology ensures backward compatibility, allowing older CUDA versions to run on new hardware, which is a testament to Nvidia's commitment to maintaining a stable and reliable platform. Additionally, CUDA supports confidential computing, providing secure, encrypted channels between CPUs and GPUs to protect sensitive data during processing.
Key Points:
- CUDA enables efficient parallel processing by leveraging GPUs for tasks like AI and image processing.
- Originally designed for graphics, GPUs now support a wide range of computing tasks due to CUDA's evolution.
- CUDA maintains backward compatibility, ensuring older versions run on new hardware.
- Confidential computing in CUDA provides secure, encrypted data channels between CPUs and GPUs.
- Nvidia's commitment to CUDA's development ensures a stable platform for diverse applications.
Details:
1. 🌟 The Birth of CUDA: From Graphics to Computing
- Nvidia initially focused on developing GPUs for rendering pixels, but saw an opportunity to extend their utility to computing applications.
- Ian Buck's proposal to utilize GPUs for fluid mechanics sparked the idea, leading to the development of CUDA.
- The introduction of CUDA marked a significant shift, transforming graphics processing units (GPUs) into tools for general-purpose computing, effectively handling parallel processing tasks.
- Early iterations of CUDA faced limitations, requiring extensive development to become fully programmable, which Nvidia overcame through innovation.
- CUDA facilitates heterogeneous computing by efficiently distributing parallel tasks to GPUs while assigning serial tasks to CPUs, optimizing performance across different computing environments.
- This transition enabled Nvidia to not only enhance their hardware capabilities but also position themselves as leaders in the field of parallel computing, impacting industries ranging from scientific research to artificial intelligence.
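The heterogeneous split described above can be sketched with a minimal CUDA program (a hypothetical vector-add example, not from the video): serial setup stays on the CPU, while the data-parallel loop becomes a GPU kernel launched over many threads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread handles one element of the parallel task.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Serial work (allocation, initialization) runs on the CPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Parallel work is dispatched to the GPU as a grid of thread blocks.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch configuration is what makes the parallelism explicit: the same kernel body runs once per element, which is exactly the kind of workload the CPU would otherwise process in a serial loop.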
2. 🔄 Evolution of GPU Architecture
- Historically, GPUs were primarily fixed-function, with about 90% of the hardware dedicated to texture mappers and pixel shaders, and only 10% was programmable. This setup limited flexibility and adaptability in graphics rendering.
- The modern GPU architecture has reversed this structure, with 90% of the hardware now programmable and only 10% fixed-function, allowing for more advanced and flexible graphical processing capabilities.
- This shift has enabled the integration of complex procedural textures and advanced graphical features, significantly enhancing the visual fidelity of digital content.
- The methodologies used in GPU development are increasingly aligned with those in fluid mechanics and AI, suggesting a trend towards unified problem-solving strategies in computational fields.
3. 🔍 AI and Supercomputing: Shared Foundations
- AI and supercomputing share fundamental numerical algorithms, such as linear algebra and Fourier transforms, which are crucial for computational tasks in both fields.
- Supercomputing applications include weather simulation and quantum mechanics, utilizing diverse numerical algorithms for complex calculations.
- AI places a greater emphasis on performance tuning and optimization compared to supercomputing, due to the large scale and uniform nature of AI models that allow for targeted efficiency improvements.
- The varied tasks in supercomputing make it difficult to achieve peak performance across all applications, unlike the more uniform tasks in AI which can be optimized for maximum efficiency.
- For example, both AI and supercomputing use matrix multiplication extensively, but AI optimizes this process at scale to improve model training times.
- Supercomputing involves a wider range of application-specific algorithms, requiring a balance between general performance and specialized task efficiency.
- AI's optimization strategies often focus on reducing computational time and resource usage, evidenced by advancements in hardware accelerators such as GPUs and TPUs.
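The shared numerical kernel both fields lean on, dense matrix multiplication, has a simple definition: for an $m \times k$ matrix $A$ and a $k \times n$ matrix $B$,

```latex
C_{ij} = \sum_{l=1}^{k} A_{il}\, B_{lj}, \qquad 1 \le i \le m,\; 1 \le j \le n.
```

Each entry $C_{ij}$ is computed independently of the others, which is why the operation maps so naturally onto thousands of GPU threads, and why both AI training and supercomputing codes invest so heavily in optimizing it.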
4. 💻 CUDA's Role in Modern Computing
- CUDA's underlying software stack is written in C, evolving from a simple language and compiler to a comprehensive suite managing GPU interactions.
- CUDA encompasses image processing, AI libraries, and compilers, facilitating diverse applications and interactions with GPUs.
- NVIDIA aims to simplify GPU programming by developing extensive code bases, allowing users to efficiently leverage CUDA with minimal coding effort.
- CUDA serves as an abstraction layer, enabling integration with languages like Python for GPU tasks.
- CUDA's architecture supports parallel computing, enhancing performance in tasks like deep learning and scientific simulations.
- Specific applications include accelerated image processing and AI model training, showcasing its versatility in modern tech solutions.
- The architecture allows for significant performance improvements, with metrics showing up to 10x faster processing in certain tasks.
5. 🔧 How CUDA Integrates with Hardware
- CUDA integrates the CPU and GPU, allowing programmers to treat them as a single unit for executing tasks.
- Developers can assign specific tasks to either the CPU or GPU, such as loading configuration files with the CPU and performing image processing on the GPU.
- CUDA does not automatically determine task allocation; developers must specify which hardware executes each instruction.
- This integration allows for efficient parallel processing by leveraging the strengths of both CPU and GPU within a single program environment.
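The explicit task assignment described above can be illustrated with a hypothetical sketch (the kernel and data here are illustrative, not from the video): the programmer keeps setup work on the host and explicitly moves the image-processing step to the GPU with memory copies and a kernel launch; CUDA never decides the split automatically.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU task: scale every pixel, a stand-in for "image processing on the GPU".
__global__ void scalePixels(float *img, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] *= gain;
}

int main() {
    const int n = 4096;

    // CPU task: the programmer explicitly keeps setup on the host
    // (in the video's example, loading a configuration file).
    static float host_img[4096];
    for (int i = 0; i < n; ++i) host_img[i] = 0.5f;

    // The programmer explicitly allocates device memory, copies the data
    // over the PCI bus, and launches the kernel on the GPU.
    float *dev_img;
    cudaMalloc(&dev_img, n * sizeof(float));
    cudaMemcpy(dev_img, host_img, n * sizeof(float), cudaMemcpyHostToDevice);
    scalePixels<<<(n + 255) / 256, 256>>>(dev_img, n, 2.0f);
    cudaMemcpy(host_img, dev_img, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_img);

    printf("%f\n", host_img[0]);
    return 0;
}
```

Every placement decision in this program (host arrays, `cudaMemcpy` direction, the kernel launch) is written out by the developer, which is the point: the CPU and GPU act as a single unit, but only because the code says which side does what.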
6. 🚀 CUDA's Evolution and Backward Compatibility
- The CUDA ecosystem includes approximately 900 libraries and AI models, providing a comprehensive suite for AI, supercomputing, scientific computing, graphics, and data analysis.
- CUDA maintains backward compatibility, ensuring that programs written for CUDA 1.0 still run on the latest versions, including the upcoming CUDA 13, reflecting a 19-to-20-year commitment to compatibility.
- The backward compatibility stems from a strategic decision by Nvidia's CEO, Jensen Huang, to ensure CUDA's presence in every chip while accommodating both hardware and software changes.
- Despite hardware evolutions, the consistent API structure allows legacy CUDA applications to operate on new GPU architectures, ensuring seamless transitions for developers.
7. 🔒 Security and Confidential Computing
- Security is treated as a high-priority effort, pursued with rigor comparable to military standards: do it right or not at all.
- Confidential Computing establishes a fully secured encrypted channel between GPUs and CPUs, enhancing data protection over PCI buses.
- This technology supports fully encrypted zero trust networks, crucial for protecting AI model weights from theft, given the substantial financial investments in model training.
- Both CPUs and GPUs are advancing in hardware encryption capabilities, reflecting the industry's dedication to robust security.
- The CUDA ecosystem serves as a unified interface, integrating a wide range of software frameworks and applications, enabling seamless hardware interaction.
- CUDA functions as a runtime or interpreter, converting high-level commands to hardware-specific instructions, ensuring hardware compatibility.
- Originally designed for graphics, the hardware now supports AI matrix operations and complex computational tasks, showcasing the adaptability of existing pipelines.