Securing Augmented Reality Output
Kiron Lebeck (University of Washington)
Presented at the
2017 IEEE Symposium on Security & Privacy
May 22–24, 2017
San Jose, CA
Augmented reality (AR) technologies, such as Microsoft’s HoloLens head-mounted display and AR-enabled automotive windshields, are rapidly emerging. AR applications provide users with immersive virtual experiences by capturing input from a user’s environment and overlaying virtual output on the user’s perception of the real world. These applications let users interact with and perceive virtual content in fundamentally new ways. However, the immersive nature of AR applications raises serious security and privacy concerns. Prior work has focused primarily on input privacy risks stemming from applications with unrestricted access to sensor data. However, the risks associated with malicious or buggy AR output remain largely unexplored. For example, an AR windshield application could intentionally or accidentally obscure oncoming vehicles or safety-critical output of other AR applications. In this work, we address the fundamental challenge of securing AR output in the face of malicious or buggy applications. We design, prototype, and evaluate Arya, an AR platform that controls application output according to policies specified in a constrained yet expressive policy framework. In doing so, we identify and overcome numerous challenges in securing AR output.
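To make the output-policy idea concrete, the sketch below shows one way a platform could vet application output before compositing a frame. This is a hypothetical illustration, not Arya's actual design or API: the `ArObject` model, the bounding-box overlap test, and the specific dimming policy are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ArObject:
    """A virtual object an application wants to draw (hypothetical model)."""
    app: str                          # owning application
    x: int; y: int; w: int; h: int    # screen-space bounding box
    alpha: float = 1.0                # opacity; 1.0 = fully opaque

def overlaps(a: ArObject, b: ArObject) -> bool:
    """Axis-aligned bounding-box intersection test."""
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

# A policy pairs a condition on an output object with a corrective action.
Policy = Callable[[ArObject, List[ArObject]], None]

def dim_if_occluding(obj: ArObject, scene: List[ArObject]) -> None:
    """Example policy: if an object overlaps another application's
    output, force it translucent so it cannot fully obscure that output."""
    if any(other.app != obj.app and overlaps(obj, other) for other in scene):
        obj.alpha = min(obj.alpha, 0.3)

def enforce(policies: List[Policy], scene: List[ArObject]) -> List[ArObject]:
    """Apply every policy to every object before the frame is composited."""
    for obj in scene:
        for policy in policies:
            policy(obj, scene)
    return scene
```

The key design point this illustrates is that enforcement sits in the platform's display pipeline, between untrusted applications and the screen, so a buggy or malicious application cannot bypass the policies by drawing directly.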