Virtual Worlds Made Accessible Beyond Sound
Solving the design challenges of real-time Captioning & Sign Language in digital 360° environments.
Take a glimpse into the future of communicating beyond sound in virtual worlds. Open Signal New Media Fellow Myles de Bastion incorporates Sign Language and AI-generated captioning as non-verbal means to craft social VR experiences. Myles presented his work through a virtual presentation and plans to release the tools he has developed for others to build upon.
As our communities under lockdown turn toward technology to remotely work and socialize, we are relying more on video conferencing and virtual meetings to interact with one another in real time. During this transition, accessibility is often an afterthought, especially for those who do not hear, since the primary mode of communication is voice- and audio-based. Traditional applications such as Zoom and Google Meet are making headway in ensuring meetings are accessible to those who do not hear by providing live captioning delivered via traditional 2D computer screens, but XR applications (an umbrella term encompassing augmented, virtual, and mixed reality technologies) frequently fail to provide similar accessibility solutions.
This project aims to solve some of the expected and unexpected design challenges of presenting live captioning in 360-degree virtual reality environments. For example, such solutions must be spatially aware of where the viewer is looking and where sound sources are originating from. Designed by Myles de Bastion, a US-based Deaf visual sound artist with a UX design background, and Genia Penksik, a UK-based software developer and engineer, the design and technical solutions will be shared publicly in hopes that more developers incorporate accessibility into their own applications.
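To illustrate the spatial-awareness challenge described above, here is a minimal sketch (not the project's actual implementation; the function name, parameters, and thresholds are all illustrative assumptions) of one common approach: anchor a caption at the sound source when it falls within the viewer's field of view, and otherwise show an edge indicator pointing toward the off-screen speaker.

```python
import math

def caption_placement(gaze_dir, source_pos, viewer_pos, fov_deg=90.0):
    """Decide how to place a live caption for a sound source in a 360° scene.

    gaze_dir: unit vector of the viewer's gaze direction (x, y, z).
    source_pos, viewer_pos: world-space positions (x, y, z).
    fov_deg: assumed horizontal field of view of the headset.

    Returns a tuple: a placement mode and the unit direction toward
    the sound source.
    """
    # Vector from the viewer to the sound source, normalized.
    to_src = tuple(s - v for s, v in zip(source_pos, viewer_pos))
    length = math.sqrt(sum(c * c for c in to_src)) or 1.0
    to_src = tuple(c / length for c in to_src)

    # Angle between the gaze direction and the source direction.
    dot = max(-1.0, min(1.0, sum(g * t for g, t in zip(gaze_dir, to_src))))
    angle = math.degrees(math.acos(dot))

    if angle <= fov_deg / 2:
        # Speaker is visible: anchor the caption near the source.
        return ("at_source", to_src)
    # Speaker is off-screen: point an indicator toward them instead.
    return ("edge_indicator", to_src)
```

A caller would re-run this each frame as the viewer's gaze changes, so captions follow visible speakers and hint at speakers behind the viewer.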