Assistive Technology Enters the Classroom With Holographic Sign Language Interpreters

A team from Purdue University combines innovation and technology to create a new learning system for deaf and hard-of-hearing (DHH) students. In “Holographic Sign Language Interpreters,” signing avatars, viewed through wearable mixed reality smart glasses (Microsoft HoloLens), translate speech into Signed Exact English (SEE) in real time. SIGGRAPH caught up with the team behind the assistive technology project to learn more about its development and how it contributes to the conversation around accessibility in the classroom.

SIGGRAPH: Share some background about your SIGGRAPH 2022 Educator’s Forum Talk “Holographic Sign Language Interpreters.” What inspired it?

Fu-Chia Yang (FY): Our research lab has been working on digital sign language avatars for years. In the past, we explored different wearable XR devices by building applications and incorporating sign language pedagogical agents for DHH students. With the introduction of the HoloLens 2 and its enhanced developer APIs, we decided to bring “Holographic Sign Language Interpreters” into our research scope.

Christos Mousas (CM): We built a mixed reality (MR) application for DHH students. Our system detects audio input and displays the corresponding sign animation in near real time. The overall goal of our research is to improve DHH college students’ access to educational materials and to support DHH children’s learning of math concepts through holographic wearable MR.

SIGGRAPH: Tell us about the development of “Holographic Sign Language Interpreters.” Did you face any challenges in its creation?

FY: The most challenging part was developing the speech-to-sign integration for the prototype with a limited sign language dataset and testing compatible APIs for the HoloLens device. Our team plans to enhance this part of the application by implementing machine learning and natural language processing techniques, which we see as the most promising approach for speech-to-sign integration.
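For illustration, a minimal sketch of this kind of dictionary-based matching with a limited dataset might look like the following. The clip paths, vocabulary, and fingerspelling fallback here are hypothetical stand-ins, not the team’s actual implementation; SEE’s word-for-word correspondence with English word order is what makes such a naive lookup workable as a baseline.

```python
# Hypothetical sketch of dictionary-based speech-to-sign matching with a
# limited dataset; clip paths and the fingerspelling fallback are
# illustrative, not the team's actual implementation.

SIGN_CLIPS = {
    "add": "clips/add.anim",
    "two": "clips/two.anim",
    "three": "clips/three.anim",
    "equals": "clips/equals.anim",
}

def text_to_sign_sequence(sentence: str) -> list:
    """Map recognized English text to an ordered list of animation clips.

    Because SEE mirrors English word order, a word-by-word lookup is a
    workable baseline; out-of-vocabulary words fall back to fingerspelling.
    """
    clips = []
    for word in sentence.lower().split():
        word = word.strip(".,?!")
        if word in SIGN_CLIPS:
            clips.append(SIGN_CLIPS[word])
        else:
            # Fingerspell unknown words letter by letter.
            clips.extend(f"clips/letters/{ch}.anim" for ch in word if ch.isalpha())
    return clips

print(text_to_sign_sequence("Two add three equals five."))
```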

CM: The implementation pipeline of our project consists of two main steps. First, we collected four K–1 math lessons through a motion capture session. A professional deaf signer was recruited to sign the K–1 math lectures in Signed Exact English (SEE), and we used motion capture technology to record the SEE animation for the holographic avatar. Second, we designed the MR application in the Unity game engine, utilizing the Azure Speech-to-Text SDK. The system takes input speech (audio) from the instructor and converts it to English text. The converted sentences are then analyzed to identify the corresponding signs, prosodic markers, and prosodic modifiers. The system then triggers the matching sign animation segments in the dataset and renders a lifelike holographic sign language interpreter who signs in SEE.
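For readers curious about the speech-recognition step, a minimal sketch of continuous recognition with the Azure Speech SDK might look like the following. It is shown in Python for brevity, whereas the team’s Unity application uses the C# SDK, and the credentials and trigger_signs hook are placeholders for illustration.

```python
# Sketch of the speech-to-text stage using the Azure Speech SDK (Python
# flavor for brevity; the team's Unity application uses the C# SDK).
# The subscription key, region, and trigger_signs hook are placeholders.
import azure.cognitiveservices.speech as speechsdk

def trigger_signs(text: str) -> None:
    # Placeholder: in the real system this step selects the matching SEE
    # animation segments and plays them on the holographic avatar.
    print("Would sign:", text)

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

def on_recognized(evt) -> None:
    """Fires each time Azure finalizes a sentence of the instructor's speech."""
    if evt.result.text:
        trigger_signs(evt.result.text)

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()  # streams from the default microphone
input("Listening... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```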

SIGGRAPH: How did you develop the avatars?

FY: The avatar was originally created by Salas (2020). We purchased the advanced skeleton rig and re-rigged it to fit our motion capture pipeline. The avatar’s animation was then captured by recording a professional signer with simultaneous full-body, face, and hand tracking.

SIGGRAPH: How does this system contribute to greater accessibility and adaptability in the classroom?

FY: Digital sign language avatars can benefit DHH students in several ways. Human interpreters can be costly and are not always available, especially in higher education, whereas digital sign language interpreters can provide greater accessibility to end users. Moreover, course materials can be retrieved and reviewed after class, and digital agents can be tailored to users’ preferences, offering greater adaptability to the DHH community. Similar applications can be used outside the classroom as well, including for daily communication and entertainment.

SIGGRAPH: How do you hope this system will be used moving forward? What do you want users to take away from it?

FY: The purpose of creating this system is not to replace human interpreters but to provide more options and alternatives for DHH individuals. Multiple aspects of the system still need refinement and adjustment to better fit end users’ requirements and expectations. We hope to keep improving the system and to test it in real classroom scenarios.

CM: The holographic avatars can be used by deaf learners in the classroom, at home, and while interacting with digital educational materials. There are also many other settings where holographic avatar interpreters could benefit DHH people, such as stores and other public places. We want users to see that technological advances can ease and improve our daily lives.

SIGGRAPH: What can SIGGRAPH 2022 participants expect when they learn more about “Holographic Sign Language Interpreters” during the conference?

FY: When it comes to assistive technology, human factors and input are essential. We built this prototype using several cutting-edge technologies, and the main goal was simply to bring the best possible in-class experience to DHH students.

CM: We are planning to present the development pipeline of our application. We hope participants will come away understanding the current technological limitations and potential future directions. After all, synthesizing holographic sign language interpreters remains a challenge, and researchers from various fields should work together to achieve better results.

SIGGRAPH: SIGGRAPH is excited to host its first-ever hybrid conference. What are you most looking forward to about the experience?

FY: I look forward to meeting contributors online and streaming Talks from people attending the conference in person.

CM: We have experienced hybrid conferences before (e.g., ACM CHI 2022), and we know there are a lot of challenges in organizing such an event. However, we are really looking forward to the social events.

by SIGGRAPH Conferences
