Project Details
Beyond Eye-Gaze: Natural Speech Interfaces for Assistive Communication
Company: Gabe Brown Family
Major(s):
Primary: IE
Secondary: BME
Optional: CMPEN, CMPSC
Non-Disclosure Agreement: NO
Intellectual Property: NO
This project gives you the chance to help restore more natural speech for Gabe, a Penn State College of Engineering alumnus who now relies on an eye-gaze device after a traumatic brain injury. Instead of typing with his eyes, your team will design and prototype new ways for him to speak, using his remaining jaw and lip movements connected to a personalized AI voice.

You will build a jaw-sensing system using small inertial sensors placed along the jawline, develop a camera-based lip-reading pipeline using computer vision, and explore sensor-fusion strategies that combine both signals to improve accuracy and reduce latency. On the software side, you will train and tune machine learning models for real-time inference and integrate the output with an AI voice service so the synthesized speech sounds like Gabe rather than a generic device.

Expected deliverables include at least one working real-time prototype running on a laptop or embedded platform, clear system diagrams and code documentation, and basic testing data on speed, accuracy, and usability. Along the way, you will gain experience in embedded sensing, signal processing, computer vision, applied machine learning, and human-centered design while building something that can directly improve how a fellow engineer communicates with his family and caregivers.
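One common sensor-fusion strategy the team might explore is late fusion: each model (jaw/IMU and lip reading) outputs per-class probabilities over candidate words or phonemes, and a weighted average combines them before picking the best class. The sketch below is a minimal illustration only; the function name, the weight value, and the three-class toy example are hypothetical, not part of the project specification.

```python
import numpy as np

def fuse_predictions(imu_probs, lip_probs, imu_weight=0.4):
    """Late fusion: weighted average of per-class probabilities from a
    jaw-motion (IMU) model and a lip-reading model.

    imu_weight is a hypothetical tuning parameter; in practice it could be
    chosen on validation data or adapted per frame from model confidence.
    """
    imu_probs = np.asarray(imu_probs, dtype=float)
    lip_probs = np.asarray(lip_probs, dtype=float)
    fused = imu_weight * imu_probs + (1.0 - imu_weight) * lip_probs
    return fused / fused.sum()  # renormalize so probabilities sum to 1

# Toy example with three candidate outputs:
imu = [0.6, 0.3, 0.1]   # jaw model leans toward class 0
lip = [0.2, 0.7, 0.1]   # lip-reading model leans toward class 1
fused = fuse_predictions(imu, lip, imu_weight=0.4)
best = int(np.argmax(fused))  # the more confident vision signal wins here
```

A per-frame weighted average like this keeps latency low because it needs no extra model; more elaborate alternatives (learned fusion layers, Kalman-style filtering of the IMU stream) trade complexity for accuracy.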

