May 9, 2025
AI headphones translate multiple speakers at once, cloning their voices in 3D sound
Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn’t speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum’s relative quiet, the surrounding noise was too much. The resulting text was useless.
Various technologies have emerged lately promising fluent translation, but none of these solved Chen’s problem in public spaces. Meta’s new glasses, for instance, function only with an isolated speaker; they play an automated voice translation after the speaker finishes.
Now, Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people’s voices. The team built the system, called Spatial Speech Translation, with off-the-shelf noise-cancelling headphones fitted with microphones. The team’s algorithms separate out the different speakers in a space and follow them as they move, translate their speech and play it back with a 2-4 second delay.
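In rough outline, that pipeline might be organized like the Python sketch below. This is an illustration only, not the team’s released code; every name here (SpeakerStream, separate_speakers, translate, render_binaural) is a hypothetical placeholder standing in for the real components.

```python
from dataclasses import dataclass

@dataclass
class SpeakerStream:
    speaker_id: int        # stable ID assigned when a speaker is first detected
    direction_deg: float   # bearing of the speaker relative to the wearer
    audio: bytes           # that speaker's isolated audio, after separation

def separate_speakers(mic_frame: bytes) -> list[SpeakerStream]:
    """Placeholder for source separation: split one binaural mic frame
    into an isolated stream per active speaker."""
    return []

def translate(audio: bytes, src_lang: str, dst_lang: str) -> bytes:
    """Placeholder for the on-device speech-to-speech translation step,
    which also preserves the speaker's vocal qualities."""
    return audio

def render_binaural(audio: bytes, direction_deg: float) -> None:
    """Placeholder for playback that keeps the translated voice coming
    from the original speaker's direction."""

def run(mic_frames: list[bytes], src_lang: str = "es", dst_lang: str = "en") -> None:
    # Each frame is separated, translated per speaker, and rendered back
    # spatially; a real system buffers a few seconds of audio per speaker,
    # which is where the 2-4 second delay comes from.
    for frame in mic_frames:
        for stream in separate_speakers(frame):
            render_binaural(translate(stream.audio, src_lang, dst_lang),
                            stream.direction_deg)
```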
The team presented its research Apr. 30 at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan. The code for the proof-of-concept device is available for others to build on. “Other translation tech is built on the assumption that only one person is speaking,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in the real world, you can’t have just one robotic voice talking for multiple people in a room. For the first time, we’ve preserved the sound of each person’s voice and the direction it’s coming from.”
The system introduces three innovations. First, when turned on, it immediately detects how many speakers are in an indoor or outdoor space.
“Our algorithms work a little like radar,” said lead author Chen, a UW doctoral student in the Allen School. “So it’s scanning the space in 360 degrees and constantly determining and updating whether there’s one person or six or seven.”
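To make the radar analogy concrete, here is a toy sketch: sweep over 360 bearings, measure speech energy at each, and count well-separated peaks as distinct speakers. The energy profile is synthetic and the peak-picking rule is an assumption for illustration, not the paper’s algorithm; a real system would compute the profile by beamforming the headphone microphone signals.

```python
import numpy as np

def count_speakers(energy: np.ndarray, threshold: float, min_sep: int = 15) -> int:
    """energy[d] is the speech energy measured at bearing d (0-359 degrees).
    Count local maxima above threshold, merging peaks closer than min_sep."""
    n = len(energy)
    peaks: list[int] = []
    for d in range(n):
        left, right = energy[(d - 1) % n], energy[(d + 1) % n]
        if energy[d] > threshold and energy[d] >= left and energy[d] >= right:
            # keep this peak only if it is far (circularly) from all kept peaks
            if all(min(abs(d - p), n - abs(d - p)) >= min_sep for p in peaks):
                peaks.append(d)
    return len(peaks)

# Synthetic 360-degree scan: two talkers, at 40 and 200 degrees, over a noise floor.
angles = np.arange(360)
scan = (np.exp(-0.5 * ((angles - 40) / 8) ** 2)
        + np.exp(-0.5 * ((angles - 200) / 8) ** 2)
        + 0.05 * np.random.default_rng(0).random(360))
print(count_speakers(scan, threshold=0.5))  # -> 2
```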
The system then translates the speech and maintains the expressive qualities and volume of each speaker’s voice while running on a mobile device with an Apple M2 chip, such as a laptop or the Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change.
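One small piece of that last step, keeping each translated voice attached to its speaker as people move, can be sketched as a simple tracker that matches each new direction estimate to the nearest known speaker within a gate. The matching rule and gate value here are assumptions for illustration, not the paper’s method.

```python
def circular_diff(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

class SpeakerTracker:
    def __init__(self, gate_deg: float = 30.0):
        self.gate_deg = gate_deg
        self.bearings: dict[int, float] = {}  # speaker_id -> last known bearing
        self._next_id = 0

    def update(self, detections: list[float]) -> dict[int, float]:
        """Assign each detected bearing to the nearest existing speaker if
        one lies within the gate; otherwise register a new speaker."""
        for bearing in detections:
            match = min(self.bearings.items(),
                        key=lambda kv: circular_diff(kv[1], bearing),
                        default=None)
            if match and circular_diff(match[1], bearing) <= self.gate_deg:
                self.bearings[match[0]] = bearing   # known speaker moved
            else:
                self.bearings[self._next_id] = bearing  # new speaker appeared
                self._next_id += 1
        return dict(self.bearings)

tracker = SpeakerTracker()
print(tracker.update([40.0, 200.0]))  # {0: 40.0, 1: 200.0}
print(tracker.update([55.0, 195.0]))  # both turned: {0: 55.0, 1: 195.0}
```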
The system functioned when tested in 10 indoor and outdoor settings. And in a 29-participant test, users preferred the system over models that didn’t track speakers through space.
In a separate user test, most participants preferred a delay of 3-4 seconds, since the system made more errors when translating with a delay of 1-2 seconds. The team is working to reduce the translation delay in future iterations. The system currently works only on commonplace speech, not specialized language such as technical jargon. For this paper, the team worked with Spanish, German and French, but previous work on translation models has shown they can be trained to translate around 100 languages.
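One way to picture that latency-accuracy tradeoff in code is a streaming translator that waits until it has buffered delay_s seconds of audio before emitting output: a longer buffer gives the model more context (fewer errors) at the cost of a longer wait. The buffering scheme below is illustrative only, not the paper’s streaming architecture.

```python
from collections import deque

class ChunkedTranslator:
    def __init__(self, delay_s: float, frame_s: float = 0.25):
        self.frames_per_chunk = int(delay_s / frame_s)
        self.buffer: deque[bytes] = deque()

    def push_frame(self, frame: bytes) -> bytes | None:
        """Add one mic frame; return a translated chunk once the buffer
        spans the configured delay, else None (still waiting)."""
        self.buffer.append(frame)
        if len(self.buffer) < self.frames_per_chunk:
            return None
        chunk = b"".join(self.buffer)
        self.buffer.clear()
        return self._translate(chunk)

    def _translate(self, chunk: bytes) -> bytes:
        return chunk  # placeholder for the on-device translation model

# With delay_s=3.0 the wearer hears translations roughly 3 seconds behind
# the speaker, the setting most study participants preferred over a
# faster but more error-prone 1-2 second buffer.
translator = ChunkedTranslator(delay_s=3.0)
out = [translator.push_frame(b"\x00" * 800) for _ in range(12)]
# out[:11] are None; out[11] is the first translated chunk (~3 s of audio)
```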
“This is a step toward breaking down the language barriers between cultures,” Chen said. “So if I’m walking down the street in Mexico, even though I don’t speak Spanish, I can translate all the people’s voices and know who said what.”
Qirui Wang, a research intern at HydroX AI who was a UW undergraduate in the Allen School while completing this research, and Runlin He, a UW doctoral student in the Allen School, are also co-authors on this paper. This research was funded by a Moore Inventor Fellow award.
For more information, contact the researchers at babelfish@cs.washington.edu.