    Details
    Poster
    Presenter(s)
    Yu-Wen Chen
    Affiliation
    Academia Sinica
    Country
    Taiwan
    Abstract

    Synthesized speech from articulatory movements can have real-world applications for patients with vocal cord disorders, in situations requiring silent speech, or in high-noise environments. In this work, we present EMA2S, an end-to-end multimodal articulatory-to-speech system that directly converts articulatory movements into speech signals. We use a neural-network-based vocoder combined with multimodal joint training, incorporating spectrogram, mel-spectrogram, and deep features. The experimental results confirm that the multimodal approach of EMA2S outperforms the baseline system in terms of both objective and subjective evaluation metrics. Moreover, the results demonstrate that joint mel-spectrogram and deep-feature loss training can effectively improve system performance.

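    As a rough illustration of the joint multimodal training objective described in the abstract, the sketch below combines spectrogram, mel-spectrogram, and deep-feature losses into a single weighted loss. This is not the authors' implementation; the PyTorch modules, loss weights, feature extractor, and tensor shapes are assumptions made purely for the example.

    # Illustrative sketch only (not the EMA2S release code): a joint training
    # objective that sums spectrogram, mel-spectrogram, and deep-feature losses.
    # Module names, loss weights, and tensor shapes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeepFeatureExtractor(nn.Module):
        # Stand-in for an auxiliary network whose activations serve as the
        # "deep features" compared between predicted and target speech.
        def __init__(self, n_bins=513, n_feat=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_bins, n_feat), nn.ReLU())

        def forward(self, spec):
            return self.net(spec)

    def joint_loss(pred_spec, true_spec, pred_mel, true_mel, feat_net,
                   w_spec=1.0, w_mel=1.0, w_deep=1.0):
        # Weighted sum of the three loss terms (weights are illustrative).
        l_spec = F.mse_loss(pred_spec, true_spec)
        l_mel = F.mse_loss(pred_mel, true_mel)
        l_deep = F.mse_loss(feat_net(pred_spec), feat_net(true_spec))
        return w_spec * l_spec + w_mel * l_mel + w_deep * l_deep

    # Usage with dummy tensors of shape (batch, frames, bins).
    feat_net = DeepFeatureExtractor()
    pred_spec, true_spec = torch.rand(2, 100, 513), torch.rand(2, 100, 513)
    pred_mel, true_mel = torch.rand(2, 100, 80), torch.rand(2, 100, 80)
    print(joint_loss(pred_spec, true_spec, pred_mel, true_mel, feat_net))
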
    Slides
    • EMA2S: An End-to-End Multimodal Articulatory-to-Speech System (application/pdf)