    Author(s)
    Xin Yao, Central South University
    Senquan An, Central South University
    Abstract

    The widespread use of voice assistants has generated a vast amount of user voice data, advancing technologies such as speech recognition but also introducing privacy and security risks. Voice data carries the speaker's identity, and even a small amount of it is enough to mount a linking attack or other malicious attacks. We use differential privacy to formally define user privacy in speech publishing and propose a differential-privacy-compliant algorithm that perturbs the user's x-vector. Users can tune our scheme to balance privacy against voice authenticity, which is significant for practical user data protection. We also evaluate the scheme on real-world datasets using a variety of existing speaker anonymization evaluation metrics. The results show that our scheme is general and effectively balances the privacy and authenticity of anonymized speech across these metrics.
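
    The abstract does not specify the perturbation mechanism, so the following is only a minimal illustrative sketch, not the paper's algorithm: a generic Laplace mechanism applied to an x-vector, where L1 clipping bounds the sensitivity and a user-chosen epsilon sets the privacy/authenticity trade-off. The names (anonymize_xvector, clip_norm, epsilon) and the 512-dimensional placeholder vector are assumptions for illustration.

    # Illustrative sketch only: a generic Laplace mechanism on an x-vector,
    # not the mechanism proposed in the paper.
    import numpy as np

    def anonymize_xvector(xvector: np.ndarray, epsilon: float, clip_norm: float = 1.0) -> np.ndarray:
        """Perturb an x-vector with Laplace noise calibrated to a clipped L1 sensitivity."""
        # Clip the L1 norm so any two possible clipped vectors differ by at most 2 * clip_norm.
        l1 = np.abs(xvector).sum()
        clipped = xvector * min(1.0, clip_norm / max(l1, 1e-12))
        # Standard epsilon-DP calibration: Laplace scale = sensitivity / epsilon.
        sensitivity = 2.0 * clip_norm
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=clipped.shape)
        return clipped + noise

    # Smaller epsilon means stronger privacy but larger distortion of the voice characteristics.
    xvec = np.random.randn(512)                     # placeholder 512-dim x-vector
    private_xvec = anonymize_xvector(xvec, epsilon=5.0)

    In this kind of scheme, epsilon is the knob the user would adjust to trade anonymity against how faithfully the synthesized speech preserves the original voice.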