VALL-E exhibits in-context learning capabilities and can synthesize high-quality personalized speech from just a 3-second enrolled recording of an unseen speaker used as an acoustic prompt. Experimental results show that VALL-E significantly outperforms state-of-the-art zero-shot TTS systems in terms of speech naturalness and speaker similarity. Furthermore, we found that VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt during synthesis.
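To make the zero-shot workflow concrete, here is a minimal Python sketch of the usage pattern described above: clip a 3-second enrolled recording as the acoustic prompt and condition synthesis on it. The `ZeroShotTTS` interface, the `synthesize` method, and the `personalize` helper are hypothetical illustrations, not the actual VALL-E API.

```python
# Hypothetical sketch of the prompt-conditioned zero-shot TTS workflow.
# ZeroShotTTS, synthesize, and personalize are illustrative stand-ins,
# not the real VALL-E interface.
from typing import Protocol

import numpy as np

PROMPT_SECONDS = 3  # length of the enrolled recording used as the acoustic prompt


class ZeroShotTTS(Protocol):
    """Minimal interface a VALL-E-style model might expose (assumed)."""

    def synthesize(
        self, text: str, acoustic_prompt: np.ndarray, sample_rate: int
    ) -> np.ndarray:
        """Return a waveform speaking `text` in the prompt speaker's voice."""
        ...


def personalize(
    model: ZeroShotTTS, text: str, enrollment: np.ndarray, sample_rate: int
) -> np.ndarray:
    """Clip the enrollment audio to a 3-second acoustic prompt and synthesize."""
    prompt = enrollment[: PROMPT_SECONDS * sample_rate]
    return model.synthesize(text, acoustic_prompt=prompt, sample_rate=sample_rate)
```

Because the speaker identity comes entirely from the acoustic prompt, no fine-tuning or speaker-specific training is needed; the same sketch applies to any unseen speaker.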
Tech Used:
VALL-E