Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling

Songxiang Liu, Yuewen Cao, Disong Wang, Xixin Wu, Xunying Liu and Helen Meng
Human-Computer Communications Laboratory (HCCL), The Chinese University of Hong Kong

Abstract

This paper proposes an any-to-many, location-relative, sequence-to-sequence (seq2seq) based, non-parallel voice conversion (VC) approach, which combines a bottleneck feature extractor (BNE) with a seq2seq based synthesis module. During training, an encoder-decoder based hybrid connectionist-temporal-classification-attention (CTC-attention) phoneme recognizer is trained, whose encoder contains a bottleneck layer. The BNE is obtained from this recognizer and is used to extract speaker-independent, dense and rich linguistic representations from spectral features. A multi-speaker, location-relative attention based seq2seq synthesis model is then trained to reconstruct spectral features from the bottleneck features, conditioned on speaker representations that control the speaker identity of the generated speech. To mitigate the difficulty seq2seq models have in aligning long sequences, we down-sample the input spectral features along the temporal dimension and equip the synthesis model with a discretized mixture of logistics (MoL) attention mechanism. Since the phoneme recognizer is trained on a large speech recognition corpus, the proposed approach can conduct any-to-many voice conversion. Objective and subjective evaluations show that the proposed any-to-many approach achieves superior conversion performance in terms of both naturalness and speaker similarity. Ablation studies confirm the effectiveness of the feature selection and model design strategies. The proposed approach readily extends to any-to-any VC (also known as one-shot/few-shot VC) and achieves high performance according to objective and subjective evaluations.
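To make the location-relative MoL attention concrete, below is a minimal PyTorch-style sketch of one plausible formulation (class and parameter names such as MoLAttention and param_net are illustrative, not taken from the released code): at each decoder step, the network predicts mixture weights, scales, and non-negative mean offsets, so the mixture means can only move forward along the encoder axis, and each logistic component is discretized by taking CDF differences over encoder frame positions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoLAttention(nn.Module):
    """Discretized mixture-of-logistics (MoL) attention, location-relative.

    Minimal sketch under assumed shapes and names; the paper's
    implementation may parameterize the mixture differently.
    """

    def __init__(self, query_dim: int, num_mixtures: int = 5):
        super().__init__()
        self.num_mixtures = num_mixtures
        # Predict per-mixture weight logits, mean offsets, and log-scales
        # from the current decoder state (the attention query).
        self.param_net = nn.Linear(query_dim, 3 * num_mixtures)

    def forward(self, query, prev_means, memory_len):
        # query:      (B, query_dim) decoder state at the current step
        # prev_means: (B, K)         mixture means from the previous step
        # memory_len: int            number of encoder frames T
        B = query.size(0)
        params = self.param_net(query).view(B, 3, self.num_mixtures)
        logit_w, delta, log_s = params[:, 0], params[:, 1], params[:, 2]

        w = F.softmax(logit_w, dim=-1)             # mixture weights, sum to 1
        means = prev_means + F.softplus(delta)     # location-relative update:
                                                   # means can only move forward
        scales = torch.exp(log_s).clamp(min=1e-4)  # positive logistic scales

        # Discretize each logistic over unit intervals around frame indices:
        # p(j) = sigmoid((j + 0.5 - mu) / s) - sigmoid((j - 0.5 - mu) / s).
        j = torch.arange(memory_len, device=query.device, dtype=torch.float32)
        j = j.view(1, -1, 1)                       # (1, T, 1)
        mu = means.unsqueeze(1)                    # (B, 1, K)
        s = scales.unsqueeze(1)                    # (B, 1, K)
        comp = torch.sigmoid((j + 0.5 - mu) / s) - torch.sigmoid((j - 0.5 - mu) / s)
        probs = (w.unsqueeze(1) * comp).sum(dim=-1)            # (B, T)
        align = probs / probs.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return align, means

# Usage sketch: start the means at zero and advance them step by step.
attn = MoLAttention(query_dim=256, num_mixtures=5)
query = torch.randn(4, 256)        # batch of 4 decoder states
means = torch.zeros(4, 5)          # initial mixture means
align, means = attn(query, means, memory_len=120)   # align: (4, 120)
```

Because the means are updated only by a non-negative offset, the alignment is encouraged to move monotonically forward, which is what makes this family of attention mechanisms robust on the long down-sampled bottleneck-feature sequences described above.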

Paper: arXiv
Source code

Any-to-Many VC Demo (10 utterances randomly chosen from the test set)

Female (clb) to Male (bdl)

Source | Target | BNE-Seq2seqMoL (Proposed) | Seq2seqPR-DurIAN (Proposed) | PPG-VC | NonParaSeq2seq-VC

Female (clb) to Female (slt)

Source | Target | BNE-Seq2seqMoL (Proposed) | Seq2seqPR-DurIAN (Proposed) | PPG-VC | NonParaSeq2seq-VC

Male (rms) to Male (bdl)

Source | Target | BNE-Seq2seqMoL (Proposed) | Seq2seqPR-DurIAN (Proposed) | PPG-VC | NonParaSeq2seq-VC

Male (rms) to Female (slt)

Source | Target | BNE-Seq2seqMoL (Proposed) | Seq2seqPR-DurIAN (Proposed) | PPG-VC | NonParaSeq2seq-VC

Any-to-Any VC Demo of the BNE-Seq2seqMoL Approach (also known as one-shot/few-shot VC; 10 utterances randomly chosen from the test set)

Female (clb) to Male (bdl)

Source | Target | BNE-Seq2seqMoL (Proposed)

Female (clb) to Female (slt)

Source | Target | BNE-Seq2seqMoL (Proposed)

Male (rms) to Male (bdl)

Source | Target | BNE-Seq2seqMoL (Proposed)

Male (rms) to Female (slt)

Source | Target | BNE-Seq2seqMoL (Proposed)

Previous Demo Pages

Voice Conversion Across Arbitrary Speakers Based on a Single Target-Speaker Utterance (Demo page | Paper)

Jointly Trained Conversion Model and WaveNet Vocoder for Non-Parallel Voice Conversion Using Mel-Spectrograms and Phonetic Posteriorgrams (Demo page | Paper)

End-to-End Accent Conversion Without Using Native Utterances (Demo page | Paper)

Transferring Source Style in Non-Parallel Voice Conversion (Demo page | Paper)