Abstract: We introduce Wav2Seq, the first self-supervised approach to pre-train both parts of encoder-decoder models for speech data. We induce a pseudo language as a compact discrete representation, ...