A comparison of EMG-to-Speech Conversion for Isolated and Continuous Speech
Reference:
A comparison of EMG-to-Speech Conversion for Isolated and Continuous Speech (Lorenz Diener, Sebastian Bredehöft, Tanja Schultz), at the 13th ITG Conference on Speech Communication, October 2018
BibTeX Entry:
@inproceedings{diener2018comparison,
  title        = {A comparison of EMG-to-Speech Conversion for Isolated and Continuous Speech},
  author       = {Lorenz Diener and Sebastian Bredehöft and Tanja Schultz},
  year         = 2018,
  month        = oct,
  booktitle    = {13th {ITG} Conference on Speech Communication},
  isbn         = {978-3-8007-4767-2},
  abstract     = {This paper presents initial results of performing EMG-to-Speech conversion within
    our new EMG-to-Speech corpus. This new corpus consists of parallel facial array sEMG and read
    audible speech signals recorded from multiple speakers. It contains different styles of
    utterances - continuous sentences, isolated words, and isolated consonant-vowel combinations -
    which allows us to evaluate the performance of EMG-to-Speech conversion when trying to convert
    these different styles of utterance as well as the effect of training systems on one style to
    convert another. We find that our system deals with isolated-word/consonant-vowel utterances
    better than with continuous speech. We also find that it is possible to use a model trained on
    one style to convert utterances from another - however, performance suffers compared to training
    within that style, especially when going from isolated to continuous speech.},
  url          = {https://halcy.de/cites/pdf/diener2018comparison.pdf},
}