Masashi UNOKI and Masato AKAGI,
"Segregation of vowel in background noise using the model of segregating two acoustic sources based on auditory scene analysis,"
Proc. of EuroSpeech'99, vol. 6, pp. 2575-2578, Budapest, Hungary, Sept. 1999.
Last modified:
2 June 2001


Abstract

This paper proposes an auditory sound segregation model based on auditory scene analysis. The model addresses the problem of segregating two acoustic sources by using constraints related to the heuristic regularities proposed by Bregman and by improving our previously proposed model. The improvement reconsiders constraints on the continuity of instantaneous phases, in addition to the existing constraints on the continuity of instantaneous amplitudes and fundamental frequencies, so that the desired signal can be segregated precisely from a noisy signal, even at the waveform level. Simulations in which a real vowel was segregated from a noisy vowel, comparing results obtained with all of the constraints against results with only some of them, showed that the improved model can segregate real speech precisely, even in waveforms, when all the constraints related to the four regularities are used, and that omitting some of the constraints reduces segregation accuracy.
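The constraints above apply to quantities such as the instantaneous amplitude and instantaneous phase of a signal. The following is a minimal sketch, not the authors' model, of how these quantities are commonly extracted via the analytic signal (Hilbert transform); the amplitude-modulated test tone is a hypothetical stand-in for a vowel-like signal.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000                                  # sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1.0 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)   # slow amplitude modulation
carrier = np.sin(2 * np.pi * 440 * t)              # vowel-like carrier tone
x = envelope * carrier

# Analytic signal: x(t) + j * Hilbert{x}(t)
analytic = hilbert(x)
inst_amplitude = np.abs(analytic)                  # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))         # instantaneous phase (unwrapped)

# Away from the signal edges, the recovered envelope should track the true one;
# continuity constraints of the kind described above would be imposed on
# inst_amplitude and inst_phase trajectories.
err = np.max(np.abs(inst_amplitude[200:-200] - envelope[200:-200]))
```

A segregation model then restricts how such amplitude and phase trajectories may evolve over time when reassigning spectral components to sources.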

Download
  • Electronic version
    [gzipped PostScript file] (136 kB)
    [PDF file] (1.2 MB)
  • Created by M. Unoki, 6 Nov. 2000