Attention-based audio-visual fusion for robust automatic speech recognition
Citation:
Sterpu, G., Saam, C., Harte, N. Attention-based audio-visual fusion for robust automatic speech recognition. ICMI '18, October 16-20, 2018, Boulder, CO, USA.
Abstract:
Automatic speech recognition can potentially benefit from lip motion patterns, which complement acoustic speech and improve overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes beyond simple feature concatenation and learns to automatically align the two modalities, leading to enhanced representations that increase recognition accuracy in both clean and noisy conditions. We test our strategy on the TCD-TIMIT and LRS2 datasets, designed for large vocabulary continuous speech recognition, applying three types of noise at different power ratios. We also build on state-of-the-art Sequence-to-Sequence architectures, showing that our method can be easily integrated. Results show relative improvements from 7% up to 30% on TCD-TIMIT over the acoustic modality alone, depending on the acoustic noise level. We anticipate that the fusion strategy can easily generalise to many other multimodal tasks which involve correlated modalities.
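To make the fusion idea concrete, the sketch below shows one plausible form of attention-based audio-visual fusion: acoustic encoder states attend over visual encoder states, and the attended visual context is concatenated with the acoustic features to form the fused representation. This is a minimal illustration assuming PyTorch; the module name, dimensions, and single cross-attention layer are assumptions for clarity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative cross-modal attention fusion (hypothetical, not the paper's exact model)."""

    def __init__(self, audio_dim=256, video_dim=256, num_heads=4):
        super().__init__()
        # Project video features to the audio feature width so they can
        # serve as keys/values for the acoustic queries.
        self.video_proj = nn.Linear(video_dim, audio_dim)
        self.cross_attn = nn.MultiheadAttention(audio_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(2 * audio_dim, audio_dim)

    def forward(self, audio_feats, video_feats):
        # audio_feats: (batch, T_audio, audio_dim)
        # video_feats: (batch, T_video, video_dim); the two streams may have
        # different frame rates, and the attention learns their alignment.
        video_kv = self.video_proj(video_feats)
        video_context, _ = self.cross_attn(query=audio_feats, key=video_kv, value=video_kv)
        # Fuse by concatenating acoustic features with the attended visual context.
        fused = torch.cat([audio_feats, video_context], dim=-1)
        return self.out_proj(fused)

# Example: 100 acoustic frames fused with 25 video frames.
fusion = AttentionFusion()
audio = torch.randn(2, 100, 256)
video = torch.randn(2, 25, 256)
print(fusion(audio, video).shape)  # torch.Size([2, 100, 256])
```

The fused sequence keeps the acoustic time base, so it can replace the audio-only encoder output in a Sequence-to-Sequence recogniser without further changes to the decoder.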
Sponsor:
Science Foundation Ireland (SFI)
Grant Number:
13/RC/2106
Author's Homepage:
http://people.tcd.ie/nharte
Author: Harte, Naomi
Other Titles:
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction
Type of material:
Conference Paper
Availability:
Full text available
Keywords:
Automatic speech recognition, Lipreading, Audio-Visual Speech Recognition, Multimodal Fusion, Multimodal Interfaces
DOI:
http://dx.doi.org/10.1145/3242969.3243014