On the use of multimodal cues for the prediction of involvement in spontaneous conversation
Citation:
Catharine Oertel, Stefan Scherer, Nick Campbell, "On the use of multimodal cues for the prediction of involvement in spontaneous conversation", Interspeech 2011, Florence, Italy, 28-31 August 2011, pp. 1541-1544.

Abstract:
Quantifying the degree of involvement of a group of participants in a conversation is a task which humans accomplish every day, but it is something that, as of yet, machines are unable to do. In this study we first investigate the correlation between visual cues (gaze and blinking rate) and involvement. We then test the suitability of prosodic cues (acoustic model) as well as gaze and blinking (visual model) for the prediction of the degree of involvement by using a support vector machine (SVM). We also test whether the fusion of the acoustic and the visual model improves the prediction. We show that we are able to predict three classes of involvement with a reduction of error rate of 0.30 (accuracy = 0.68).
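The abstract describes SVM-based three-class involvement prediction from prosodic and visual (gaze, blinking) features, with an acoustic-visual fusion model. The following is a minimal sketch of that kind of setup, not the authors' code: the feature names, dimensions, and the early-fusion strategy (concatenating feature vectors) are illustrative assumptions, shown here with scikit-learn on synthetic data.

# Sketch only (assumptions, not the paper's implementation): feature-level
# fusion of acoustic (prosodic) and visual (gaze, blink-rate) features for
# three-class involvement prediction with an SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_segments = 300

# Hypothetical per-segment features (placeholders for real extracted cues).
acoustic = rng.normal(size=(n_segments, 6))   # e.g. pitch/energy statistics
visual = rng.normal(size=(n_segments, 2))     # e.g. mutual-gaze ratio, blink rate
labels = rng.integers(0, 3, size=n_segments)  # three involvement classes

def evaluate(features, labels):
    """Cross-validated accuracy of an RBF-kernel SVM on one feature set."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, features, labels, cv=5).mean()

# Compare unimodal models against an early-fusion model built by
# concatenating the acoustic and visual feature vectors.
print("acoustic only:", evaluate(acoustic, labels))
print("visual only:  ", evaluate(visual, labels))
print("fused:        ", evaluate(np.hstack([acoustic, visual]), labels))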
Sponsor:
Irish Research Council for Science Engineering and Technology
Science Foundation Ireland (Grant Number: 09/IN.1/I2631)
Author's Homepage:
http://people.tcd.ie/oertelgc
Description:
Published; Florence, Italy
Author: Oertel gen. Bierbach, Catharine
Other Titles:
INTERSPEECH-2011; Interspeech 2011
Type of material:
Conference Paper
Availability:
Full text available
Keywords:
multimodality, conversational involvement
Licences: