Acoustic Features in Dialogue Dominate Accurate Personality Trait Classification
File Type: PDF
Item Type: Conference Paper
Date: 2020
Access: openAccess
Citation: Koutsombogera, M., Sarthy, P., Vogel, C., "Acoustic Features in Dialogue Dominate Accurate Personality Trait Classification", 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7-9 September 2020
Download Item: KSV_ICHMS.pdf (Published (author's copy) - Peer Reviewed) 90.10Kb
Abstract:
We report on experiments in identifying personality traits from the dialogue of participants in the MULTISIMO corpus. The experiments used audio and linguistic features from participants' speech and transcripts, drawing on both self- and observer personality reports. Contrary to our expectation that linguistic content would best predict traits, the results highlight the multimodal nature of personality computing, suggesting that content is less important than acoustics: in all but two cases, models based on acoustic features alone, or on acoustic features combined with linguistic features, outperform models based on linguistic features alone. The results also show that there is no single optimal model or feature set for predicting a trait across personality reports, as different models work best for different traits.
Sponsor: European Commission
Grant Number: 701621
Author's Homepage:
http://people.tcd.ie/koutsomm
http://people.tcd.ie/vogel
Author: Koutsombogera, Maria; Vogel, Carl
Other Titles: 2020 IEEE International Conference on Human-Machine Systems (ICHMS)
Type of material: Conference Paper
Collections:
Availability: Full text available
Subject (TCD): Data Analysis
DOI: 10.1109/ICHMS49158.2020.9209445
Source URI: http://multisimo.eu/datasets.html
Licences: