
Deep multimodal fusion for persuasiveness prediction

Published: 31 October 2016

Abstract

Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel for propagating ideas and opinions, analyzing persuasiveness has grown increasingly important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture that leverages complementary information from the individual modalities to predict persuasiveness. Our methods show significant improvement in performance over previous approaches.
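The abstract only names the fusion architecture without detailing it. As a rough illustration of the general idea — separate subnetworks embed each modality, and the embeddings are concatenated and passed through a joint layer that outputs a persuasiveness score — here is a minimal NumPy sketch. All dimensions, weight shapes, and modality names below are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def modality_subnet(x, w1, w2):
    # Two-layer embedding for one modality's feature vector.
    return relu(relu(x @ w1) @ w2)

# Hypothetical per-modality feature dimensions.
dims = {"acoustic": 40, "visual": 30, "verbal": 50}
hidden, embed = 16, 8

# Randomly initialized weights stand in for trained parameters.
weights = {m: (rng.standard_normal((d, hidden)) * 0.1,
               rng.standard_normal((hidden, embed)) * 0.1)
           for m, d in dims.items()}

# Joint fusion layer: concatenated embeddings -> scalar score.
w_fuse = rng.standard_normal(embed * len(dims)) * 0.1

def predict_persuasiveness(features):
    # Embed each modality separately, fuse by concatenation,
    # then squash to (0, 1) with a sigmoid.
    embs = [modality_subnet(features[m], *weights[m]) for m in dims]
    z = np.concatenate(embs) @ w_fuse
    return 1.0 / (1.0 + np.exp(-z))

sample = {m: rng.standard_normal(d) for m, d in dims.items()}
score = predict_persuasiveness(sample)
print(score)  # a value in (0, 1)
```

In practice each subnetwork would be trained jointly with the fusion layer (e.g. in Keras, which the paper cites as tooling), letting the model weight complementary cues from the acoustic, visual, and verbal streams — the property the abstract credits for coping with the small amount of annotated data.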



Published In

ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction
October 2016
605 pages
ISBN:9781450345569
DOI:10.1145/2993148
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Persuasiveness
  2. deep neural networks
  3. multimodal fusion

Qualifiers

  • Short-paper

Conference

ICMI '16

Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%

