The Emotions of the Crowd

Learning Image Sentiment from Tweets via Cross-modal Distillation

ECAI 2023

Alessio Serra1, Fabio Carrara2, Maurizio Tesconi3, Fabrizio Falchi2
1Università di Pisa 2ISTI CNR 3IIT CNR

Method Overview

Our method boils down to the following three steps:

  1. Training Data Collection: We collect and deduplicate multimodal (text + images) data from public social media posts.
  2. Cross-modal Distillation: We perform cross-modal knowledge distillation from a frozen textual teacher model to a student visual model using paired (text, image) samples (see the sketch after this list).
  3. Evaluation Phase: We evaluate the trained visual model on visual-only sentiment analysis.
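
For concreteness, here is a minimal PyTorch sketch of the distillation step (step 2). It is not the paper's exact recipe: `teacher`, `student`, the optimizer, and the batch tensors are hypothetical stand-ins, and we assume both models emit logits over three sentiment classes (positive, neutral, negative).

```python
# Minimal sketch of cross-modal distillation: a frozen textual sentiment
# teacher supervises a visual student on paired (text, image) samples.
# `teacher` and `student` are hypothetical callables returning class logits.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, text_batch, image_batch):
    with torch.no_grad():                                  # teacher stays frozen
        soft_labels = teacher(text_batch).softmax(dim=-1)
    log_probs = student(image_batch).log_softmax(dim=-1)   # student sees images only
    loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```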

Abstract

Trends and opinion mining in social media increasingly focus on novel interactions involving visual media, like images and short videos, in addition to text.

In this work, we tackle the problem of visual sentiment analysis of social media images — specifically, the prediction of image sentiment polarity. While previous work relied on manually labeled training sets, we propose an automated approach for building sentiment polarity classifiers based on a cross-modal distillation paradigm; starting from scraped multimodal (text + images) data, we train a student model on the visual modality based on the outputs of a textual teacher model that analyses the sentiment of the corresponding textual modality.

We applied our method to randomly collected images crawled from Twitter over three months and produced, after automatic cleaning, a weakly-labeled dataset of ∼1.5 million images. Despite exploiting noisy labeled samples, our training pipeline produces classifiers showing strong generalization capabilities and outperforming the current state of the art on five manually labeled benchmarks for image sentiment polarity prediction.
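
The page does not detail the automatic cleaning pipeline. As an illustration of the deduplication mentioned in step 1 of the method overview, a near-duplicate image filter based on perceptual hashing (our assumption; the paper's actual procedure may differ) could look like the following, using the `imagehash` library and an arbitrary distance threshold.

```python
# Illustrative near-duplicate image filter via perceptual hashing; this is
# an assumed stand-in for the paper's cleaning step. O(n^2) as written:
# bucket by exact hash first when scaling to millions of images.
from PIL import Image
import imagehash

def deduplicate(image_paths, max_distance=4):
    seen_hashes, kept = [], []
    for path in image_paths:
        h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
        # `h - s` is the Hamming distance between two hashes
        if all(h - s > max_distance for s in seen_hashes):
            seen_hashes.append(h)
            kept.append(path)
    return kept
```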

Examples

Step 3: Evaluation

Cherry-picked examples of predictions of our best model (ViT-L/16) on the Twitter Dataset benchmark. The first two rows contain correctly classified images with positive and negative sentiment polarity, respectively. The third row contains negative-labeled samples misclassified as positive, and the last row contains positive-labeled samples misclassified as negative. Note that several of the misclassified samples look ambiguous even to human labelers, whose judgment varies with personal sensibility.

Results

Table 3. Ablation study. Accuracy of sentiment prediction on the three Twitter Dataset benchmarks (at-least-five-, four-, and three-agreement subsets). Experiments 3.1 – 3.2 show the effects of confidence filtering. The confidence filter columns report the values used for the parameters $\{c_j\}_{j=1}^{3}$ in Equation 2. Experiments 3.4 – 3.5 investigate training data collection period and scale. A = set of tweets collected in Jul–Dec 2016 by [40]. B = set of tweets collected in Apr–Jun 2022 by us. Experiments 3.6 – 3.8 show the effect of model size and input patch size. The model column indicates the student architecture, i.e., a Base or Large ViT with an input patch size of 32 or 16.
| #   | Dataset | Confidence Filter (🙂 / 😐 / 🙁) | Student Model | 5 agree | ≥4 agree | ≥3 agree |
|-----|---------|----------------------------------|---------------|---------|----------|----------|
| 3.1 | A       | – / – / –                        | B/32          | 82.2    | 78.0     | 75.5     |
| 3.2 | A       | 0.70 / 0.70 / 0.70               | B/32          | 84.7    | 79.7     | 76.6     |
| 3.3 | B       | 0.70 / 0.70 / 0.70               | B/32          | 82.3    | 78.7     | 75.3     |
| 3.4 | B       | 0.90 / 0.90 / 0.70               | B/32          | 84.0    | 80.3     | 77.1     |
| 3.5 | A+B     | 0.90 / 0.90 / 0.70               | B/32          | 86.5    | 82.6     | 78.9     |
| 3.6 | A+B     | 0.90 / 0.90 / 0.70               | L/32          | 85.0    | 82.4     | 79.4     |
| 3.7 | A+B     | 0.90 / 0.90 / 0.70               | B/16          | 87.0    | 83.1     | 81.0     |
| 3.8 | A+B     | 0.90 / 0.90 / 0.70               | L/16          | 87.8    | 84.8     | 81.9     |
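
Reading the caption above, the confidence filter plausibly keeps a weakly-labeled sample only when the teacher's probability for the predicted class j clears the class threshold c_j. A sketch of that rule follows; the (positive, neutral, negative) column order and function names are our assumptions, not taken from the paper.

```python
# Sketch of per-class confidence filtering: keep a sample only if the
# teacher's top probability clears the threshold of its predicted class.
# The (positive, neutral, negative) column order is an assumption.
import numpy as np

def confidence_filter(teacher_probs, thresholds=(0.90, 0.90, 0.70)):
    """teacher_probs: (N, 3) softmax outputs; returns a boolean keep-mask."""
    predicted_class = teacher_probs.argmax(axis=1)
    confidence = teacher_probs.max(axis=1)
    return confidence >= np.asarray(thresholds)[predicted_class]
```
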
Table 4. Accuracy on standard benchmarks for visual-only image sentiment polarity prediction compared with state-of-the-art predictors.
| Model                        | Twitter 5 agree | Twitter ≥4 agree | Twitter ≥3 agree | EmotionROI | FI         |
|------------------------------|-----------------|------------------|------------------|------------|------------|
| Chen et al. [10]*            | 76.4            | 70.2             | 71.3             | 70.1       | 61.5       |
| You et al. [43]*             | 82.5            | 76.5             | 76.4             | 73.6       | 75.3       |
| Jou et al. [17]†             | 83.9 ± 0.3      | –                | –                | –          | –          |
| Vadicamo et al. [40]         | 89.6            | 86.6             | 82.0             | –          | –          |
| Yang et al. [42]*            | 88.7            | 87.1             | 81.1             | 81.3       | 86.4       |
| Wu et al. [41]               | 89.5            | 87.0             | 81.7             | 83.0       | 88.8       |
| Ours (ViT-L/16, zero-shot)   | 87.8            | 84.8             | 81.9             | 64.1       | 76.0       |
| Ours (ViT-L/16, fine-tuned)  | 92.4 ± 2.0      | 90.2 ± 2.0       | 86.3 ± 3.0       | 83.9 ± 1.0 | 89.4 ± 0.1 |

*As reported by [41]. †As reported by [8].
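
As a usage note, these benchmarks are binary (positive vs. negative) while the distilled student predicts three classes. A zero-shot evaluation sketch follows, assuming polarity is obtained by comparing the positive and negative scores; `student`, `loader`, and the class indices are hypothetical.

```python
# Zero-shot accuracy of the visual student on a binary benchmark; the
# 3-way output is reduced to polarity by comparing positive vs. negative
# logits (our assumption). `student` and `loader` are hypothetical.
import torch

@torch.no_grad()
def binary_accuracy(student, loader, pos_idx=0, neg_idx=2):
    correct, total = 0, 0
    for images, labels in loader:        # labels: 1 = positive, 0 = negative
        logits = student(images)         # shape (B, 3)
        preds = (logits[:, pos_idx] > logits[:, neg_idx]).long()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```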

Dataset

We provide the filtered data described in Table 3 of the paper (Experiments 3.1 – 3.8). Drop us an email to request access:

fabio[dot]carrara[at]isti[dot]cnr[dot]it

BibTeX

@inproceedings{serra2023emotions,
  author    = {Serra, Alessio and Carrara, Fabio and Tesconi, Maurizio and Falchi, Fabrizio},
  editor    = {Kobi Gal and Ann Now{\'{e}} and Grzegorz J. Nalepa and Roy Fairstein and Roxana Radulescu},
  title     = {The Emotions of the Crowd: Learning Image Sentiment from Tweets via Cross-Modal Distillation},
  booktitle = {{ECAI} 2023 - 26th European Conference on Artificial Intelligence, September 30 - October 4, 2023, Krak{\'{o}}w, Poland - Including 12th Conference on Prestigious Applications of Intelligent Systems ({PAIS} 2023)},
  series    = {Frontiers in Artificial Intelligence and Applications},
  volume    = {372},
  pages     = {2089--2096},
  publisher = {{IOS} Press},
  year      = {2023},
  url       = {https://doi.org/10.3233/FAIA230503},
  doi       = {10.3233/FAIA230503},
}

Acknowledgements

This work has received financial support from the European Union's Horizon 2020 Research & Innovation Programme under Grant agreement N. 951911 (AI4Media - A European Excellence Centre for Media, Society and Democracy).


This work has received financial support from the Horizon Europe Research & Innovation Programme under Grant agreement N. 101092612 (Social and hUman ceNtered XR - SUN project).