The field of affective video content analysis, which aims to estimate viewers’ emotions evoked by an input video, is growing as the amount of online video content increases. However, annotating videos with emotions is challenging due to the subjective and ambiguous nature of emotions. This research introduces the Label Distribution Learning (LDL) paradigm to limit the impact of subjectivity by modeling the label of evoked emotions as a distribution rather than a single dominant emotion. In addition, an approach is proposed to automatically annotate the viewers’ emotion distribution based on user-generated comments instead of annotating videos manually. A video dataset with emotion distribution annotations is composed using this method. An Evoked Emotion Distribution Learning (EEDL) model is adopted to estimate the emotion distribution evoked by social media videos. Experiments using the proposed EEDL model on the composed dataset show promising prospects for using LDL in this task.
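To make the LDL idea concrete, the following is a minimal sketch of how per-comment emotion tags could be aggregated into a label distribution and compared against a model's prediction with a KL-divergence loss (a common LDL training objective). The five-emotion label set, the function names, and the smoothing scheme are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter
import math

# Hypothetical emotion label set; the actual set used in the paper may differ.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]

def comments_to_distribution(comment_emotions, smoothing=1e-6):
    """Turn per-comment emotion tags into a normalized label distribution.

    Additive smoothing keeps every probability strictly positive so the
    distribution is safe to use inside a KL-divergence loss.
    """
    counts = Counter(comment_emotions)
    total = sum(counts.values()) + smoothing * len(EMOTIONS)
    return [(counts.get(e, 0) + smoothing) / total for e in EMOTIONS]

def kl_divergence(p, q):
    """KL(p || q): a standard loss for label distribution learning."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: tags extracted from five user comments on one video.
tags = ["joy", "joy", "surprise", "sadness", "joy"]
target = comments_to_distribution(tags)

# A (deliberately poor) uniform prediction, standing in for model output.
uniform = [1 / len(EMOTIONS)] * len(EMOTIONS)
loss = kl_divergence(target, uniform)
```

In this sketch the target distribution for the example video peaks at "joy" (3 of 5 comments), so a model predicting a uniform distribution incurs a positive KL loss that training would then reduce.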
Type: Talk at Meeting of the Technical Committee on Media Experience and Virtual Environment, MVE (メディアエクスペリエンス・バーチャル環境基礎研究会)
Publication date: March 2023