Annotating emotions with crowd-sourcing to improve video recommenders


has title::Crowdsourcing for Emotion Annotation for Improving Video Recommenders
status: finished
Master: project within::Information Sciences
Student name: student name::Wouter Meys
Start: start date::2012/04/02
End: end date::2012/07/19
Supervisor: Michiel Hildebrand
Second supervisor: Lora Aroyo
Thesis: has thesis::Media:WouterMeys.pdf
Poster: has poster::Media:Posternaam.pdf


To create video recommender tools that take sentiment into account, we conducted a study on how people describe sentiment in a video and how sentiment in a video can be detected automatically by means of a video labeling game. First, we ran an experiment in which people classified the sentiment of video fragments through manual, explicit sentiment judgments. Second, we used a video labeling game to classify the sentiment of videos based on the words players entered. After completing both experiments, we compared their results to study whether they correlated.

The results showed inter-rater disagreement in most of the manual sentiment judgments. For the video labeling game, we found that we could classify sentiment from the entered words by using a word list that contains a known sentiment score for every word. Users preferred words classified as positive or neutral, and they responded strongly to the incentives we offered: changes to the interface and extra points for entering words with a known sentiment score. Finally, we detected a similar spread in, and a correlation between, the classifications from the manual sentiment judgments and the classification we created based on automatic sentiment detection.
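The lexicon-based step described above can be illustrated with a minimal sketch: aggregate the known sentiment scores of the labels entered for a video into one score and category. The word list, its scores, and the thresholds below are all illustrative assumptions, not the actual lexicon used in the thesis.

```python
# Toy lexicon: word -> sentiment score in [-1, 1] (assumed values; a real
# setup would use an established sentiment word list).
LEXICON = {
    "happy": 0.8,
    "funny": 0.6,
    "boring": -0.5,
    "sad": -0.7,
    "table": 0.0,
}

def classify_sentiment(labels, lexicon=LEXICON):
    """Average the known sentiment scores of the labels, ignoring words
    without a known score, and map the average to a coarse category."""
    scores = [lexicon[w] for w in labels if w in lexicon]
    if not scores:
        return 0.0, "neutral"  # no scored words: treat as neutral
    score = sum(scores) / len(scores)
    if score > 0.1:
        category = "positive"
    elif score < -0.1:
        category = "negative"
    else:
        category = "neutral"
    return score, category

# Example: labels entered by players for one video fragment.
print(classify_sentiment(["happy", "funny", "boring"]))
```

Unknown words are simply skipped here; awarding extra points for words with a known score, as the study did, directly increases the fraction of labels this kind of scorer can use.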