Data Mining to Detect Temporal Features in Video

Status: finished
Master: Computational Intelligence and Selforganisation
Student name: Koen Pasman
Start date: 2012/02/01
End date: 2012/07/31
Supervisor: Evert Haasdijk
Second reader: Zoltán Szlávik
Company: Sentient
Thesis: Media:Thesis.pdf
Poster: Media:Posternaam.pdf


Abstract

The main question of this research is: “Can we detect temporal features in video?” To make this tractable, we narrow the question to one particular temporal feature we would like to be able to detect: engagement versus detachment, using the observation of a person as a paradigm. We will mine data from video sources for engagement. The idea is that FaceReader (from Vicar Vision) can extract a number of features from each frame; these features are the data we will mine. We would like to determine the underlying principles of engagement bottom-up, so we will mine the (supervised) FaceReader data and see which factors are most influential on engagement.

Besides labeling someone as either engaged or detached, there may also be a number of correlations between (temporal) features. To find these, we would like to use maximal information-based nonparametric exploration (MINE) statistics, such as the maximal information coefficient (MIC). These techniques are useful when looking for undiscovered relationships in large data sets. The data is produced in-house at Vicar Vision: video of women watching four different movies. The data is available, but is not yet labeled (engaged/detached). If it proves unusable, we could use the Patria 2 data or find another suitable source of data.
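As a rough illustration of the MINE approach, the sketch below computes MIC between two synthetic per-frame feature series. MIC scores both linear and nonlinear associations on a common 0–1 scale, which is what makes it attractive for exploratory mining when the form of a relationship is unknown. The minepy library, the parameter values, and the synthetic "smile"/"engagement" series are illustrative assumptions, not part of the actual FaceReader pipeline.

```python
# Minimal sketch: MIC between two per-frame feature series.
# ASSUMPTIONS: the minepy library and the synthetic "smile"/"engagement"
# series are stand-ins for real FaceReader outputs.
import numpy as np
from minepy import MINE

rng = np.random.default_rng(42)

# Synthetic stand-ins for two temporal features, sampled once per frame.
frames = np.arange(1000)
smile = np.sin(frames / 50.0) + rng.normal(0.0, 0.2, size=frames.size)
engagement = smile ** 2 + rng.normal(0.0, 0.2, size=frames.size)  # nonlinear link

mine = MINE(alpha=0.6, c=15)  # default parameters from Reshef et al. (2011)
mine.compute_score(smile, engagement)

# MIC picks up the quadratic relationship that Pearson correlation misses.
print(f"MIC:       {mine.mic():.3f}")
print(f"Pearson r: {np.corrcoef(smile, engagement)[0, 1]:.3f}")  # close to 0
```

In the actual study, the two inputs would be FaceReader's per-frame feature outputs, paired with the engagement/detachment labels once the data has been annotated.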