
QAMAF system and dataset


What is the QAMAF System:
The QAMAF System is a Quality Adaptive Multimodal Affect Recognition System.
In this work we developed a QAMAF for User-Centric Multimedia Indexing.
The developed QAMAF is tested on the hard problem of recognizing human emotions evoked by weakly affective stimuli.
The employed stimuli are 32-second-long piano snippets generated by an algorithmic music composing system.
We also developed a cross-user QAMAF, which is another novelty in the field.

Abstract of the paper (a showcase for QAMAF):
The recent increase in interest in online multimedia streaming platforms has made massive amounts of multimedia information available to everyday users of such services. The available multimedia contents need to be indexed to be searchable and retrievable. Towards indexing multimedia contents, user-centric implicit affective indexing employing emotion detection based on psycho-physiological signals, such as electrocardiography (ECG), galvanic skin response (GSR), electroencephalography (EEG) and face tracking, has recently gained attention. However, real-world psycho-physiological signals obtained from wearable devices and facial trackers are contaminated by various noise sources that can result in spurious emotion detection. Therefore, in this paper we propose the development of psycho-physiological signal quality estimators for uni-modal affect recognition systems. The presented quality adaptive unimodal affect recognition systems performed adequately in classifying users' affect; however, they resulted in high failure rates due to rejection of bad quality samples. Thus, to reduce the affect recognition failure rate, a quality adaptive multimodal fusion scheme is proposed. The proposed quality adaptive multimodal fusion ended up having no failure, while classifying users' arousal/valence and liking with significantly above-chance weighted F1-scores in a cross-user scheme. Another finding of this study is that head movements encode the liking perception of users in response to music snippets. This work also includes the release of the employed dataset, including psycho-physiological signals, their quality annotations, and users' affective self-assessments.
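
To make the idea of quality adaptive fusion more concrete, below is a minimal illustrative sketch in Python of one possible quality-weighted fusion rule: per-modality class probabilities are weighted by an estimated signal quality, low-quality modalities are discarded when possible, and all modalities are used as a fallback so that no sample is rejected. The function and variable names (quality_adaptive_fusion, modality_probs, quality_scores) and the threshold value are hypothetical assumptions for illustration; they do not reproduce the exact quality estimators or fusion rule described in the paper.

    import numpy as np

    def quality_adaptive_fusion(modality_probs, quality_scores, quality_threshold=0.5):
        """Fuse per-modality class probabilities, weighting each modality by its
        estimated signal quality (illustrative sketch, not the paper's method).

        modality_probs : dict, modality name -> array of class probabilities
        quality_scores : dict, modality name -> quality estimate in [0, 1]
        """
        names = list(modality_probs.keys())
        probs = np.stack([np.asarray(modality_probs[m], dtype=float) for m in names])
        quals = np.array([quality_scores[m] for m in names], dtype=float)

        # Keep only modalities whose estimated quality passes the threshold.
        mask = quals >= quality_threshold
        if not mask.any():
            # Fallback: use all modalities instead of failing on the sample.
            mask = np.ones_like(mask, dtype=bool)

        selected = quals[mask]
        if selected.sum() == 0:
            weights = np.full(mask.sum(), 1.0 / mask.sum())
        else:
            weights = selected / selected.sum()

        fused = (weights[:, None] * probs[mask]).sum(axis=0)
        return fused / fused.sum()

    # Hypothetical example: binary arousal classification from three modalities,
    # where the GSR sample is too noisy and is therefore excluded from the fusion.
    probs = {"ECG": [0.60, 0.40], "GSR": [0.30, 0.70], "EEG": [0.55, 0.45]}
    quality = {"ECG": 0.9, "GSR": 0.2, "EEG": 0.7}
    print(quality_adaptive_fusion(probs, quality))

In this sketch the quality estimate acts both as a gate (rejecting bad samples per modality, as in the unimodal systems) and as a fusion weight, which is one simple way to obtain the "no failure" behaviour mentioned in the abstract.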


The paper is published in the proceedings of the
Annual ACM International Conference on Multimedia Retrieval (ICMR), June 2016, New York:
“A Quality Adaptive Multimodal Affect Recognition System for User-Centric Multimedia Indexing”,
R. Gupta, M. Khomami Abadi, J. A. Cárdenes Cabré, F. Morreale, T. H. Falk, N. Sebe.

To get access to the paper, please click here!



The dataset used in the paper as the showcase for the cross-user QAMAF:

To get access to the data, please visit the 'Data and Documentation' page!