Affective Computing

DECAF DATASET

DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses

In this work, we present DECAF, a multimodal dataset for decoding user physiological responses to affective multimedia content. In contrast to datasets such as DEAP and MAHNOB-HCI, DECAF contains (1) brain signals acquired with a magnetoencephalography (MEG) sensor, which requires little physical contact with the user's scalp and consequently facilitates naturalistic affective responses, and (2) explicit and implicit emotional responses of 30 participants to the 40 one-minute music video segments used in DEAP and to 36 movie clips, thereby enabling comparisons between the EEG and MEG modalities, as well as between movie and music stimuli, for affect recognition. In addition to MEG data, DECAF comprises synchronously recorded near-infrared (NIR) facial videos, horizontal electrooculogram (hEOG), electrocardiogram (ECG), and trapezius electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF's utility, we present (i) a detailed analysis of the correlations between participants' self-assessments and their physiological responses and (ii) single-trial classification results for valence, arousal, and dominance, with performance evaluated against existing datasets. DECAF also contains time-continuous emotion annotations of the movie clips from seven users, which we use to demonstrate dynamic emotion prediction.
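The sketch below illustrates the kind of single-trial affect classification mentioned above: per-trial feature vectors from one modality are classified into low/high valence under leave-one-trial-out cross-validation. The feature matrix, ratings, classifier choice, and dimensions are placeholder assumptions for illustration, not the DECAF file format or the exact pipeline used in the paper.

```python
# Minimal sketch of single-trial binary valence classification, assuming
# per-trial feature vectors have already been extracted from one modality
# (e.g., MEG or peripheral signals). All arrays below are synthetic
# placeholders, not the actual DECAF data.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.standard_normal((36, 50))        # 36 trials (movie clips) x 50 features
ratings = rng.uniform(-2, 2, size=36)    # hypothetical valence self-assessments

# Median split of the self-assessment ratings into low/high valence classes.
y = (ratings > np.median(ratings)).astype(int)

# Leave-one-trial-out cross-validation, a common protocol for single-trial
# affect recognition on small per-subject datasets.
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print("Weighted F1:", f1_score(y, preds, average="weighted"))
```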

To get access to the dataset, please click here!


QAMAF System and Dataset

The recent surge of interest in online multimedia streaming platforms has made massive amounts of multimedia content available to everyday users of such services. This content must be indexed to be searchable and retrievable. Towards this goal, user-centric implicit affective indexing, which detects emotions from psycho-physiological signals such as electrocardiography (ECG), galvanic skin response (GSR), electroencephalography (EEG), and face tracking, has recently gained attention. However, real-world psycho-physiological signals obtained from wearable devices and facial trackers are contaminated by various noise sources that can result in spurious emotion detection. Therefore, in this paper we propose psycho-physiological signal quality estimators for unimodal affect recognition systems. The presented quality-adaptive unimodal affect recognition systems performed adequately in classifying users' affect; however, they suffered high failure rates because bad-quality samples were rejected. To reduce the affect recognition failure rate, we therefore propose a quality-adaptive multimodal fusion scheme. The proposed fusion eliminated failures while classifying users' arousal, valence, and liking with weighted F1-scores significantly above chance in a cross-user scheme; a sketch of the fusion idea is given below. Another finding of this study is that head movements encode users' liking of music snippets. This work also includes the release of the employed dataset, comprising the psycho-physiological signals, their quality annotations, and users' affective self-assessments.
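The following is an illustrative sketch of quality-adaptive decision fusion, assuming each unimodal classifier returns class probabilities together with a signal-quality estimate in [0, 1]. The function name, quality threshold, and weighting scheme are hypothetical assumptions chosen to convey the idea, not the paper's exact formulation.

```python
# Quality-adaptive fusion sketch: modalities whose estimated signal quality
# falls below a threshold are rejected, and the remaining class-probability
# vectors are averaged with quality-based weights.
import numpy as np

def fuse(unimodal_outputs, quality_threshold=0.5):
    """Weighted fusion of per-modality class probabilities.

    unimodal_outputs: list of (probabilities, quality) tuples, one per
    modality (e.g., ECG, GSR, EEG, face tracking).
    Returns the fused class index, or None if every modality is rejected.
    """
    kept = [(p, q) for p, q in unimodal_outputs if q >= quality_threshold]
    if not kept:
        return None  # all modalities rejected -> recognition failure
    probs = np.array([p for p, _ in kept])
    weights = np.array([q for _, q in kept])
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused))

# Toy example: ECG is too noisy and is rejected; EEG and face tracking are
# fused, so the system still produces a decision instead of failing.
outputs = [
    (np.array([0.60, 0.40]), 0.2),   # ECG: low quality, ignored
    (np.array([0.30, 0.70]), 0.9),   # EEG: high quality
    (np.array([0.45, 0.55]), 0.7),   # face tracking: acceptable quality
]
print(fuse(outputs))  # -> 1 (the "high" class in this toy example)
```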

To get access to the dataset, please click here!