Data and Documentation
Please Read Me!
Please read the following notes before downloading the shared files:
- The data is shared only for research purposes.
- Commercial use of the data is not permitted.
- If you use any part of the shared data in any report, please make sure that you cite the "QAMAF paper" in that report.
Thanks for your consideration!
We use the Google Drive service to share the extracted features and the preprocessed data. To grant you access to the extracted features and/or the preprocessed data, a Google ID associated with your identity is needed (please see the EULA).
The EULA should be downloaded, printed, signed, scanned, and returned via email to qamaf.mhug [at] gmail [dot] com with the subject line "QAMAF access request". Please state your position and your institution in your email. Please use your institutional email account (i.e., not your IBM, Microsoft, Google, etc. account, unless of course you work at IBM, Microsoft, Google, etc.) to submit your access request.
Download the EULA!
The extracted features described in the paper and used for the experimental analysis are shared via this page. We used MATLAB/Python for signal analysis. The shared files have ".mat" and ".csv" extensions and can be read using MATLAB/Octave.
The features and data for trial classification include:
Features from each modality for signal quality estimation (SQE)
Features from each modality for affect recognition
Aggregated expert annotations of signal quality
Users' self-assessments and the order of stimuli presentation
The signal data files are in ".csv" and ".mat" format. The video files* have ".avi" extensions.
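As a minimal sketch of how the ".csv" files can be read in Python, the standard-library csv module is sufficient; the column names below are illustrative assumptions, not the dataset's actual schema. For the ".mat" files, SciPy's scipy.io.loadmat returns the stored variables as a Python dict of arrays.

```python
import csv
import io

# Hypothetical physiology-signal CSV content; the real files follow the
# structure documented with the dataset (column names here are assumed).
sample = "time,ecg,gsr\n0.0,0.12,1.5\n0.004,0.15,1.5\n"

# Parse rows into dicts keyed by the header names.
rows = list(csv.DictReader(io.StringIO(sample)))
ecg = [float(r["ecg"]) for r in rows]

# For the ".mat" files one would instead use, e.g.:
#   from scipy.io import loadmat
#   data = loadmat("some_feature_file.mat")  # dict of variable name -> array
```

In practice you would open the downloaded file with open(path, newline="") rather than an in-memory string; the parsing code is otherwise identical.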
The pre-processed/raw data includes:
Physiological signals, sorted according to user and stimulus IDs
Tracks of facial landmarks and head pose, sorted according to user and stimulus IDs
The employed stimuli
* Due to the large size of the facial videos, we can share them (upon request) through a separate procedure.
We provide details of the shared data to help QAMAF users understand the data structures better and faster.
To download the data and access the documentation, please click here!