AFFEC Multimodal Dataset
Stored in:
| Title: | AFFEC Multimodal Dataset |
|---|---|
| Authors: | Jamshidi Seikavandi, Meisam; Dixen, Laurits; Burelli, Paolo (IT University of Copenhagen; orcid:0000-0002-1271-) |
| Publisher information: | Zenodo |
| Publication year: | 2025 |
| Collection: | Zenodo |
| Keywords: | Expressed Emotion/classification, Emotions/classification, Expressed Emotion/physiology, Electroencephalography, Electroencephalography/classification, Face/physiology, Social Interaction, Social Interaction/classification, Galvanic Skin Response, Galvanic Skin Response/physiology, Pupil/physiology, Eye Tracker |
| Description: | AFFEC (Advancing Face-to-Face Emotion Communication) is a multimodal dataset for emotion recognition research. It captures dynamic human interactions through electroencephalography (EEG), eye tracking, galvanic skin response (GSR), facial movements, and self-annotations, enabling the study of felt and perceived emotions in real-world face-to-face interactions. The dataset comprises 84 simulated emotional dialogues, 72 participants, and over 5,000 trials, annotated with more than 20,000 emotion labels. It follows the Brain Imaging Data Structure (BIDS) format. Root folder: sub-* (individual subject folders, e.g. sub-aerj, sub-mdl, sub-xx2), dataset_description.json (general dataset metadata), participants.json and participants.tsv (participant demographics and attributes), task-fer_events.json (event annotations for the FER task), README.md (documentation). Each subject folder contains: behavioral data (beh/) with physiological recordings (eye tracking, GSR, facial analysis, cursor tracking) in JSON and TSV formats; EEG data (eeg/) with recordings in .edf and corresponding metadata in .json; event files (*.tsv) with trial event data for the emotion recognition task; and channel descriptions (*_channels.tsv) with EEG channel information. Data modalities: (1) eye tracking — 16 channels (fixation points, left/right eye gaze coordinates, gaze validity), 62 Hz, 5,632 trials, e.g. sub-*_task-fer_run-0_recording-gaze_physio.json; (2) pupil — 21 channels (pupil diameter, eye position, pupil validity flags), 149 Hz, 5,632 trials, e.g. sub-*_task-fer_run-0_recording-pupil_physio.json; (3) cursor tracking — 4 channels (cursor X, cursor Y, cursor state), 62 Hz, 5,632 trials, e.g. sub-*_task-fer_run-0_recording-cursor_physio.json; (4) face analysis — over ... |
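The description above names per-modality recording files that follow the BIDS naming pattern (e.g. `sub-*_task-fer_run-0_recording-gaze_physio.json`). As a minimal sketch of how such paths can be composed when iterating over subjects, the helper below is illustrative and not part of the dataset's own tooling; the default entity values (`task="fer"`, `run=0`) come from the filenames shown in the description:

```python
def physio_filename(subject: str, recording: str,
                    task: str = "fer", run: int = 0,
                    ext: str = "json") -> str:
    """Compose a BIDS-style physio filename matching the pattern
    sub-<label>_task-<task>_run-<run>_recording-<name>_physio.<ext>.
    This helper is a hypothetical convenience, not dataset code."""
    return (f"sub-{subject}_task-{task}_run-{run}"
            f"_recording-{recording}_physio.{ext}")

# Eye-tracking sidecar for subject 'aerj' (a subject label listed above):
print(physio_filename("aerj", "gaze"))
```

The same call with `recording="pupil"` or `recording="cursor"` yields the other physio sidecar names listed in the description.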
| Publication type: | dataset |
| Language: | unknown |
| Relation: | https://zenodo.org/communities/itubrainlab/; https://zenodo.org/records/14794876; oai:zenodo.org:14794876; https://doi.org/10.5281/zenodo.14794876 |
| DOI: | 10.5281/zenodo.14794876 |
| Availability: | https://doi.org/10.5281/zenodo.14794876 https://zenodo.org/records/14794876 |
| Rights: | Creative Commons Attribution 4.0 International ; cc-by-4.0 ; https://creativecommons.org/licenses/by/4.0/legalcode |
| Document code: | edsbas.96D50C93 |
| Database: | BASE |