EEG-AAD 2026: EEG Auditory Attention Decoding Challenge

Auditory attention decoding (AAD) challenge based on EEG
This challenge focuses on decoding individuals' auditory attention from EEG signals under multi-modal stimuli with multiple sound sources.


Challenge Call

Challenge and Scenario Description

The EEG-based Auditory Attention Decoding (AAD) Challenge focuses on directional attention, aiming to identify the direction of the attended speaker in multi-speaker environments from electroencephalography (EEG) signals. Despite significant progress, a key limitation of current AAD studies is their poor generalization to unseen subjects and sessions, often caused by limited data, trial-dependent settings, and a lack of multi-modal stimulus data.

To address these issues and promote robust AAD research, this EEG-AAD challenge revolves around two key problems in AAD: cross-subject and cross-session tasks in multi-scenario settings. The ultimate goal is to promote the generalizability of methods and their applications in real-world scenarios. This competition will provide participants with the first multi-modal auditory attention dataset that includes audio-visual stimuli designed to simulate real-world scenes. The dataset contains approximately 4,400 minutes of EEG signals collected from 40 different subjects during two experimental sessions conducted under both audio-visual and audio-only scenes. We hope this challenge will lead to new methodological innovations in AAD and the advancement of generalizable models for practical applications such as hearing aids.

Tasks Overview

Details about the Tracks to be completed

Task 1: Cross-subject

Cross-subject Task Illustration

This task challenges participants to build AAD models capable of decoding auditory attention categories from EEG signals of unseen subjects in audio-visual environments. Participants are provided with data from 30 subjects for training and validation, and data from 10 additional subjects for testing. The goal is to build a generalizable cross-subject decoding model, calculate the decoding accuracy for each subject in the test set, and report the average decoding accuracy (%).
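For concreteness, the sketch below illustrates this cross-subject protocol under stated assumptions: the loader `load_subject_epochs`, the subject numbering, the window shapes, and the logistic-regression decoder are placeholders rather than the official data format or baseline, and synthetic data are generated only so the example runs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder loader: replace with the official MM-AAD loader once released.
# Returns (windows, labels): windows of shape (n_windows, 32 channels, n_samples)
# and binary labels (0 = attend left, 1 = attend right). Random data is used
# here only so the sketch runs end to end.
def load_subject_epochs(subject_id, scene="audio-visual"):
    rng = np.random.default_rng(subject_id)
    return rng.standard_normal((100, 32, 128)), rng.integers(0, 2, 100)

train_subjects = range(1, 31)   # 30 subjects for training/validation
test_subjects = range(31, 41)   # 10 held-out subjects for testing

# Pool all training subjects into a single cross-subject model. A logistic
# regression on flattened windows stands in for DARNet or any other decoder.
X_train, y_train = [], []
for s in train_subjects:
    X, y = load_subject_epochs(s)
    X_train.append(X.reshape(len(X), -1))
    y_train.append(y)
clf = LogisticRegression(max_iter=1000)
clf.fit(np.concatenate(X_train), np.concatenate(y_train))

# Score each unseen subject separately, then average across subjects.
per_subject_acc = [
    clf.score(X.reshape(len(X), -1), y)
    for X, y in (load_subject_epochs(s) for s in test_subjects)
]
print(f"Cross-subject accuracy: {100 * np.mean(per_subject_acc):.2f}%")
```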

Task 2: Cross-session

Cross-session Task Illustration

This task requires participants to build AAD models capable of decoding auditory attention categories from EEG signals recorded in a scene (session) that was not seen during training. During the training phase, EEG data from 30 subjects are provided for both the audio-only and audio-visual conditions; the audio-only data are used for training and the corresponding audio-visual data for validation. Additionally, the audio-only data of the 10 test subjects are provided in advance during the training phase. In the testing phase, participants apply the models trained on these data to predict the corresponding audio-visual data from the same subjects. The goal is to build a generalizable single-subject cross-session decoding model, calculate the decoding accuracy for each subject in the test set, and report the average decoding accuracy (%). The focus of this challenge is to evaluate the generalizability of EEG-AAD methods across unseen subjects and sessions, employing more rigorous experimental paradigms and more challenging EEG data.
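A minimal sketch of this cross-session protocol follows; as in the cross-subject sketch above, the loader, the subject numbering, and the logistic-regression decoder are hypothetical stand-ins, not the official pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder loader as in the cross-subject sketch: "scene" selects the
# audio-only or audio-visual session of one subject; random data is returned
# only so the sketch runs.
def load_subject_epochs(subject_id, scene):
    rng = np.random.default_rng(subject_id)
    return rng.standard_normal((100, 32, 128)), rng.integers(0, 2, 100)

test_subjects = range(31, 41)   # placeholder numbering for the 10 test subjects

per_subject_acc = []
for s in test_subjects:
    # Fit a single-subject model on that subject's audio-only session ...
    X_tr, y_tr = load_subject_epochs(s, scene="audio-only")
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_tr.reshape(len(X_tr), -1), y_tr)
    # ... and evaluate it on the same subject's unseen audio-visual session.
    X_te, y_te = load_subject_epochs(s, scene="audio-visual")
    per_subject_acc.append(clf.score(X_te.reshape(len(X_te), -1), y_te))
print(f"Cross-session accuracy: {100 * np.mean(per_subject_acc):.2f}%")
```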

Data Description

Explore the datasets and evaluation metrics for the challenge

We provide a multi-modal AAD (MM-AAD) dataset consisting of EEG data collected from 40 subjects in two settings: audio-only and audio-visual scenes. Each subject was instructed to attend to one of two competing voices, located at 90° to the left or right, for an average of 55 minutes per scene, resulting in approximately 73.3 hours of data in total. Participants will build models to decode the spatial direction (left/right) of attention from the EEG signals.

| Scene        | Stimuli Signal | EEG Channels | Duration | Speaker Gender | Direction |
|--------------|----------------|--------------|----------|----------------|-----------|
| Audio-visual | Audio & Video  | 32           | 55 min   | Male & Female  | ±90°      |
| Audio-only   | Audio          | 32           | 55 min   | Male & Female  | ±90°      |
Audio-visual Scene

Subjects watched videos while wearing EEG caps and attended to a specific speaker according to the given cue. This scene includes 55 minutes of EEG data per subject across all 40 subjects.

Audio-only Scene

Subjects attended to the audio stimuli while fixating on a crosshair; the attention direction (left/right) was cued. This scene includes 55 minutes of EEG data per subject, with attention validated via post-trial questions.
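As an illustration of how such continuous recordings are typically prepared for decoding, the sketch below segments one trial into fixed-length decision windows. The 128 Hz sampling rate, the window length, and the array layout are assumptions; the official preprocessing and file format will be defined with the dataset release.

```python
import numpy as np

def segment_trial(eeg, fs, window_s=1.0, overlap=0.0):
    """Cut one continuous trial (channels x samples, e.g. 32 x N) into
    fixed-length decision windows that each receive one left/right label."""
    win = int(window_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

# Example: 60 s of 32-channel EEG at an assumed 128 Hz sampling rate
# (the actual sampling rate will be specified with the dataset release).
fs = 128
trial = np.random.randn(32, 60 * fs)
windows = segment_trial(trial, fs, window_s=1.0)
print(windows.shape)  # (60, 32, 128)
```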

Note: Access information for the MM-AAD dataset will be announced soon.

Evaluation Metric

Understand how your results will be evaluated

Evaluation Formula

Average Accuracy (%) = (100 / Subs) × Σ_{s=1}^{Subs} (CorrectTestSamples_s / TotalTestSamples_s)

CorrectTestSamples_s: the number of test samples from subject s for which the direction of auditory attention is correctly identified.

TotalTestSamples_s: the total number of test samples from subject s.

Subs: the total number of test subjects in each track.
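A minimal sketch of this metric, assuming the per-subject counts are already available; the function name and the example numbers are illustrative only, not real results.

```python
import numpy as np

def challenge_score(correct, total):
    """Average per-subject decoding accuracy (%): correct[s] / total[s] is the
    accuracy of test subject s, and the final score is the mean over all
    Subs test subjects."""
    per_subject = np.asarray(correct, dtype=float) / np.asarray(total, dtype=float)
    return 100.0 * per_subject.mean()

# Illustrative numbers only: three test subjects.
print(challenge_score(correct=[420, 390, 450], total=[500, 500, 500]))  # 84.0
```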

Registration Guidelines

Follow the steps below to complete your registration

Step 1: Registration

Teams wishing to participate in the challenge should register via the provided Registration Form. Please submit the following details for each participant:

  • Team name
  • Team members' names
  • Organization
  • Email address

Link to the registration form: Registration Form

Step 2: Baseline Code and Dataset

The experiments for both tracks will use the DARNet model (NeurIPS 2024) as the baseline. The baseline code and the relevant research papers can be found at the following links:

Dataset Citation:
Cunhang Fan, Hongyu Zhang, Qinke Ni, Jingjing Zhang, Jianhua Tao, Jian Zhou, Jiangyan Yi, Zhao Lv, Xiaopei Wu. Seeing helps hearing: A multi-modal dataset and a mamba-based dual branch parallel network for auditory attention decoding. Information Fusion, 2025: 102946.

Paper Citation:
Sheng Yan, Cunhang Fan, Hongyu Zhang, Xiaoke Yang, Jianhua Tao, Zhao Lv. DARNet: Dual attention refinement network with spatiotemporal construction for auditory attention detection. Advances in Neural Information Processing Systems, 2024, 37: 31688-31707.

Contact Information

If you have any questions, please contact us at eegaad2026challenge@gmail.com.

Challenge Timeline

The tentative timeline for running the challenge

1. [September 10, 2025] Challenge begins. Release of training data, validation data, baseline paper, and code.
2. [November 10, 2025] Release of testing data.
3. [November 24, 2025] Result submission deadline.
4. [December 1, 2025] Release of challenge results and rankings.
5. [December 7, 2025] 2-page papers due (by invitation only).
6. [January 11, 2026] 2-page paper acceptance notification.
7. [January 18, 2026] Camera-ready 2-page papers due.

Note: All deadlines are 11:59 PM US Pacific Time on the respective day.

License

The MM-AAD dataset is available only for the EEG-AAD 2026 challenge and academic research. Use of the dataset requires compliance with the following conditions:

• Any work using the MM-AAD dataset must include a reference to the dataset.
• For the baseline, please cite the research paper listed on our website.
• You may not use the MM-AAD dataset or any derivative works for other purposes.

All rights not expressly granted to you are reserved by the organizers of this challenge.

Organizers

Cunhang Fan, Anhui University, China
Zhao Lv, Anhui University, China
Jian Zhou, Anhui University, China
Siqi Cai, Harbin Institute of Technology, China
Jing Lu, Nanjing University, China
Jingdong Chen, Northwestern Polytechnical University, China

Secretariat

Xiaoke Yang, Anhui University, China
Xingguang Dong, Anhui University, China
Mengyuan Gao, Anhui University, China
Hongyu Zhang, Anhui University, China
Yuanming Zhang, Nanjing University, China
Yayun Liang, Nanjing University, China