The EEG-based Auditory Attention Decoding (AAD) Challenge focuses on directional attention: identifying the direction of the attended speaker in multi-speaker environments from electroencephalography (EEG) signals. Despite significant progress, current AAD studies generalize poorly to unseen subjects and sessions, a limitation often caused by limited data, trial-dependent experimental settings, and the lack of multi-modal stimulus data.
To address these issues and promote robust AAD research, this EEG-AAD Challenge centers on two key problems: cross-subject and cross-session decoding in multi-scenario settings. The ultimate goal is to improve the generalizability of AAD methods and their applicability to real-world scenarios. The competition provides participants with the first multi-modal auditory attention dataset, which includes audio-visual stimuli designed to simulate real-world scenes. The dataset contains approximately 4,400 minutes of EEG signals collected from 40 subjects across two experimental sessions, conducted under both audio-visual and audio-only conditions. We hope this challenge will spur new methodological innovations in AAD and advance generalizable models for practical applications such as hearing aids.