Maddsmr_shortclip912.mp4 (2024-2026)

The video file maddsmr_shortclip912.mp4 is a specific stimulus from the BOLD Moments Dataset (BMD), a large-scale fMRI dataset designed to study how the human brain processes short visual events.

Paper & Dataset Overview

The complete paper associated with this dataset and its corresponding video clips was published in Nature Communications (July 2024). The dataset contains 1,102 three-second naturalistic videos sampled from the Moments in Time (MiT) and Memento10k datasets.

The BOLD signal tracks the internal temporal structure of these 3-second events, meaning early and late parts of the signal correspond to early and late parts of the video. The study also identifies specific brain regions in the parietal and high-level visual cortex that correlate with how memorable a video clip is.

🎥 Related Resources

For information on the larger video collection from which these clips are derived, see the MAD Dataset on Hugging Face. Data and pre-trained models (like the TSM ResNet50 used in the study) are available on GitHub.

To help you find more specific details, are you looking for the technical specifications of the video clips (like frame rate or resolution), or the fMRI processing pipeline used in the paper?