Table of Links

Abstract and 1 Introduction
2 MindEye2 and 2.1 Shared-Subject Functional Alignment
2.2 Backbone, Diffusion Prior, & Submodules
2.3 Image Captioning and 2.4 Fine-tuning Stable Diffusion XL for unCLIP
2.5 Model Inference
3 Results and 3.1 fMRI-to-Image Reconstruction
3.2 Image Captioning
3.3 Image/Brain Retrieval and 3.4 Brain Correlation
3.5 Ablations
4 Related Work
5 Conclusion
6 Acknowledgements and References
A Appendix
A.1 Author Contributions
A.2 Additional Dataset Information
A.3 MindEye2 (not pretrained) vs. MindEye1
A.4 Reconstruction Evaluations Across Varying Amounts of Training Data
A.5 Single-Subject Evaluations
A.6 UnCLIP Evaluation
A.7 OpenCLIP BigG to CLIP L Conversion
A.8 COCO Retrieval
A.9 Reconstruction Evaluations: Additional Information
A.10 Pretraining with Less Subjects
A.11 UMAP Dimensionality Reduction
A.12 ROI-Optimized Stimuli
A.13 Human Preference Experiments

A.2 Additional Dataset Information

fMRI responses correspond to normalized single-trial betas output from GLMSingle (Prince et al., 2022). We use preprocessed flattened fMRI voxels in 1.8-mm native volume space corresponding to the "nsdgeneral" brain region, defined by the NSD authors as the subset of voxels in posterior cortex most responsive to the visual stimuli presented (between 13,000 and 16,000 voxels per participant).

MindEye2 was developed using a training and test set of subject 1's data, with the other subjects' data left untouched until the final training of models. The fMRI data from both the training and test sets were normalized using a voxel-wise Z-scoring procedure, with the mean and standard deviation calculated from the training set only. Although the shared1000 test trials are distributed across each subject's scanning sessions, we kept the test set consistent regardless of the number of sessions used for training. We also adjusted the number of training sessions after the normalization step, which keeps the statistical properties of the shared1000 test set consistent across experiments with varying amounts of training data. This may inadvertently give a small normalization advantage to models trained with fewer training sessions, as those models are normalized with additional data not made available for training.
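To make the normalization step concrete, here is a minimal sketch of voxel-wise Z-scoring in which the per-voxel mean and standard deviation come from the training trials only and are then reused for the held-out shared1000 test trials. The function name, array shapes, and the `eps` guard against zero-variance voxels are illustrative assumptions, not the authors' actual pipeline code.

```python
import numpy as np

def zscore_voxelwise(train_betas, test_betas, eps=1e-8):
    """Z-score each voxel using statistics from the training set only.

    train_betas, test_betas: float arrays of shape (n_trials, n_voxels),
    e.g. flattened GLMSingle betas restricted to the "nsdgeneral" mask.
    eps is an assumed guard against zero-variance voxels (not specified
    in the paper).
    """
    mu = train_betas.mean(axis=0, keepdims=True)    # per-voxel mean (train only)
    sigma = train_betas.std(axis=0, keepdims=True)  # per-voxel std (train only)
    train_norm = (train_betas - mu) / (sigma + eps)
    # The shared1000 test trials reuse the training-set statistics, so the
    # test set's statistical properties stay fixed no matter how many
    # training sessions are later used.
    test_norm = (test_betas - mu) / (sigma + eps)
    return train_norm, test_norm

# Hypothetical usage: one subject with ~15,000 nsdgeneral voxels,
# 750 training trials, and the 1,000 shared test trials.
train = np.random.randn(750, 15000).astype(np.float32)
test = np.random.randn(1000, 15000).astype(np.float32)
train_z, test_z = zscore_voxelwise(train, test)
```

Note that normalizing before subsetting the training sessions is what produces the caveat described above: the training-set statistics are estimated from more data than some models are allowed to train on.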
This paper is available on arxiv under CC BY 4.0 DEED license.

Authors:

(1) Paul S. Scotti, Stability AI and Medical AI Research Center (MedARC);
(2) Mihir Tripathy, Medical AI Research Center (MedARC) and a Core contribution;
(3) Cesar Kadir Torrico Villanueva, Medical AI Research Center (MedARC) and a Core contribution;
(4) Reese Kneeland, University of Minnesota and a Core contribution;
(5) Tong Chen, The University of Sydney and Medical AI Research Center (MedARC);
(6) Ashutosh Narang, Medical AI Research Center (MedARC);
(7) Charan Santhirasegaran, Medical AI Research Center (MedARC);
(8) Jonathan Xu, University of Waterloo and Medical AI Research Center (MedARC);
(9) Thomas Naselaris, University of Minnesota;
(10) Kenneth A. Norman, Princeton Neuroscience Institute;
(11) Tanishq Mathew Abraham, Stability AI and Medical AI Research Center (MedARC).