
NOIR: Neural Signal Operated Intelligent Robots: Acknowledgments & References


Too Long; Didn't Read

NOIR is a brain-robot interface (BRI) system that lets humans control robots to perform real-world activities, though it remains limited by decoding speed and raises ethical concerns. While challenges remain in building out its skill library, NOIR's potential for assistive technology and collaborative interaction marks a significant step forward in human-robot collaboration.

Authors:

(1) Ruohan Zhang, Department of Computer Science, Stanford University, Institute for Human-Centered AI (HAI), Stanford University & Equally contributed; [email protected];

(2) Sharon Lee, Department of Computer Science, Stanford University & Equally contributed; [email protected];

(3) Minjune Hwang, Department of Computer Science, Stanford University & Equally contributed; [email protected];

(4) Ayano Hiranaka, Department of Mechanical Engineering, Stanford University & Equally contributed; [email protected];

(5) Chen Wang, Department of Computer Science, Stanford University;

(6) Wensi Ai, Department of Computer Science, Stanford University;

(7) Jin Jie Ryan Tan, Department of Computer Science, Stanford University;

(8) Shreya Gupta, Department of Computer Science, Stanford University;

(9) Yilun Hao, Department of Computer Science, Stanford University;

(10) Ruohan Gao, Department of Computer Science, Stanford University;

(11) Anthony Norcia, Department of Psychology, Stanford University;

(12) Li Fei-Fei, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University;

(13) Jiajun Wu, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University.

Abstract & Introduction

Brain-Robot Interface (BRI): Background

The NOIR System

Experiments

Results

Conclusion, Limitations, and Ethical Concerns

Acknowledgments & References

Appendix 1: Questions and Answers about NOIR

Appendix 2: Comparison between Different Brain Recording Devices

Appendix 3: System Setup

Appendix 4: Task Definitions

Appendix 5: Experimental Procedure

Appendix 6: Decoding Algorithms Details

Appendix 7: Robot Learning Algorithm Details

Acknowledgments

This work is supported in part by NSF CCRI #2120095, ONR MURI N00014-22-1-2740, N00014-23-1-2355, N00014-21-1-2801, AFOSR YIP FA9550-23-1-0127, the Stanford Institute for Human-Centered AI (HAI), Amazon, Salesforce, and JPMC.

References

[1] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35, 2017.


[2] R. Zhang, D. Bansal, Y. Hao, A. Hiranaka, J. Gao, C. Wang, R. Martín-Martín, L. Fei-Fei, and J. Wu. A dual representation framework for robot learning with human guidance. In Conference on Robot Learning, pages 738–750. PMLR, 2023.


[3] L. Guan, M. Verma, S. S. Guo, R. Zhang, and S. Kambhampati. Widening the pipeline in human-guided reinforcement learning with explanation and context-aware data augmentation. Advances in Neural Information Processing Systems, 34:21885–21897, 2021.


[4] H. Admoni and B. Scassellati. Social eye gaze in human-robot interaction: a review. Journal of Human-Robot Interaction, 6(1):25–63, 2017.


[5] A. Saran, R. Zhang, E. S. Short, and S. Niekum. Efficiently guiding imitation learning agents with human gaze. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pages 1109–1117, 2021.


[6] R. Zhang, Z. Liu, L. Zhang, J. A. Whritner, K. S. Muller, M. M. Hayhoe, and D. H. Ballard. AGIL: Learning attention from human for visuomotor tasks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 663–679, 2018.


[7] R. Zhang, A. Saran, B. Liu, Y. Zhu, S. Guo, S. Niekum, D. Ballard, and M. Hayhoe. Human gaze assisted artificial intelligence: A review. In IJCAI: Proceedings of the Conference, volume 2020, page 4951. NIH Public Access, 2020.


[8] Y. Cui, Q. Zhang, B. Knox, A. Allievi, P. Stone, and S. Niekum. The empathic framework for task learning from implicit human feedback. In Conference on Robot Learning, pages 604–626. PMLR, 2021.


[9] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In Conference on Robot Learning, pages 287–318. PMLR, 2023.


[10] W. Huang, C. Wang, R. Zhang, Y. Li, J. Wu, and L. Fei-Fei. Voxposer: Composable 3d value maps for robotic manipulation with language models. In 7th Annual Conference on Robot Learning, 2023.


[11] R. Zhang, F. Torabi, L. Guan, D. H. Ballard, and P. Stone. Leveraging human guidance for deep reinforcement learning tasks. arXiv preprint arXiv:1909.09906, 2019.


[12] R. Zhang, F. Torabi, G. Warnell, and P. Stone. Recent advances in leveraging human guidance for sequential decision-making tasks. Autonomous Agents and Multi-Agent Systems, 35(2):31, 2021.


[13] I. Akinola, Z. Wang, J. Shi, X. He, P. Lapborisuth, J. Xu, D. Watkins-Valls, P. Sajda, and P. Allen. Accelerated robot learning via human brain signals. In 2020 IEEE international conference on robotics and automation (ICRA), pages 3799–3805. IEEE, 2020.


[14] Z. Wang, J. Shi, I. Akinola, and P. Allen. Maximizing bci human feedback using active learning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10945–10951. IEEE, 2020.


[15] I. Akinola, B. Chen, J. Koss, A. Patankar, J. Varley, and P. Allen. Task level hierarchical system for bci-enabled shared autonomy. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages 219–225. IEEE, 2017.


[16] A. F. Salazar-Gomez, J. DelPreto, S. Gil, F. H. Guenther, and D. Rus. Correcting robot mistakes in real time using eeg signals. In 2017 IEEE international conference on robotics and automation (ICRA), pages 6570–6577. IEEE, 2017.


[17] L. Schiatti, J. Tessadori, N. Deshpande, G. Barresi, L. C. King, and L. S. Mattos. Human in the loop of robot learning: Eeg-based reward signal for target identification and reaching task. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 4473–4480. IEEE, 2018.


[18] M. Aljalal, R. Djemal, and S. Ibrahim. Robot navigation using a brain computer interface based on motor imagery. Journal of Medical and Biological Engineering, 39:508–522, 2019.


[19] Y. Xu, C. Ding, X. Shu, K. Gui, Y. Bezsudnova, X. Sheng, and D. Zhang. Shared control of a robotic arm using non-invasive brain–computer interface and computer vision guidance. Robotics and Autonomous Systems, 115:121–129, 2019.


[20] X. Chen, B. Zhao, Y. Wang, S. Xu, and X. Gao. Control of a 7-dof robotic arm system with an ssvep-based bci. International journal of neural systems, 28(08):1850018, 2018.


[21] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He. Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Scientific Reports, 6(1):38565, 2016.


[22] M. Aljalal, S. Ibrahim, R. Djemal, and W. Ko. Comprehensive review on brain-controlled mobile robots and robotic arms based on electroencephalography signals. Intelligent Service Robotics, 13:539–563, 2020.


[23] L. F. Nicolas-Alonso and J. Gomez-Gil. Brain computer interfaces, a review. Sensors, 12(2):1211–1279, 2012.


[24] L. Bi, X.-A. Fan, and Y. Liu. Eeg-based brain-controlled mobile robots: a survey. IEEE transactions on human-machine systems, 43(2):161–176, 2013.


[25] N. M. Krishnan, M. Mariappan, K. Muthukaruppan, M. H. A. Hijazi, and W. W. Kitt. Electroencephalography (eeg) based control in assistive mobile robots: A review. In IOP Conference Series: Materials Science and Engineering, volume 121, page 012017. IOP Publishing, 2016.


[26] E. D. Adrian and B. H. Matthews. The Berger rhythm: potential changes from the occipital lobes in man. Brain, 57(4):355–385, 1934.


[27] C. J. Perera, I. Naotunna, C. Sadaruwan, R. A. R. C. Gopura, and T. D. Lalitharatne. Ssvep based bmi for a meal assistance robot. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 002295–002300. IEEE, 2016.


[28] J. Ha, S. Park, C.-H. Im, and L. Kim. A hybrid brain–computer interface for real-life mealassist robot control. Sensors, 21(13):4578, 2021.


[29] J. Zhang and M. Wang. A survey on robots controlled by motor imagery brain-computer interfaces. Cognitive Robotics, 1:12–24, 2021.


[30] X. Mao, M. Li, W. Li, L. Niu, B. Xian, M. Zeng, and G. Chen. Progress in eeg-based brain robot interaction systems. Computational intelligence and neuroscience, 2017, 2017.


[31] J. Tang, A. LeBel, S. Jain, and A. G. Huth. Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, pages 1–9, 2023.


[32] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X. Zhai, T. Kipf, and N. Houlsby. Simple open-vocabulary object detection with vision transformers, 2022.


[33] D. Zhu, J. Bieger, G. G. Molina, and R. M. Aarts. A survey of stimulation methods used in ssvep-based bcis. Computational intelligence and neuroscience, 2010:1–12, 2010.


[34] R. Kuś, A. Duszyk, P. Milanowski, M. Łabecki, M. Bierzyńska, Z. Radzikowska, M. Michalska, J. Żygierewicz, P. Suffczyński, and P. J. Durka. On the quantification of ssvep frequency responses in human eeg in realistic bci conditions. PloS one, 8(10):e77536, 2013.


[35] L. Shao, L. Zhang, A. N. Belkacem, Y. Zhang, X. Chen, J. Li, and H. Liu. Eeg-controlled wall-crawling cleaning robot using ssvep-based brain-computer interface, 2020.


[36] N. Padfield, J. Zabalza, H. Zhao, V. Masero, and J. Ren. Eeg-based brain-computer interfaces using motor-imagery: Techniques and challenges. Sensors, 19(6):1423, 2019.


[37] S.-L. Wu, C.-W. Wu, N. R. Pal, C.-Y. Chen, S.-A. Chen, and C.-T. Lin. Common spatial pattern and linear discriminant analysis for motor imagery classification. In 2013 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), pages 146–151. IEEE, 2013.


[38] S. Kumar, A. Sharma, and T. Tsunoda. An improved discriminative filter bank selection approach for motor imagery eeg signal classification using mutual information. BMC bioinformatics, 18:125–137, 2017.


[39] Y. Zhang, Y. Wang, J. Jin, and X. Wang. Sparse bayesian learning for obtaining sparsity of eeg frequency bands based feature vectors in motor imagery classification. International journal of neural systems, 27(02):1650032, 2017.


[40] B. Yang, H. Li, Q. Wang, and Y. Zhang. Subject-based feature extraction by using fisher wpd-csp in brain–computer interfaces. Computer methods and programs in biomedicine, 129: 21–28, 2016.


[41] R. Chitnis, T. Silver, J. B. Tenenbaum, T. Lozano-Perez, and L. P. Kaelbling. Learning neurosymbolic relational transition models for bilevel planning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4166–4173. IEEE, 2022.


[42] S. Nasiriany, H. Liu, and Y. Zhu. Augmenting reinforcement learning with behavior primitives for diverse manipulation tasks. In 2022 International Conference on Robotics and Automation (ICRA), pages 7477–7484. IEEE, 2022.


[43] Y. Zhu, J. Tremblay, S. Birchfield, and Y. Zhu. Hierarchical planning for long-horizon manipulation with geometric and symbolic scene graphs. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6541–6548. IEEE, 2021.


[44] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR, 2022.


[45] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. arXiv preprint arXiv:2209.05451, 2022.


[46] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure for language-guided semantic rearrangement of novel objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 6322–6329. IEEE, 2022.


[47] D. Xu, A. Mandlekar, R. Martín-Martín, Y. Zhu, S. Savarese, and L. Fei-Fei. Deep affordance foresight: Planning through what can be done in the future. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6206–6213. IEEE, 2021.


[48] C. Wang, D. Xu, and L. Fei-Fei. Generalizable task planning through representation pretraining. IEEE Robotics and Automation Letters, 7(3):8299–8306, 2022.


[49] S. Cheng and D. Xu. Guided skill learning and abstraction for long-horizon manipulation. arXiv preprint arXiv:2210.12631, 2022.


[50] C. Agia, T. Migimatsu, J. Wu, and J. Bohg. Taps: Task-agnostic policy sequencing. arXiv preprint arXiv:2210.12250, 2022.


[51] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Martín-Martín, C. Wang, G. Levine, M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everyday activities and realistic simulation. In 6th Annual Conference on Robot Learning, 2022.


[52] A. Hiranaka, M. Hwang, S. Lee, C. Wang, L. Fei-Fei, J. Wu, and R. Zhang. Primitive skillbased robot learning from human evaluative feedback. arXiv preprint arXiv:2307.15801, 2023.


[53] O. Khatib. A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal on Robotics and Automation, 3(1):43–53, 1987.


[54] Y. Zhu, A. Joshi, P. Stone, and Y. Zhu. Viola: Object-centric imitation learning for vision-based robot manipulation. In Conference on Robot Learning, pages 1199–1210. PMLR, 2023.


[55] D. Coleman, I. Sucan, S. Chitta, and N. Correll. Reducing the barrier to entry of complex robotic software: a moveit! case study, 2014.


[56] E. Mansimov and K. Cho. Simple nearest neighbor policy method for continuous control tasks, 2018. URL https://openreview.net/forum?id=ByL48G-AW.


[57] S. Nasiriany, T. Gao, A. Mandlekar, and Y. Zhu. Learning and retrieval from prior data for skill-based imitation learning. In 6th Annual Conference on Robot Learning, 2022.


[58] M. Du, S. Nair, D. Sadigh, and C. Finn. Behavior retrieval: Few-shot imitation learning by querying unlabeled datasets, 2023.


[59] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. In 6th Annual Conference on Robot Learning, 2021.


[60] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of machine learning research, 10(2), 2009.


[61] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, M. Assran, N. Ballas, W. Galuba, R. Howes, P.-Y. Huang, S.-W. Li, I. Misra, M. Rabbat, V. Sharma, G. Synnaeve, H. Xu, H. Jegou, J. Mairal, P. Labatut, A. Joulin, and P. Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.


[62] T. Lüddecke and F. Wörgötter. Learning to segment affordances. In 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pages 769–776. IEEE, 2017.


[63] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis. Object-based affordances detection with convolutional neural networks and dense conditional random fields. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5908– 5915. IEEE, 2017.


[64] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to commands for robotic manipulation with deep recurrent neural networks. In IEEE International Conference on Robotics and Automation (ICRA), pages 3782–3788. IEEE, 2018.


[65] T. Mar, V. Tikhanoff, G. Metta, and L. Natale. Self-supervised learning of grasp dependent tool affordances on the icub humanoid robot. In IEEE International Conference on Robotics and Automation (ICRA), pages 3200–3206. IEEE, 2015.


[66] T. Mar, V. Tikhanoff, G. Metta, and L. Natale. Self-supervised learning of tool affordances from 3d tool representation through parallel som mapping. In IEEE International Conference on Robotics and Automation (ICRA), pages 894–901. IEEE, 2017.


[67] H. Luo, W. Zhai, J. Zhang, Y. Cao, and D. Tao. One-shot affordance detection. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI), 2021.


[68] D. Hadjivelichkov, S. Zwane, M. P. Deisenroth, L. Agapito, and D. Kanoulas. One-shot transfer of affordance regions? AffCorrs! In 6th Conference on Robot Learning (CoRL), 2022.


[69] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Martín-Martín, C. Wang, G. Levine, M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everyday activities and realistic simulation. In Conference on Robot Learning, pages 80–93. PMLR, 2023.


[70] S. Katz. Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living. Journal of the American Geriatrics Society, 1983.


[71] S. Srivastava, C. Li, M. Lingelbach, R. Martín-Martín, F. Xia, K. E. Vainio, Z. Lian, C. Gokmen, S. Buch, K. Liu, et al. Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In Conference on Robot Learning, pages 477–490. PMLR, 2022.


[72] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.


[73] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollár, and R. Girshick. Segment anything. arXiv:2304.02643, 2023.


[74] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, et al. Ros: an open-source robot operating system. In ICRA Workshop on Open Source Software, volume 3, page 5. Kobe, Japan, 2009.


[75] B. Blankertz, K.-R. Müller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schlögl, C. Neuper, G. Pfurtscheller, T. Hinterberger, et al. The bci competition 2003: progress and perspectives in detection and discrimination of eeg single trials. IEEE transactions on biomedical engineering, 51(6):1044–1051, 2004.


[76] R. Scherer and C. Vidaurre. Motor imagery based brain–computer interfaces. In Smart Wheelchairs and Brain-Computer Interfaces, pages 171–195. Elsevier, 2018.


[77] A. Ravi, N. H. Beni, J. Manuel, and N. Jiang. Comparing user-dependent and user-independent training of cnn for ssvep bci. Journal of Neural Engineering, 2020.


[78] A. Gramfort, M. Luessi, E. Larson, D. A. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, et al. Meg and eeg data analysis with mne-python. Frontiers in neuroscience, page 267, 2013.


[79] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial eeg during imagined hand movement. IEEE transactions on rehabilitation engineering, 8(4):441–446, 2000.


[80] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10(9):207–244, 2009. URL http://jmlr.org/papers/v10/weinberger09a.html.


[81] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.


This paper is available on arXiv under a CC 4.0 license.