Understanding Brain-Robot Interfaces (BRIs) by @escholar

Too Long; Didn't Read

This section provides insights into brain-robot interfaces (BRI), focusing on EEG-based signal decoding methods such as SSVEP and motor imagery. Discover how these signals enable versatile human-robot interaction, supported by advancements in adaptive algorithms, propelling the development of intelligent robotics and efficient collaboration.


Authors:

(1) Ruohan Zhang, Department of Computer Science, Stanford University, Institute for Human-Centered AI (HAI), Stanford University & Equally contributed; [email protected];

(2) Sharon Lee, Department of Computer Science, Stanford University & Equally contributed; [email protected];

(3) Minjune Hwang, Department of Computer Science, Stanford University & Equally contributed; [email protected];

(4) Ayano Hiranaka, Department of Mechanical Engineering, Stanford University & Equally contributed; [email protected];

(5) Chen Wang, Department of Computer Science, Stanford University;

(6) Wensi Ai, Department of Computer Science, Stanford University;

(7) Jin Jie Ryan Tan, Department of Computer Science, Stanford University;

(8) Shreya Gupta, Department of Computer Science, Stanford University;

(9) Yilun Hao, Department of Computer Science, Stanford University;

(10) Ruohan Gao, Department of Computer Science, Stanford University;

(11) Anthony Norcia, Department of Psychology, Stanford University;

(12) Li Fei-Fei, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University;

(13) Jiajun Wu, Department of Computer Science, Stanford University & Institute for Human-Centered AI (HAI), Stanford University.

Abstract & Introduction

Brain-Robot Interface (BRI): Background

The NOIR System

Experiments

Results

Conclusion, Limitations, and Ethical Concerns

Acknowledgments & References

Appendix 1: Questions and Answers about NOIR

Appendix 2: Comparison between Different Brain Recording Devices

Appendix 3: System Setup

Appendix 4: Task Definitions

Appendix 5: Experimental Procedure

Appendix 6: Decoding Algorithms Details

Appendix 7: Robot Learning Algorithm Details

2 Brain-Robot Interface (BRI): Background

Since Hans Berger’s discovery of EEG in 1924, several types of devices have been developed to record human brain signals. We chose non-invasive, saline-based EEG for its low cost, its accessibility to the general population, its signal-to-noise ratio, its temporal and spatial resolution, and the variety of signals that can be decoded (see Appendix 2). EEG captures the spontaneous electrical activity of the brain using electrodes placed on the scalp. EEG-based BRI has been applied to prosthetics, wheelchairs, and navigation and manipulation robots; for comprehensive reviews, see [22–25]. We utilize two types of EEG signals frequently employed in BRI: steady-state visually evoked potential (SSVEP) and motor imagery (MI).


SSVEP is the brain’s exogenous response to a periodic external visual stimulus [26], wherein the brain generates periodic electrical activity at the same frequency as the flickering stimulus. Applications of SSVEP in assistive robotics often involve flickering LED lights physically affixed to different objects [27, 28]. Attending to an object (and its attached LED light) increases the EEG response at that stimulus frequency, allowing the object’s identity to be inferred. Inspired by prior work [15], our system instead uses computer vision techniques to detect and segment objects, attaches a virtual flickering mask to each object, and displays the masks to participants for selection.
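To make the frequency-matching idea concrete, the sketch below shows a common way SSVEP targets are identified: canonical correlation analysis (CCA) between a multichannel EEG window and sinusoidal reference signals at each candidate flicker frequency. This is a generic illustration, not necessarily the exact decoder used in the paper; the function names and parameters (sampling rate, number of harmonics) are assumptions for the example.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between X (n_samples, p) and Y (n_samples, q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)  # orthonormal bases of each column space
    qy, _ = np.linalg.qr(Y)
    # Singular values of qx^T qy are the canonical correlations.
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference signals at a flicker frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)  # (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg, fs, candidate_freqs):
    """eeg: (n_samples, n_channels). Returns the index of the best-matching
    flicker frequency, i.e. the attended object's stimulus."""
    scores = [cca_max_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
              for f in candidate_freqs]
    return int(np.argmax(scores))
```

In a full system, each segmented object would flicker at a distinct frequency, and the index returned by `classify_ssvep` would map back to the selected object.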


Figure 2: NOIR has two components: a modular pipeline for decoding goals from human brain signals, and a robotic system with a library of primitive skills. The robots can learn to predict human intended goals, reducing the human effort required for decoding.


Motor Imagery (MI) differs from SSVEP in its endogenous nature: it requires individuals to mentally simulate specific actions, such as imagining oneself manipulating an object. The decoded signals can indicate a human’s intended way of interacting with the object. This approach is widely used for rehabilitation and for navigation tasks [29] in BRI systems, but it often suffers from low decoding accuracy [22].
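A standard MI decoding pipeline (again, a generic illustration rather than the paper's exact method) extracts Common Spatial Pattern (CSP) filters that maximize the variance ratio between two imagined-movement classes, then classifies log-variance features. The trial shapes and component counts below are assumptions for the sketch.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_components=2):
    """Common Spatial Patterns for two MI classes.
    trials_*: lists of (n_channels, n_samples) arrays."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: eigenvectors ordered by variance ratio Ca / (Ca + Cb).
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Keep filters from both ends: most discriminative for each class.
    picks = np.concatenate([order[:n_components], order[-n_components:]])
    return vecs[:, picks].T  # (2 * n_components, n_channels)

def log_var_features(trial, W):
    """Log of normalized variance of each spatially filtered signal."""
    z = W @ trial
    v = z.var(axis=1)
    return np.log(v / v.sum())
```

The features from `log_var_features` would then feed a simple classifier (e.g. LDA or nearest-centroid) to decide which movement the participant imagined; the low accuracy noted above is one motivation for keeping the robot-side skill library simple and letting the robot learn to anticipate goals.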


Much existing BRI research focuses on the fundamental problem of brain signal decoding, while several studies instead focus on making robots more intelligent and adaptive [13–17, 30]. Inspired by this line of work, we leverage few-shot policy learning algorithms to enable robots to learn human preferences and goals. This minimizes the need for extensive brain signal decoding, streamlining the interaction and improving overall efficiency.
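One minimal way such goal anticipation could work, sketched below purely as a hypothetical illustration (the class, thresholds, and goal representation are all assumptions, not the paper's algorithm), is retrieval over previously decoded (state, goal) pairs: when the current state resembles one seen before, the robot proposes the cached goal instead of requiring a full round of brain-signal decoding.

```python
import numpy as np

class IntentionPredictor:
    """Hypothetical few-shot goal predictor: nearest-neighbor retrieval
    over past (state embedding -> decoded goal) pairs."""

    def __init__(self):
        self.states, self.goals = [], []

    def record(self, state, goal):
        """Store a state embedding and the goal the human decoded for it."""
        self.states.append(np.asarray(state, dtype=float))
        self.goals.append(goal)

    def predict(self, state, threshold=1.0):
        """Return a cached goal if a sufficiently similar state was seen
        before; otherwise None (fall back to full brain-signal decoding)."""
        if not self.states:
            return None
        dists = [np.linalg.norm(np.asarray(state, dtype=float) - s)
                 for s in self.states]
        i = int(np.argmin(dists))
        return self.goals[i] if dists[i] < threshold else None
```

The design choice here mirrors the text: decoding stays in the loop as the fallback, and learning only short-circuits it when the robot's prediction is confident, so the human retains final control.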


Our study is grounded in substantial advancements in both brain signal decoding and robot learning. Many existing BRI systems target only one or a few specific tasks. To the best of our knowledge, no previous work has presented an intelligent, versatile system capable of successfully executing a wide range of complex tasks, as demonstrated in our study.


This paper is available on arxiv under CC 4.0 license.