
EEG Controlled Rover – A Brain-Computer Interface

by Prajwal Gatti, August 15th, 2018

Here’s how we developed a Brain-Computer Interface (BCI) implementation to control a rover with just your thoughts, i.e., with a raw stream of real-time EEG data.

A BCI is a device that translates neuronal information from the brain into data capable of controlling external software or hardware, such as a computer or a robotic limb. BCIs are often used as assistive devices for individuals with motor or sensory impairments.

BCIs fall into two classes: invasive and non-invasive. Invasive BCIs use electrodes implanted surgically inside the brain or on its surface, which yields high-quality signals. Non-invasive BCIs use electrodes worn on the head and placed along the scalp; they read much weaker EEG signals but require no surgery.

EEG (electroencephalography) measures voltage fluctuations resulting from ionic currents within the neurons of the brain.

Non-invasive EEG-based BCIs are portable, easy to use, and able to acquire signals in real time, which makes brain-actuated control feasible.

For this project, we use a commercially available EEG-based BCI device called the Emotiv Insight. It provides 5 channels (i.e., five electrodes that pick up signals from the surface of the head), which is sufficient for this build.

Emotiv Insight, a portable EEG reader

The Emotiv Insight connects to the computer via Bluetooth and comes with a native API to access the EEG signals being read from the subject’s brain.

5 channels of EEG waves read from the test subject

Using the API, we can train the Insight to map the EEG data being read to labels such as ‘Move Forward’, ‘Turn Left’, and ‘Turn Right’. For example, when training the Insight to learn the command ‘Move Forward’, we think about accelerating the rover, so that the device associates the EEG signals produced at that time with the label ‘Move Forward’.

Once the training for these mental commands is done, the device learns to recognise these commands when we are thinking about them.

System Architecture

We are going to use a very straightforward architecture, comprising the four components described below.

The EEG device, which is the Emotiv Insight. It picks up the EEG signals from the subject and sends them to the connected computer as raw vectorised data.

A computer to connect with the Emotiv Insight via Bluetooth and communicate through Emotiv’s API. The API (which is built on JSON and WebSockets) provides access to EEG data and also supports training and recognition of mental commands. The documentation for the API can be obtained from the Emotiv Documentation page. The final code is provided and explained below, so we can skip the documentation for now.

A server that listens for requests (i.e., commands such as ‘move forward’, ‘turn left’, etc.) from the computer and sends the corresponding signals to the rover, where these requests are acted upon as movement commands. It is written using the Flask web framework and hosted on the rover’s Raspberry Pi computer.

Raspberry Pi 3

The Raspberry Pi not only makes for an extremely cheap Linux computer, it also makes for an excellent bridge between the Python programming language and robotics.

A rover, which is a very simple build comprising a metallic body, four wheels, four electric motors, an H-bridge, a portable phone battery, and a Raspberry Pi 3 computer. The main focus here is on the Raspberry Pi and the server code that runs on it. Each wheel is connected to its own motor, and all four motors are connected to the H-bridge. The H-bridge connects to the GPIO pins of the Pi, which are controlled via the server through requests made by any remote device.

Top and bottom view of the rover

Talk is cheap. Show me the code.

The code below is available in a GitHub repo as well: [prajwalgatti/EEG_Controlled_Rover]

Server Code

This Python 3 script is the server code hosted on the rover’s Raspberry Pi. It listens for and receives the commands that control the rover.

from flask import Flask
import RPi.GPIO as gpio
import time

Let us go into detail about these libraries being imported.

In line 1, we import Flask. Flask is a web framework written in Python.

In line 2, we import RPi.GPIO. This package provides a class to control the GPIO (general-purpose input/output) channels on the Raspberry Pi.

In line 3, we import the time library, used here to add delays during the execution of the code.


app = Flask(__name__)
tf = 0.8

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(7, gpio.OUT)
    gpio.setup(11, gpio.OUT)
    gpio.setup(13, gpio.OUT)
    gpio.setup(15, gpio.OUT)

We create app, an instance of class Flask, and set tf to 0.8, the number of seconds each movement command runs for.

Next, we have an init() function, which stands for initialisation. The init function sets up all of the GPIO pins that we are using.

# Move rover forward
@app.route('/forward')
def forward():
    init()
    gpio.output(7, False)
    gpio.output(11, True)
    gpio.output(13, True)
    gpio.output(15, False)
    time.sleep(tf)
    gpio.cleanup()
    return 'moved forward'

# Pivot rover to the left
@app.route('/pivot_left')
def pivot_left():
    init()
    gpio.output(7, True)
    gpio.output(11, False)
    gpio.output(13, True)
    gpio.output(15, False)
    time.sleep(tf)
    gpio.cleanup()
    return 'pivoted left'

# Pivot rover to the right
@app.route('/pivot_right')
def pivot_right():
    init()
    gpio.output(7, False)
    gpio.output(11, True)
    gpio.output(13, False)
    gpio.output(15, True)
    time.sleep(tf)
    gpio.cleanup()
    return 'pivoted right'

Next, we define our forward, pivot left, and pivot right functions. Each of these functions sets the GPIO pins to the states needed to make the wheels drive forward, or pivot the rover left or right.

Inside each function, we start with a quick init(), set the pin outputs, and let the motors run for tf (0.8) seconds. After this, we use gpio.cleanup() to deactivate the pins.

We use the route() decorator to tell Flask what URL should trigger our function.
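The same pattern extends to other motions. For example, a hypothetical /reverse route (not part of the original repo) could be added by inverting the pin states used in forward():

# Hypothetical route, not in the original repo: reverse by inverting forward()'s pin states
@app.route('/reverse')
def reverse():
    init()
    gpio.output(7, True)
    gpio.output(11, False)
    gpio.output(13, False)
    gpio.output(15, True)
    time.sleep(tf)
    gpio.cleanup()
    return 'moved backward'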

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

If you run the script without the host argument, you will notice that the server is only accessible from your own computer, not from any other device on the network. This is the default because, in debugging mode, a user of the application can execute arbitrary Python code on your computer.

If you have debug disabled or trust the users on your network, you can make the server publicly available simply by changing the call to the run() method to look like the line above.

Make sure that your HTTP server is listening on 192.168.0.8:5000 (in my case) or everywhere (0.0.0.0:5000).
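Once the server is up, a quick way to confirm everything is wired correctly is to hit one of the routes from another machine on the same network (192.168.0.8 is the Pi’s address in my setup; use your own):

# Quick sanity check from another machine on the network
import requests

print(requests.get("http://192.168.0.8:5000/forward").text)  # expect 'moved forward'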

EEG Control Code

This is written in Python 3 and uses several libraries to communicate with Emotiv’s API. This code runs on the host computer, which has API access. Let us import all the required packages and libraries to establish a connection with the Insight.

# Importing the packages and the libraries
from credentials import *
import json
from websocket import create_connection
import ssl
import time
import requests

Let us see what each of the above packages are needed for.

In line 1, we’ve imported all the contents of credentials.py, which is just another Python file in which we store all the required credentials for API access (_auth, client_secret, and client_id); a sketch of what it might contain follows these notes.

In line 2, we’ve imported the json library. The API uses JSON to make requests and get back the results, so we require the functions of json library to handle data being received and sent to the API.

In line 3, we’ve imported the function to create a web socket connection from the websocket library. WebSockets provide a real-time connection to the underlying API service, designed to be easy to use in both desktop and web-based applications.

In line 4, we’ve imported the ssl library or “Secure Socket Layer” library. This module provides access to Transport Layer Security (often known as “ssl”) encryption and peer authentication facilities for network sockets, both client-side and server-side.

In line 5, we’ve imported the time library, used in this script to add time delays during the execution of the code.

In line 6, we’ve imported the requests library. Requests allows you to send HTTP requests easily from the Python script.
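For reference, credentials.py might look something like the sketch below (the values are placeholders; substitute the ones from your own Emotiv account):

# credentials.py -- placeholder values, replace with your own Emotiv API credentials
client_id = "YOUR_CLIENT_ID"
client_secret = "YOUR_CLIENT_SECRET"
_auth = "YOUR_AUTH_TOKEN"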

Let’s create a connection object using create_connection below.

ws = create_connection("wss://emotivcortex.com:54321", sslopt={"cert_reqs": ssl.CERT_NONE})

The Cortex API service listens on port 54321. This means we can connect to it using the url wss://emotivcortex.com:54321.

Now, let’s open an active session with the Emotiv.

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "createSession",
    "params": {
        "_auth": _auth,
        "headset": "INSIGHT-5A688F16",
        "status": "open"
    },
    "id": 1
}))

print(ws.recv())

We send data to Emotiv’s WebSocket server using ws.send() and receive data from it using ws.recv(). The data being sent here is a JSON request to the API, asking it to create a session using the [createSession](https://emotiv.github.io/cortex-docs/#createsession) method. To send it as a string, we use json.dumps(). The request and its content aren’t something we need to break our heads over; it’s just the snippet the API’s documentation page gives for creating a session. The data received back from the server is printed to the terminal.

Furthermore, we need to subscribe to the system stream.

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "subscribe",
    "params": {
        "_auth": _auth,
        "streams": ["sys"]
    },
    "id": 1
}))

print(ws.recv())

To actually receive data from a session, you have to “subscribe” to it. This is done using the [subscribe](https://emotiv.github.io/cortex-docs/#subscribe) method. Along with the session you want to subscribe to, you have to specify which streams you are interested in. We use the sys stream here to set up training for the mental commands.

Let’s begin training!

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "neutral",
        "status": "start"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(5)
print(ws.recv())
time.sleep(10)
print(ws.recv())

Here we start training the mental commands, beginning with the “neutral” state. Training is initiated by calling the method training and specifying the action as neutral and status as start. We should receive a response specifying "Set up training successfully for action neutral with status start", followed by "MC_Started" and then "MC_Succeeded" a few seconds later. The delay in these responses is handled using the time.sleep() function.

Upon receiving the MC_Started response, the subject wearing the Emotiv Insight should stay very still, imagining the neutral command so the device registers it correctly. MC_Succeeded tells us that the training for the neutral command has been successful.
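The fixed sleeps work, but their timing is a guess. An alternative (not in the original script) is to keep reading from the sys stream until the markers mentioned above appear; a minimal sketch:

# Keep reading until a given marker (e.g. "MC_Started" or "MC_Succeeded") appears
def wait_for(marker):
    while True:
        msg = ws.recv()
        print(msg)
        if marker in msg:
            return msg

wait_for("MC_Started")    # the subject now holds the neutral thought
wait_for("MC_Succeeded")  # the training sample has been recorded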

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "neutral",
        "status": "accept"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(2)
print(ws.recv())

If we’re satisfied with the data we’ve trained, we need to send a request to accept it as the neutral command. With that, we’ve successfully trained the neutral mental command.

We repeat the same steps as above for the rest of the mental commands: push (move rover forward), left (rotate rover left), and right (rotate rover right), changing the action parameter each time, as shown below.
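Since only the action parameter changes between these blocks, the same start/accept sequence could also be wrapped in a loop. Here is a compact sketch (not part of the original script); the individual requests, as they appear in the original code, follow.

# Hypothetical refactor: train the remaining commands with the same start/accept pattern
for action in ["push", "left", "right"]:
    # Start training for this action
    ws.send(json.dumps({
        "jsonrpc": "2.0", "method": "training",
        "params": {"_auth": _auth, "detection": "mentalCommand",
                   "action": action, "status": "start"},
        "id": 1
    }))
    print(ws.recv())
    time.sleep(5)
    print(ws.recv())
    time.sleep(10)   # wait for MC_Started / MC_Succeeded
    print(ws.recv())

    # Accept the recorded training data for this action
    ws.send(json.dumps({
        "jsonrpc": "2.0", "method": "training",
        "params": {"_auth": _auth, "detection": "mentalCommand",
                   "action": action, "status": "accept"},
        "id": 1
    }))
    print(ws.recv())
    time.sleep(2)
    print(ws.recv())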

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "push",
        "status": "start"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(5)
print(ws.recv())
time.sleep(10)
print(ws.recv())

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "push",
        "status": "accept"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(2)
print(ws.recv())

Training for push. Subject imagines the thought ‘move rover forward’ during training.

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "left",
        "status": "start"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(5)
print(ws.recv())
time.sleep(10)
print(ws.recv())

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "left",
        "status": "accept"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(2)
print(ws.recv())

Training for left. Subject imagines the thought ‘rotate rover left’ during training.

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "right",
        "status": "start"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(5)
print(ws.recv())
time.sleep(10)
print(ws.recv())

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "training",
    "params": {
        "_auth": _auth,
        "detection": "mentalCommand",
        "action": "right",
        "status": "accept"
    },
    "id": 1
}))

print(ws.recv())
time.sleep(2)
print(ws.recv())

Training for right. Subject imagines the thought ‘rotate rover right’ during training.

And just like that, we’ve finished the training processes for the required mental commands. Now the script should be able to tell you which of the four trained commands you’re thinking about.

ws.send(json.dumps({
    "jsonrpc": "2.0",
    "method": "subscribe",
    "params": {
        "_auth": _auth,
        "streams": ["com"]
    },
    "id": 1
}))

print(ws.recv())

To obtain the stream of commands we call the method subscribe and subscribe to the com stream, which allows for a stream of mental commands to be read from the headset. While subscribed, you will receive events asynchronously as data comes in.

A single data point received from the stream looks like this:

{"com":["push",0.673717498779297],"sid":"46d18597-7034-40ab-9d6e-d617a89a24ce","time":245.356536865234}

So parsing the data received from the stream is imperative for a clean, readable output.

while True:
    thought = json.loads(ws.recv())["com"][0]
    print(thought)

    if thought == "push":
        url_get = "http://192.168.0.8:5000/forward"
        res = requests.get(url_get)
    elif thought == "left":
        url_get = "http://192.168.0.8:5000/pivot_left"
        res = requests.get(url_get)
    elif thought == "right":
        url_get = "http://192.168.0.8:5000/pivot_right"
        res = requests.get(url_get)

To receive the stream of commands being read, we call ws.recv() in a loop. To relay the commands to the server hosted on the Raspberry Pi (which controls the rover), we make requests to the corresponding URLs.

When requested, these URLs call the corresponding functions in the server code and hence activate the necessary GPIO pins on the Raspberry Pi to move the rover.
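A possible refinement (not in the original code) is to also look at the second element of the com array, the detection’s strength, and ignore weak detections:

# Hypothetical refinement: only act on commands detected with sufficient strength
ROUTES = {"push": "forward", "left": "pivot_left", "right": "pivot_right"}

data = json.loads(ws.recv())
thought, power = data["com"][0], data["com"][1]
if thought in ROUTES and power > 0.5:  # threshold chosen arbitrarily for illustration
    requests.get("http://192.168.0.8:5000/" + ROUTES[thought])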

Setup of the project

Further Reading & More Interesting Things

If you have any thoughts or questions, comment below, send me an email at [email protected], or tweet me @prajwal_gatti.

This post and project were co-developed with Venkatesh Tata; mail him at [email protected].

Suggest something or create an issue at [prajwalgatti/EEG_Controlled_Rover]