
From Amputee to Cyborg with this AI-Powered Hand 🦾

by Louis Bouchard, April 12th, 2021

Too Long; Didn't Read

Researchers used AI models based on recurrent neural networks (RNNs) to read and accurately decode the amputee's intent to move individual fingers from peripheral nerve activities. The AI models are deployed on an NVIDIA Jetson Nano as a portable, self-contained unit. With this AI-powered nerve interface, the amputee can control a neuroprosthetic hand with life-like dexterity and intuitiveness. I think it is one of the most exciting applications of AI today, and one that can change the lives of many people.

Researchers used AI models based on recurrent neural networks (RNNs) to read and accurately decode the amputee's intent to move individual fingers from peripheral nerve activities.

The AI models are deployed on an NVIDIA Jetson Nano as a portable, self-contained unit. With this AI-powered nerve interface, the amputee can control a neuroprosthetic hand with life-like dexterity and intuitiveness.

Watch the video

►Subscribe to my newsletter: http://eepurl.com/huGLT5

References

[1] Nguyen & Drealan et al. (2021) A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control: https://arxiv.org/abs/2103.13452

[2] Luu & Nguyen et al. (2021) Deep Learning-Based Approaches for Decoding Motor Intent from Peripheral Nerve Signals: https://www.researchgate.net/publication/349448928_Deep_Learning-Based_Approaches_for_Decoding_Motor_Intent_from_Peripheral_Nerve_Signals

[3] Nguyen et al. (2021) Redundant Crossfire: A Technique to Achieve Super-Resolution in Neurostimulator Design by Exploiting Transistor Mismatch: https://experts.umn.edu/en/publications/redundant-crossfire-a-technique-to-achieve-super-resolution-in-ne

[4] Nguyen & Xu et al. (2020) A Bioelectric Neural Interface Towards Intuitive Prosthetic Control for Amputees: https://www.biorxiv.org/content/10.1101/2020.09.17.301663v1.full

Video Transcript

[00:00] In this video, I will talk about a randomly picked application of transformers from the 600 new papers published this week, adding nothing much to the field but improving the accuracy by 0.1 percent on one benchmark by tweaking some parameters.

[00:14] I hope you are not too excited about this introduction, because that was just to mess with transformers' recent popularity. Of course, they are awesome and super useful in many cases, and most researchers are focusing on them, but other things exist in AI that are just as exciting, if not more. You can be sure I will cover exciting advancements of the transformer architecture applied to NLP, computer vision, or other fields, as I think it is very promising, but covering these new papers making slight modifications to them is not as interesting to me.

[00:45] Just as an example, here are a couple of papers shared in March applying transformers to image classification, and since they are all quite similar and I already covered one of them, I think it is enough to have an overview of the current state of transformers in computer vision.

[00:59] Now, let's enter the real subject of this video, which is nothing related to transformers or even GANs. In this case, no hot words at all, except maybe cyberpunk, and yet it's one of the coolest applications of AI I've seen in a while. It attacks a real-world problem and can change the lives of many people. Of course, it's less glamorous than changing your face into an anime character or a cartoon, but it's much more useful. I present to you the Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control, by Nguyen & Drealan et al. [1].

[01:34] Before diving into it, I just wanted to remind you of the free NVIDIA GTC event happening next week, with many exciting news items related to AI, and the Deep Learning Institute giveaway I am running if you subscribe to my newsletter. If you are interested, I talked about this giveaway in much more detail in my previous video. Also, I just wanted to announce that from now on, all new YouTube members will have a specific role on my Discord channel as a thank you for your support.

[02:00] Now, let's jump right into this unique and amazing paper. This new paper applies deep learning to a neuroprosthetic hand to allow real-time control of individual finger movements, all done directly within the arm itself, with as little as 50 to 120 milliseconds of latency and up to 99% accuracy. An arm amputee who lost his hand 14 years ago can move his cyborg fingers just like a normal hand. This work shows that the deployment of deep neural network applications embedded directly on wearable biomedical devices is not only possible, but also extremely powerful. Here, deep learning is used to process and decode nerve data acquired from the amputee to obtain dexterous finger movements.

[02:45] The problem here is that in order to be low-latency, this deep learning model has to run on a portable device with much lower computational power than our GPUs. Fortunately, there have been recent developments in compact hardware for deep learning that fix this issue. In this case, they use the NVIDIA Jetson Nano module, specifically designed to deploy AI in autonomous applications. It allowed the use of GPUs and powerful libraries like TensorFlow and PyTorch inside the arm itself. As they state, this offers "the most appropriate trade-off among size, power, and performance" for their neural decoder implementation, which was the goal of this paper: to address the challenge of efficiently deploying deep learning neural decoders on a portable device, for real-life applications and towards long-term clinical use.

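As a small illustration of how one might sanity-check such a latency budget with PyTorch (which the video mentions runs on the Jetson Nano), here is a sketch of timing a model's forward pass. The model and input are placeholders, and this is my own example rather than anything from the paper.

```python
# Illustrative sketch (not the authors' code): average per-inference latency
# of a decoder's forward pass, as one might measure on an embedded GPU.
import time
import torch

def measure_latency(model: torch.nn.Module, example_input: torch.Tensor, runs: int = 100) -> float:
    """Return the average per-inference latency in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(10):               # warm-up so one-time setup cost is excluded
            model(example_input)
        if example_input.is_cuda:
            torch.cuda.synchronize()      # make sure warm-up work has finished
        start = time.perf_counter()
        for _ in range(runs):
            model(example_input)
        if example_input.is_cuda:
            torch.cuda.synchronize()      # wait for queued GPU work before stopping the clock
        return (time.perf_counter() - start) / runs * 1000.0
```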

[03:35] Of course, there are a lot of technical details that I will not go into, as I am not an expert: how the nerve fibers and bioelectronics connect together, the microchip designs that allow simultaneous neural recording and stimulation, or the implementation of the software and hardware supporting this real-time motor decoding system. You can read a great explanation of these in their papers if you'd like to learn more about them. They are all linked in the description of the video.

[04:02] But let's dive a little more into the deep learning side of this insane creation. Here, their innovation lay in optimizing the deep learning motor decoding to reduce its computational complexity as much as possible for this Jetson Nano platform. This image shows an overview of the data processing flow on the Jetson Nano. At first, the data, in the form of peripheral nerve signals from the amputee's arm, is sent into the platform. Then, it is pre-processed. This step is crucial to cut the raw input neural data into trials and extract their main features in the temporal domain before feeding them to the models. This preprocessed data corresponds to the main features of one second of past neural data from the amputee, cleaned of all noise sources. Then, this processed data is sent into the deep learning model to produce a final output controlling each finger's movement. Note that there are five outputs, one for each finger.

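To make this pre-processing step a bit more concrete, here is a rough Python sketch of slicing a multi-channel nerve recording into one-second windows and extracting simple temporal features. The sampling rate, window length, and feature set are my own assumptions for illustration, not the paper's exact pipeline.

```python
# Illustrative sketch only: window a raw multi-channel nerve recording and
# compute a few simple per-channel temporal features per window.
import numpy as np

def extract_features(raw: np.ndarray, fs: int = 1000, window_s: float = 1.0) -> np.ndarray:
    """raw: (n_channels, n_samples) nerve signal sampled at fs Hz.
    Returns an (n_windows, n_channels * 3) feature matrix."""
    win = int(fs * window_s)
    n_windows = raw.shape[1] // win
    features = []
    for i in range(n_windows):
        segment = raw[:, i * win:(i + 1) * win]
        # Simple temporal-domain features; the paper's actual feature set may differ.
        mav = np.mean(np.abs(segment), axis=1)                          # mean absolute value
        rms = np.sqrt(np.mean(segment ** 2, axis=1))                    # root mean square
        zc = np.mean(np.diff(np.sign(segment), axis=1) != 0, axis=1)    # zero-crossing rate
        features.append(np.concatenate([mav, rms, zc]))
    return np.stack(features)
```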

04:59

to quickly go over the model they used

05:01

as you can see it starts with a

05:03

convolutional layer

05:05

this is used to identify different

05:06

representations of data input

05:09

in this case you can see the 64 meaning

05:11

that there are

05:12

64 convolutions made using different

05:15

filters

05:15

so 64 different representations these

05:18

filters are the network parameters

05:20

learned during training to correctly

05:22

control the hand when finally deployed

[05:25] Then, we know that time is very important in this case, since we want fluid finger movements, so they opted for gated recurrent units, or GRUs, to represent this time-dependency aspect when decoding the data. GRUs allow the model to understand what the hand was doing in the past second, which is first encoded, and what it needs to do next, which is decoded. To keep it simple, GRUs are just an improved version of recurrent neural networks, or RNNs, solving the computational problems RNNs had with long inputs by adding gates that keep only the relevant information from past inputs in the recurrent process, instead of washing it out with every new input. It basically allows the network to decide what information should be passed to the output.

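For reference, a standard GRU cell (this is the textbook formulation, not something specific to this paper) computes its gates and hidden state as:

$$
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \quad &\text{(update gate)}\\
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \quad &\text{(reset gate)}\\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) \quad &\text{(candidate state)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \quad &\text{(new hidden state)}
\end{aligned}
$$

The update gate $z_t$ decides how much of the previous state $h_{t-1}$ is kept versus overwritten by the new candidate $\tilde{h}_t$, and the reset gate $r_t$ controls how much past information feeds into that candidate (some libraries swap the roles of $z_t$ and $1 - z_t$, but the idea is the same). This is exactly the "keep only the relevant information" behaviour described above.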

[06:11] As in recurrent neural networks, the one second of data, here in the form of 512 features, is processed iteratively using the repeated GRU blocks. Each GRU block receives the input at the current step and the previous output to produce the following output. We can see GRUs as an optimization of the basic recurrent neural network architecture. Finally, this decoded information is sent to linear layers, basically just propagating the information and condensing it into probabilities for each individual finger. They studied many different architectures, as you can read in their paper, but this is the most computationally efficient model they could make, yielding an incredible accuracy of over 95 percent for the movement of the fingers.

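To tie the pieces together, here is a minimal PyTorch sketch of a decoder with the same overall shape described above: a convolutional layer with 64 filters, a GRU over the one-second feature sequence, and a linear head producing one output per finger. The channel counts, hidden size, and other details are illustrative guesses, not the authors' implementation.

```python
# Minimal conv -> GRU -> linear finger decoder, following the structure
# described in the video. Layer sizes are illustrative, not the paper's values.
import torch
import torch.nn as nn

class FingerDecoder(nn.Module):
    def __init__(self, n_channels: int = 16, n_fingers: int = 5, hidden: int = 128):
        super().__init__()
        # 64 filters over the input channels, as mentioned in the video.
        self.conv = nn.Conv1d(n_channels, 64, kernel_size=3, padding=1)
        # The GRU walks over the feature sequence, carrying context about
        # what the hand was doing during the past second.
        self.gru = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        # Linear head condenses the decoded state into one value per finger.
        self.head = nn.Linear(hidden, n_fingers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, time_steps) of preprocessed nerve features.
        feats = torch.relu(self.conv(x))     # (batch, 64, time_steps)
        feats = feats.transpose(1, 2)        # (batch, time_steps, 64) for the GRU
        out, _ = self.gru(feats)             # (batch, time_steps, hidden)
        logits = self.head(out[:, -1])       # use the last step's decoded state
        return torch.sigmoid(logits)         # per-finger values in [0, 1]

# Usage sketch: one second of data as an assumed 100-step, 16-channel input.
decoder = FingerDecoder()
probs = decoder(torch.randn(1, 16, 100))     # -> tensor of shape (1, 5)
```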

[06:56] Now that we have a good idea of how the model works and know that it's accurate, some questions are still left, such as: what does the person using it feel about it? Does it feel real? Does it work? In short, is this similar to a real arm? As the patient himself said: "I feel like once this thing is fine-tuned, as finished products that are out there, it will be more lifelike functions, to be able to do everyday tasks without thinking of what position the hand is in, or what mode I have the hand programmed in. It's just like, if I want to reach and pick up something, I just reach and pick up something, knowing that it's just like my able hand, for every function. I think we will get there, I really do."

[07:38] Please just take one more minute of your time to watch this short, touching video where the amputee uses the hand and shares his honest feedback.

[07:50] "Is it pleasurable playing with it?" "Oh yeah, it's just really cool. Like, this is... this is crazy cool."

[08:00] To me, these are the most incredible applications that we can work on with AI. It directly helps real people improve their quality of life, and there's nothing better than that. I hope you enjoyed watching this video, and don't forget to subscribe to the channel to stay up to date with artificial intelligence news. Thank you for watching, and as he just said in the video, I will say the same about AI in general: this is crazy cool.