
How Emerging Tech Will Revolutionize The Life of Physically Challenged People

by Amit Kumar, January 24th, 2020



The Age of Exciting Opportunities

This article looks at technologies that are already transforming the living conditions of millions of disabled people around the world, at least those who have the financial wherewithal to use them.

How Technology is Making the Impossible Possible

“The blind see, the deaf hear, the lame walk again and the dumb speak.”
-Matthew 11:5

No, I’m not getting all-spiritual on you!

What I am saying is that today, technology, artificial intelligence, machine learning, human-computer interfacing, bio-implants, cybernetics, and bio-prostheses have combined to help, quite literally, “the blind see, the deaf hear, the lame walk and the dumb speak.”

So how do I show you these remarkable achievements?

I thought, let’s go practical.

We are going to look at four real-life innovations: modern technology through which the blind are enabled to see (well, more like audio assistance in most cases), the deaf are given hearing, the lame are given mobility, and the dumb are given speech.
I haven’t been this excited in a very long time!

Let’s get started!

The Blind Will See

There are two sides to this coin. With the current state-of-the-art technology, there are two ways in which the blind will ‘see’. The first has more to do with actual sight – bionic eyes.

(From https://circuitdigest.com/article/bionic-eye)

A Bionic Eye is a technology fundamentally built around computer vision, human-computer interfacing, neural circuitry implants, and the receptor sensors themselves. Basically, it allows the light entering the bionic eye to be processed in a manner similar to the way the natural eye works.

This is a considerable feat of engineering, but the human eye is such a complex and advanced entity that modelling it is a formidable challenge. However, these bionic eyes have restored basic eyesight to those suffering from illnesses such as retinitis pigmentosa and macular degeneration.

The eyesight restored is very primitive. However, it is a massive leap from seeing nothing at all, and technology will only improve this state-of-the-art device as time progresses.
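To make the idea concrete, here is a rough Python sketch of the kind of signal path a retinal implant uses: a camera frame is reduced to one stimulation level per electrode. The grid size and current scaling below are invented for illustration; real devices involve far more image processing and per-patient calibration.

```python
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid=(6, 10)) -> np.ndarray:
    """Reduce a camera frame to one stimulation level per electrode."""
    h, w = frame.shape
    gh, gw = grid
    # Average-pool the image down to one intensity per electrode.
    pooled = frame[:h - h % gh, :w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Map intensity [0, 1] to a (hypothetical) stimulation current in microamps.
    return pooled * 100.0

frame = np.zeros((60, 100))
frame[:, 50:] = 1.0   # a bright right half of the visual field
stim = frame_to_stimulation(frame)
print(stim.shape)     # (6, 10): one value per electrode
```

The coarse 6x10 grid is why restored vision is so primitive: the patient perceives a handful of bright spots, not a full image.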

An AI Assistant to the Blind

What if I told you that there was already an app installable on your phone that describes everything your phone camera points to? With the right configuration, this will change everything as far as a blind person is concerned.

With this app, which uses image recognition, computer vision, and deep neural networks, a blind person has, for all purposes, a digital ‘friend’ who can describe everything around them, be it the denomination of currency notes or the expression of a person’s face. For more, see below.

EyeSense Is A New AI App That Helps the Blind Discover the World  

ID Labs developed EyeSense, an Artificial Intelligence application that helps blind and visually impaired people discover their surroundings, while also enhancing human-to-human connection.
The AI app is also of great value for individuals with mild cognitive impairment – including impairments of memory, language, and thinking skills. EyeSense uses deep-learning-based computer vision to recognize objects and special facial features – like smiles and winks – with no need for an internet connection.
Just point the EyeSense camera towards an object of interest and listen to the app identify it. EyeSense has the ability to easily “learn” new objects and recognize them from many different viewpoints in real-time. This can be done with no internet connection and with an easy training process.
The design philosophy behind EyeSense lies in the ability to personalize each user’s experience, and to enhance human-to-human interaction.

EyeSense is thus a remarkable technology-assisted enabler for the blind, for two simple reasons:

  1. It works offline.
  2. It is capable of user personalization.
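To illustrate the offline-learning idea, here is a toy sketch of how an on-device recognizer might “learn” new objects without an internet connection: store an average feature vector per label, then classify new images by nearest centroid. The embed() function below is a crude stand-in for a real deep-learning feature extractor; EyeSense’s actual pipeline is not public.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder "embedding": a coarse intensity histogram. A real app
    # would run the image through a trained neural network instead.
    hist, _ = np.histogram(image, bins=8, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

class OfflineRecognizer:
    def __init__(self):
        self.centroids = {}  # label -> mean embedding

    def learn(self, label: str, images: list):
        # "Easy training": average the embeddings of a few example images.
        vecs = np.stack([embed(img) for img in images])
        self.centroids[label] = vecs.mean(axis=0)

    def identify(self, image: np.ndarray) -> str:
        # Classify by nearest stored centroid.
        v = embed(image)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(v - self.centroids[lbl]))

rng = np.random.default_rng(0)
rec = OfflineRecognizer()
# "Teach" it two synthetic object classes: dark vs. bright images.
rec.learn("dark object", [rng.uniform(0.0, 0.3, (32, 32)) for _ in range(5)])
rec.learn("bright object", [rng.uniform(0.7, 1.0, (32, 32)) for _ in range(5)])
print(rec.identify(rng.uniform(0.0, 0.3, (32, 32))))  # prints "dark object"
```

Everything happens on the device, which is exactly why the offline property matters so much for a blind user.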

Human-Computer Interfaces are going to change the ways we think, live, and work. All that you’ve seen till now is just the beginning. A revolution will take place, giving humans computer-like abilities and computers human-like abilities. It’s a very exciting prospect!

Home Automation – the Role of Google Home and Amazon Echo

AI devices that manage homes, like Amazon Echo, provide unprecedented access to the entire world for the visually impaired. With Google Home, for example, one can surf the net, listen to songs, send messages, set alarms, start and stop home appliances, and access radio channels. And much, much more.

And this home automation task list is growing bigger every single day. What’s more, the cheapest of these devices, the Amazon Echo Dot, is just 49 USD! The world is changing. Don’t let a blind person remain ignorant of these technological advances – educate, educate, educate.
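For a flavour of what happens after the assistant transcribes your speech, here is a toy command router that maps an utterance to a home-automation intent. Real assistants use trained natural-language-understanding models; the keyword rules here are purely illustrative.

```python
# Map each home-automation intent to trigger keywords (illustrative only).
INTENTS = {
    "alarm": ["alarm", "wake me"],
    "music": ["play", "song", "music"],
    "lights": ["light", "lamp"],
}

def route(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

print(route("Play some music please"))   # music
print(route("Set an alarm for 7 am"))    # alarm
```

For a blind user, this voice-in, action-out loop replaces an entire screen-based interface.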

The Deaf Will Hear

Now this has some precedent. Blindness is perhaps the most severe sensory disability, but deafness is profoundly limiting as well. How can AI possibly improve the lives of the deaf?

As it turns out, it can. And massively so!

Speech Recognition, Captioning, and Conversion of ASL to Spoken English

Microsoft has partnered with several industry leaders to bring an audio-enabled classroom to several universities in the United States and abroad.

Perhaps Microsoft’s best move was to appoint a wonderful leader – and a deaf person herself – Jenny Lay-Flurrie, who has brought wonderful innovations to the deaf community using speech recognition technology and automatic speech captioning.

This makes it possible for the deaf to ‘hear’ what people are saying, using apps that translate multiple spoken languages into English text readable on a phone or laptop. In fact, the technology is even capable of converting American Sign Language (ASL) to English in real-time!

Bionic Ears

You might say, all right, Thomas. AI translates speech to visual text. But you promised that the deaf would hear. Actually hear!

Yes - I did.

Have a look at one of the latest discoveries in hearing aids – bionic ears.

A quote from http://www.bionicsinstitute.org/stories/bionic-ears-and-how-they-change-the-brain/:

30 years ago, the potential for deaf children to communicate verbally was greatly diminished. Now with bionic ears, and educational support, people with even a profound hearing loss can speak clearly and confidently – many even learn multiple languages.
5-year-old bilateral cochlear implant recipient Alana Brown has a passion for music and dancing. Alana was born profoundly deaf and received her first cochlear implant at 16 months and her second at 22 months. Alana’s family have written a poem about the change this made in their lives:
We cried the day that we found out,
she could not hear a whisper even a shout.
Sounds of music and birds that sing
she couldn’t hear a single thing.
Now a miracle has made her hear,
it brought us hope and joy and many more tears.
Now our girl can hear these things,
can dance and move and even sing
Doctors and audiologists observed that children implanted early in life like Alana respond very well to communications training.
This was thought to be the result of brain plasticity – the ability of the brain to re-organize itself by forming new connections between brain cells (neurons) via new experiences.
In the case of the bionic ear, the new experience is sound, from the device stimulating the auditory nerve in the cochlea.
And this is not even counting the many advances that have taken place in the field of hearing-aid-assisted technologies.

Is anyone you know deaf? Well –

The only barrier between them and hearing is finance.

Nothing else!

Now you may have a question – how are all these advances taking place? What is the underlying technology behind all these developments? The answer is artificial neural networks or ANNs for short. What is so special about them that enables them to do all these seemingly impossible things? I promise to devote one section to that at the end of this article. For now, let’s move on!

The Lame Will Walk

This is one subject that is – well – easy to analyse. Being lame is one of the most serious disabilities a person can have. There are varying degrees of lameness. But to show how AI can contribute to overcoming this obstacle – well – have a look below!

Bio-Prostheses

How can AI help the lame to walk? The answer in one word – robotics!
Well, to be more accurate – Human-Computer Interfacing through electrical chips and robot appendages wherever the missing limbs may be.

There is even a video of a girl actually playing the violin with a robotic arm. I won’t embed it, but I am giving the link below so that you, the reader, will understand the amazing possibilities technology and AI offer someone missing any part of their body. (And yes, the fuzzy logic control systems used for each limb – leg, arm, hand, foot – instead of classical PID controllers, are AI too!)

The Miracle Violinist That Had Judges in Tears - Manami Ito’s World’s Best Audition

Prostheses are essentially synthetic duplicates used in place of a missing limb. To connect a limb to the brain so that the patient can use it properly, a little computer circuitry is needed, as well as a knowledge of the brain.

The wonder between our ears that all human beings on Earth possess is essentially an electromagnetic machine, with really, really low voltages. Sending signals to and from the brain is possible using intricate bio-electric circuitry. How does this control system work? You guessed it – artificial neural networks (in most cases).

Don’t confuse ANNs with real biological neurons. The equipment interfacing with the brain’s electromagnetic signals is a piece of computer circuitry – an embedded microprocessor. The control system that maps the brain’s signals to the movements of the prosthesis – which is a pattern recognition problem – is where you will find the ANN.
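To make that pattern-recognition step concrete, here is a toy decoder: a single artificial neuron (logistic regression) learns to map features of a simulated muscle signal to a movement command. The signal model and features below are invented for demonstration; real prosthesis controllers are far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(signal):
    # Two classic EMG-style features: mean absolute value and
    # zero-crossing rate.
    return np.array([np.mean(np.abs(signal)),
                     np.mean(np.diff(np.sign(signal)) != 0)])

def sample(gesture):
    # "rest" produces a weak signal, "grip" a strong one (toy model).
    amp = 0.2 if gesture == "rest" else 1.0
    return amp * rng.standard_normal(200)

gestures = ["rest", "grip"] * 50
X = np.stack([features(sample(g)) for g in gestures])
y = np.array([0 if g == "rest" else 1 for g in gestures])

# Train a single neuron with plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted P(grip)
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * np.mean(p - y)

def decode(signal):
    # Positive pre-activation means "grip", negative means "rest".
    return "grip" if features(signal) @ w + b > 0 else "rest"

print(decode(sample("grip")), decode(sample("rest")))
```

A real controller stacks many such units into a full network and maps the output onto motor commands, but the principle is the same: learn the mapping from signal to intent.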

Computer circuits and chips are implanted alongside the brain’s neurons so that the interfacing works correctly.

The Dumb will Speak

Now this is one of the most well-known and well-documented cases where those incapable of speech were given the ability to communicate through computers and technology. We will take one of the most celebrated examples – the cosmologist Stephen Hawking, who was both unable to speak and paralyzed.

Stephen William Hawking (born 1942) was a theoretical physicist. In 1963, he was diagnosed with a progressive motor neuron disease (MND; also known as amyotrophic lateral sclerosis, "ALS", or Lou Gehrig's disease) that gradually paralysed him over the decades.

Even after the loss of his speech, he was still able to communicate through a speech-generating device, initially through use of a hand-held switch, and eventually by using a single cheek muscle. He died on 14 March 2018 at the age of 76, after living with the disease for more than 50 years.

What was so remarkable about Stephen Hawking was his incredible productivity despite his illness. Even while completely paralysed and unable to move, let alone talk, he published several books, authored countless research papers (the last published in 2018, just before his death), changed the world’s understanding of the universe and its history, and became one of the most iconic of all British celebrities. His most famous book, A Brief History of Time, sold over 9 million copies and was translated into several languages.

How did Hawking Communicate?

As the disease spread, Hawking became less mobile and began using a wheelchair, and finally was totally paralysed barring one muscle in his left cheek. A speech-generating device, combined with a software program, served as his electronic voice, allowing Hawking to select his words by moving the muscles in his cheek.

Think about it. He supervised 39 PhD students and won practically every award available to scientists worldwide, with an illness so crippling that he was given a life expectancy of two years in 1963. Once again it was AI to the rescue: a speech generator driven by cheek gestures, with machine-learning word prediction, helped him speak.
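Word prediction is what made such an interface usable at a rate of one or two selections per minute. Here is a toy bigram predictor in the same spirit; Hawking’s actual system used a far larger language model, and the tiny corpus below is a stand-in.

```python
from collections import Counter, defaultdict

# A stand-in corpus; a real system trains on the user's own writings.
corpus = ("the universe began with a big bang "
          "the universe is expanding "
          "a brief history of time").split()

# Count which word follows which (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(word: str, k: int = 2):
    """Suggest the k most likely next words after `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(suggest("the"))   # ['universe']
```

Each suggestion the user accepts saves a whole round of letter-by-letter selection, which is exactly why prediction matters so much at cheek-muscle speeds.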

The Artificial Neural Network Magic

Here, we’re going to take a little detour and deliver the goods on neural networks as promised.

Why are neural nets so popular and so widely used?

The answer lies in a mathematical result called the Universal Approximation Theorem.

To quote Wikipedia:

In the mathematical theory of artificial neural networks, the universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of R^n, under mild assumptions on the activation function.
The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.
One of the first versions of the theorem was proved by George Cybenko in 1989 for sigmoid activation functions.
Kurt Hornik showed in 1991 that it is not the specific choice of the activation function, but rather the multilayer feedforward architecture itself which gives neural networks the potential of being universal approximators. The output units are always assumed to be linear.

Let me translate that into English for you.

Given enough hidden neurons, a neural network with at least one hidden layer can approximate any continuous function from real inputs to real outputs, to any desired accuracy.

What does that mean?

It means that you can ‘approximate’ any process in the world and extrapolate, predict, interpret, interpolate, forecast and simulate any natural process using a sufficient amount of data.

But that opens up the whole world! Every single process is a function of some sort or the other!

A process with inputs and outputs can be viewed as a function.

What do we know about it?

Nothing!

But that’s the beauty of it – we don’t need to know anything about the function!

Given enough input and output data, a neural network can approximate practically anything you want in the entire universe.
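Here is a minimal numerical demonstration of that claim: one hidden layer of sigmoid units, trained by plain batch gradient descent, learns to approximate sin(x) on [-pi, pi]. The network size and learning rate are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # inputs
y = np.sin(x)                                        # target function

H = 20                                   # hidden neurons
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.2
for _ in range(8000):
    h = sigmoid(x @ W1 + b1)             # hidden-layer activations
    err = h @ W2 + b2 - y                # prediction error
    # Backpropagate through the linear output and the sigmoid layer.
    dh = (err @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ err) / len(x)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * (x.T @ dh) / len(x)
    b1 -= lr * dh.mean(axis=0)

mse = float(np.mean((sigmoid(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")           # well below the 0.5 variance baseline
```

Notice that the code knows nothing about sine waves; it only ever sees input-output pairs, which is the whole point of the theorem.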

To help you understand, here is a woefully incomplete list of the applications of artificial neural networks:

  1. Speech Recognition
  2. Pattern Recognition
  3. Pattern Detection
  4. Image Processing
  5. Self-Driving Cars
  6. Speech Processing (Stephen Hawking?)
  7. Natural Language Processing
  8. Forecasting
  9. Regression
  10. Classification
  11. Fuzzy Control Systems (artificial leg?)
  12. Cancer Detection
  13. Anomaly Detection
  14. Game-Playing AI
  15. Deep Learning
  16. Reinforcement Learning
  17. Scientific Simulation
  18. Voice Recognition
  19. Face Recognition

I could easily add 30 more entries without losing my breath; unfortunately, I’m already over my word limit.

So, I hope you understand now.

Artificial Neural Networks act as Universal Function Approximators.

As simple as that.

Call for Compassion

If there is one word to sum up this entire 21st century, it is indifference.

  • Syrians crucified in the Middle East.
  • People dying of hunger, thirst, and famine in Africa. Somalia. Ethiopia.
  • North Indians butchering fellow countrymen over a cow.
  • And that same cow treated with more respect than women are treated with in India.
  • What’s our reaction?

“What’s there for breakfast today? I just finished reading the paper.”
(My words – from the lips of Thomas Cherickal)

We need to identify and act for the oppressed.

Sponsoring AI devices for the poor who can’t afford it all over the world would be an incredible first step.

Any millionaires reading this?

Mark Zuckerberg?
Bill and Melinda Gates? (yes, their foundation already does wonderful work).
Mukesh Ambani?
Amir Khan?
Roger Federer?
Sachin Tendulkar?
Priyanka Chopra?

I know all of us want to leave a good impact on the world. A lasting legacy.

So, my plea to all the super-rich and super-famous – be role models.

Give technology that removes disabilities to the underprivileged (in person - so that you’re not scammed).
Make it a mega-project.

As for me, I’m going to see how many months I need to earn enough to sponsor a prosthetic for a lame child in Africa.

Be human.
Be genuine.
Be compassionate.

“There is nothing more beautiful than someone who goes out of their way to make life beautiful for others.”
― Mandy Hale, The Single Woman: Life, Love, and a Dash of Sass

Cheers!
