The Inevitable Symbiosis of Cybersecurity and AI

by Michael Gradek, December 20th, 2020

While improvements in AI and Deep Learning move forward at an ever more rapid rate, people have started to ask questions. Questions about jobs being made obsolete, questions about the biases baked into neural networks, questions about whether AI will eventually consider humans dead weight, unnecessary for achieving the goals it has been programmed with.

And while those are valid questions, I believe there is an area of AI that has not received as much attention. An area which should be addressed as early as, like, right now.

What are we doing to keep our AI algorithms safe from malicious attacks?

I like to think about this problem using an analogy from the late '90s and early 2000s. Back when dynamic websites were new, we used to code database queries with no regard for security, until SQL injection became the de facto way of breaking into a system.
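
For anyone who missed that era, here's a minimal sketch of the pattern, using Python's built-in sqlite3 module; the table, columns and payload are made up purely for illustration:

```python
# A minimal sketch of the vulnerable pattern versus the safe one.
# The table, columns and payload are invented for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is concatenated straight into the query,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE password = '" + user_input + "'"
).fetchall()

# Safer: a parameterised query treats the input as data, not SQL.
parameterised = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()

print(vulnerable)      # [('alice',)] -- the check was bypassed
print(parameterised)   # []           -- the payload is just a weird password
```

The fix was never exotic: treat untrusted input as data, not as part of the program. We learned that the hard way, after the attacks started.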

We are on the brink of literally trusting our lives to AI algorithms for the first time in history. Think self-driving cars. Tesla is extremely close to deploying full self-driving cars at massive scale. I'm certain that, statistically, the Tesla self-driving car will drive more safely than the average human, but is Tesla "escaping" its self-driving algorithms? Are there vulnerabilities in its systems that malicious attackers could exploit to cause a tragic accident?

Researchers are hard at work trying to understand how different neural networks can be attacked and made to yield unexpected results. This video from the YouTube channel Two Minute Papers is a true eye-opener.

In short: changing a single pixel in an image can be enough to mislead a neural network and make it yield results that are completely off.

Although these so-called "one-pixel attacks" are not trivial to pull off (it takes a lot of analysis to figure out which pixel to change and what colour to set it to), it's not hard to extrapolate and wonder whether current AI systems are vulnerable to these kinds of attacks, especially the ones we trust our lives with. No system is ever 100% safe, so I think it's fair to assume any AI algorithm will have some degree of vulnerability that could potentially be exploited.
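
To make the idea concrete, here is a minimal, self-contained sketch of what a one-pixel attack search might look like. Everything in it is an assumption for illustration: toy_classifier is a stand-in for a real model, and the search is plain random sampling rather than the differential evolution used in the actual research:

```python
# Illustrative one-pixel attack search against a stand-in "model".
import numpy as np

rng = np.random.default_rng(0)
H, W, C, NUM_CLASSES = 32, 32, 3, 10
WEIGHTS = rng.normal(size=(H * W * C, NUM_CLASSES))  # fake model weights

def toy_classifier(image: np.ndarray) -> np.ndarray:
    """Return class probabilities for a single HxWxC image in [0, 1]."""
    logits = image.reshape(-1) @ WEIGHTS
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def one_pixel_attack(image, true_label, trials=2000):
    """Randomly search for one (x, y, colour) change that most lowers the
    model's confidence in true_label."""
    best_conf = toy_classifier(image)[true_label]
    best_change = None
    for _ in range(trials):
        x, y = rng.integers(0, W), rng.integers(0, H)
        colour = rng.random(C)              # candidate RGB value
        candidate = image.copy()
        candidate[y, x] = colour
        conf = toy_classifier(candidate)[true_label]
        if conf < best_conf:
            best_conf, best_change = conf, (x, y, colour)
    return best_change, best_conf

image = rng.random((H, W, C))
label = int(np.argmax(toy_classifier(image)))
change, conf = one_pixel_attack(image, label)
print(f"original confidence: {toy_classifier(image)[label]:.3f}, "
      f"after best single-pixel change: {conf:.3f}")
```

The point is not the specific technique but the mindset: an attacker only needs to find one perturbation that the model mishandles, while the defender has to reason about all of them.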

And that raises the question: could someone exploit such a vulnerability (or a similar one) to cause real harm to human life? Are companies paying attention to these problems, which will inevitably surface as AI gains more widespread adoption?

A quick Google search for AI and cybersecurity will turn up thousands of pages about how AI is revolutionising cybersecurity, but it's extremely hard to find one that discusses how cybersecurity is helping improve AI and make it safer. Browsing the websites of the top cybersecurity companies yields similar results: no mention of applying cybersecurity research to find vulnerabilities in AI algorithms.

This is, however, not surprising at all.

AI and Deep Learning are relatively new disciplines, and they have only recently started to take off.

Moreover, there are still few use cases where it is paramount to guarantee that AI algorithms have no life-threatening vulnerabilities. But as AI takes over more and more tasks such as driving, flying and designing drugs to treat illnesses, AI engineers will also need to learn the craft of cybersecurity and become cybersecurity experts themselves.

I want to emphasise that the responsibility for engineering safer AI algorithms cannot be delegated to an external cybersecurity firm. Only the engineers and researchers designing the algorithms have the intimate knowledge needed to deeply understand what vulnerabilities exist, why they exist, and how to fix them effectively and safely.

External cybersecurity companies may play a role in "pen testing" the algorithms, but ultimately it will be up to the engineers developing them to fix them.

Naturally, this can only happen when AI engineers master the craft of security as applied to AI algorithms. If AI is a relatively new field, security applied to AI algorithms is newer still, and hiring people with that expertise will be a massive challenge. But inevitably AI engineers will need to take security into consideration, proactively test their algorithms against possible malicious attacks, and, over time, become security experts themselves.
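
As a rough illustration of what "proactively testing" could mean in practice, here is a sketch of a simple robustness check an AI team might add to its test suite. The model is a toy logistic-regression classifier and the perturbation is an FGSM-style step (epsilon times the sign of the loss gradient with respect to the input); both are stand-ins for illustration, not a recipe for securing a real system:

```python
# Illustrative robustness check: how many correct predictions survive a
# small adversarial perturbation of the input?
import numpy as np

rng = np.random.default_rng(1)
DIM = 64
w, b = rng.normal(size=DIM), 0.0           # stand-in "trained" weights

def predict_proba(x: np.ndarray) -> float:
    """Probability of class 1 for a single input vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x: np.ndarray, y: int, eps: float) -> np.ndarray:
    """For logistic regression, d(loss)/dx = (p - y) * w, so the FGSM step
    is eps * sign((p - y) * w)."""
    p = predict_proba(x)
    return x + eps * np.sign((p - y) * w)

def robustness_check(samples, labels, eps=0.1) -> float:
    """Fraction of correctly classified samples whose prediction survives
    an FGSM-style perturbation of size eps."""
    survived, total = 0, 0
    for x, y in zip(samples, labels):
        if (predict_proba(x) > 0.5) != bool(y):
            continue                        # skip already-misclassified inputs
        total += 1
        x_adv = fgsm_perturb(x, y, eps)
        if (predict_proba(x_adv) > 0.5) == bool(y):
            survived += 1
    return survived / max(total, 1)

samples = rng.normal(size=(200, DIM))
labels = (samples @ w > 0).astype(int)      # labels this toy model gets right
print(f"robust accuracy under eps=0.1: {robustness_check(samples, labels):.2%}")
```

A check like this won't catch every attack, but running it on every release is the kind of habit that turns security from an afterthought into part of the engineering culture.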

Companies and their leaders will also need to start taking this topic seriously. No organisation wants to create unsafe products to begin with, and surely no organisation wants to be in the news for how easily its AI systems were fooled or for a fatal accident its algorithms caused.

So, if you're a company that designs AI algorithms applied to critical areas of people's lives: build a culture of safety and security inside your AI engineering teams.