
7 effortless ways to avoid an AI disaster

by Josh Sephton (@heldtogether), September 8th, 2017

Too Long; Didn't Read

According to Stanford University, you can predict someone’s sexual preference from a photograph of their face (https://thenextweb.com/artificial-intelligence/2017/09/08/this-ai-knows-whether-youre-gay-or-straight-from-a-single-photo). The study analyzed 35,000 images and built an algorithm that could predict your sexual orientation.


According to Stanford University, you can predict someone’s sexual preference from a photograph of their face. The study analyzed 35,000 images and built an algorithm that could predict your sexual orientation.

This knowledge exists now, and it can’t be withdrawn from the zeitgeist. Around the world, people are trying to figure out how to use this information for their own benefit. There are marketing professionals trying to figure out how to apply this in their bid to sell more of what they’re selling. There are insurance companies trying to figure out if they can use this as a data point when setting your premiums. There are healthcare companies trying to figure out if they can use this to tailor care specifically for individuals.

But there are also hate groups, trying to figure out how they can take away people’s rights using this (perhaps undisclosed) information. The question is not whether we can do this, it’s whether we should do this. Where do we draw the line?

It raises so many questions, but one stands out: Is this ethical?

In artificial intelligence, we stereotype data. We build a model that learns the important characteristics of our training data, then looks for similar indicators when new data runs through it. The downside is that our model is open to bias.

In the USA, courts use artificial intelligence to help determine whether a defendant is eligible for bail. Unfortunately, the model is biased: it was trained on data containing more examples of black defendants becoming repeat offenders than white defendants. If our training data already contains bias, then our model will exhibit similar bias. Worse, we can’t quiz the model about its beliefs the way we could interview a biased judge to expose their bigotry.
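A toy sketch can make this concrete. The data below is entirely made up (it is not the real bail system or its model): two groups reoffend at the same true rate, but the training set over-records reoffenders from one group, and even the simplest "model" faithfully learns the skew.

```python
import random

random.seed(0)

def make_training_data():
    """Hypothetical data: groups "a" and "b" share the same true
    reoffence rate (30%), but the records are biased."""
    data = []
    for _ in range(100):
        data.append(("a", random.random() < 0.3))
        data.append(("b", random.random() < 0.3))
    # Biased sampling: 50 extra recorded reoffenders, all from group "a".
    data += [("a", True)] * 50
    return data

def train(data):
    """A toy "model": the per-group reoffence rate seen in training."""
    rates = {}
    for group in ("a", "b"):
        labels = [label for g, label in data if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

model = train(make_training_data())
print(model)  # group "a" scores far higher, despite equal true rates
```

The model isn’t malicious; it simply reproduces whatever pattern the records contain, which is exactly why biased training data yields a biased model.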

This gives us a lot of power to affect people’s lives, and a lot of potential for harm if it’s applied improperly.

However, it isn’t doom and gloom. There are really useful, beneficial applications of AI. We should take advantage of our data if it’s going to help everyone. Self-driving cars will make our roads safer. Credit card fraud detection keeps the system fair for everyone. Customer service chatbots let companies talk to far more people than would otherwise be possible.

We need to be mindful that we apply our skills for good, not selfish reasons.

We had a similar issue with doctors: we were afraid that giving them the power to affect our lives would prove harmful. Still, we trust that they’ll help us get better. Although there are legislative methods of enforcing professional standards, doctors also swear an oath that they’ll share knowledge, act in patients’ best interests, seek help when required, and respect their patients’ privacy. Every doctor starts with this proactive mindset, and the oath informs every decision they make, guiding them toward the best ones.

It’s time AI practitioners were instilled with the same sense of propriety. I propose the following oath, and I personally pledge to uphold it from this day.

It’s written to encourage us to do the right thing, even if it’s not the most profitable thing. It encourages learning and teaching. It encourages responsible handling of data. It’s as true to the original Hippocratic Oath as possible, but adapted for the unique challenges of Artificial Intelligence.

I want everyone who builds or deploys AI to read the oath and take a moment to consider its implications. If it makes sense, you can swear the oath too and then bear it in mind as you continue to practice.

I swear to fulfil, to the best of my ability and judgment, this covenant:

  • I will practice for the benefit of all mankind, disregarding selfish interests.
  • I will respect the hard-won scientific gains of those in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.
  • I will not be ashamed to say “I do not know,” nor will I fail to call in my colleagues when the skills of another are needed.
  • I will respect the privacy of my users, for their information is not disclosed to me that the world may know.
  • I will remember that I do not just handle data, but information about human beings, which may affect the person and their family. My responsibility includes these related problems.
  • I will remember that there is an art to AI as well as science, and that I must remain wary of the context and consequence of projects, notably in their technical, economic and social aspects.
  • I will remember that I remain a member of society, with special obligations to my fellow human beings.

If I do not violate this oath, may I enjoy life and art, respected while I live and remembered with affection thereafter. May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of improving the lives of all mankind.

I’m a software developer based in Birmingham, UK, solving big data and machine learning problems. I work for a health tech startup, finding creative ways to extract value from our customer data. I don’t have all the answers but I’m learning on the job. Find me on Twitter.