
Looking Within to Battle Intrinsic Bias in AI

by Adam Rogers, December 6th, 2019

Intrinsic bias. We’re all guilty of it; it’s part of what makes us human. But we should always strive to be better and fairer, and to eliminate these biases when they arise. This is especially true in today's workplace, where the widespread adoption of artificial intelligence (AI) technology risks hardcoding these biases into the future of work.

As I’ve discussed before, AI is not inherently good or bad. It is a tool, and like all tools, it can be used in a multitude of ways. But we're quickly approaching a tipping point where crucial business decisions are based almost entirely on AI recommendations. If you're not concerned about AI integrity, you should be.

If we are not judicious in our approach, we risk further entrenching the same intrinsic biases we have been working so hard to overcome into the makeup of our technology. Fortunately, there are clear opportunities to address and overcome this bias in our organizations. Here are a few steps you can take to ensure a better future at work and beyond.

Take a Long, Hard Look in the Mirror

It's cliché but true: True change comes from within. In order to build a fair and ethical workplace, you must first admit that bias exists and commit to rectifying it. By facing a potentially difficult reality, you're better prepared to face future challenges.

Intrinsically biased AI can have devastating consequences. Consider Amazon's now-retired recruiting engine, which was trained on resumes submitted over a 10-year period. Based on the data it was fed, the algorithm learned a distinct preference for male candidates. Unfortunately, the team assumed that their historical data was free of bias, and the result was a perpetuation of inequality and a significant PR disaster.
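The dynamic behind that failure is easy to reproduce in miniature. The toy sketch below -- hypothetical data, not Amazon's actual system -- trains a naive word-frequency scorer on skewed historical hiring outcomes and shows it penalizing an otherwise identical resume:

```python
# Toy illustration (hypothetical data, not Amazon's actual system):
# a naive scoring model built from skewed historical hiring outcomes
# reproduces the skew in that history.
from collections import Counter

# Hypothetical historical outcomes: past hires skew one way.
historical_hires = [
    "software engineer chess club captain",
    "developer rugby team lead",
    "engineer hackathon winner",
]
historical_rejects = [
    "software engineer women's chess club captain",
    "developer women's coding society organizer",
]

hired_words = Counter(w for r in historical_hires for w in r.split())
rejected_words = Counter(w for r in historical_rejects for w in r.split())

def score(resume):
    # Each word votes: +1 per appearance among hires, -1 among rejects.
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two otherwise identical resumes; only one mentions "women's".
print(score("software engineer chess club captain"))
print(score("software engineer women's chess club captain"))  # scores lower
```

The model never sees gender directly; it simply learns that words correlated with the historical majority predict "hire" -- which is exactly how proxies smuggle bias into production systems.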

All companies have biases. Successful companies face them. Ask yourself honestly: Where is our bias? Why is it occurring? How can we ensure it won’t happen again?

Avoid 'Black Box' AI

Imagine you tasked your team with developing new compensation guidelines. After their pitch, you'd have some questions. What data did you use? What assumptions did you make? Which best practices did you follow? You would be justifiably suspicious if they couldn't explain their conclusions. The same explainability should be expected of AI.

Widespread misunderstandings about how IBM’s Watson reached its cancer treatment decisions contributed to its unfortunate and high-profile failures. To make matters worse, the lack of visibility into how and why Watson made its recommendations made doctors less likely to trust it -- even in cases where it could have been helpful.

In order to effectively and ethically engage with technology, executives must understand how AI works in general, how to design ethical AI and how any active algorithms are augmenting companywide decision making. This ability to fully understand AI decision making is -- or should be -- a core competency of today's C-suite.
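One concrete practice that supports this kind of understanding, sketched below with purely hypothetical feature names and weights, is preferring models whose individual decisions decompose into per-feature contributions that a human can read:

```python
# A minimal sketch of explainable scoring: for a linear model, report
# each feature's contribution to a decision, not just the final score.
# Feature names and weights here are hypothetical.

weights = {"years_experience": 0.5, "certifications": 0.3, "referral": 1.2}

def explain(candidate):
    # Contribution of each feature = weight * feature value.
    contributions = {f: weights[f] * candidate.get(f, 0) for f in weights}
    return sum(contributions.values()), contributions

total, why = explain({"years_experience": 4, "certifications": 2, "referral": 1})
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contribution:+.1f}")
print(f"total score: {total:.1f}")
```

The point is not this particular model but the expectation it sets: anyone reviewing a decision can see which inputs drove it and challenge them, just as they would challenge a colleague's pitch.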

Review, Revise, Repeat

Finally, don't be complacent with your AI. Institute systems to continually check for bias. Falling into the trap of thinking your AI (or any AI) has the only real answer is tempting, but dangerous.

AI that can fully understand context is still a long way off. Until then, there is always the risk of receiving answers that, while not technically wrong, may be inappropriate or fail to account for outside variables that only the human mind is capable of recognizing. Placing complete trust in AI as an indisputable source of truth could hold us back from eliminating biases and creating a fairer workplace and society.

A primary aim of AI is to save time, so checking every answer is clearly counterproductive. But iteration is necessary for improvement. Ethical reviews for bias should be performed with at least as much regularity as checks for performance, efficiency and accuracy. You won’t always be able to predict where and when bias might occur, and how we define fairness is in a constant state of flux.
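As a starting point, a recurring audit can be as simple as comparing a model's selection rates across groups. The sketch below uses hypothetical audit data and an example tolerance; real thresholds, and the definition of fairness itself, are policy choices:

```python
# A minimal sketch of a recurring bias audit: compare the model's
# positive-outcome rate across groups (the demographic parity gap).
# Group labels, decisions, and the tolerance are hypothetical.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

audit = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2 of 8 selected
}

gap = parity_gap(audit)
print(f"selection-rate gap: {gap:.3f}")
if gap > 0.2:  # example tolerance only; the right value is a policy decision
    print("flag for human review")
```

Running a check like this on the same cadence as performance monitoring turns "review, revise, repeat" from a slogan into a process.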

AI could be humanity’s greatest tool in eliminating systemic bias, but we’ll need to be honest with ourselves and each other to confront our pre-existing biases. If we commit to introspection now, we can ensure these technologies become forces of good for business and for humanity.

Photo by ali syaaban on Unsplash. Originally published on Forbes.