Debugging Bias: Questioning The Ethics Of Digital

Written by hyfn | Published 2017/02/11
Tech Story Tags: artificial-intelligence | diversity-in-tech | filter-bubble | algorithms | ethics


There was once an idyllic time for us nerds. It was a good gig we had going. We were that lovable group of misfits, the brilliant but misunderstood, and the world cut a lot of slack our way. The scope of our mistakes used to be much smaller; a botched release or a malicious program could only do so much damage in a world where smartphones were novel and high-speed internet was a luxury. But the genie has slipped the bottle. Now tech nerds are catching heat from all directions. Like it or not, we have a lot to answer for these days.

Interest groups, journalists, industry insiders, and others started paying attention to our alarming lack of diversity. They noticed algorithms pricing things based on what neighborhood you live in. When police facial recognition databases are full of errors, filter bubbles reinforce your cognitive biases, chatbots learn racial hostility they can’t unlearn, and social networks become unfilterable propaganda machines, it’s hard to defend the notion that the tech industry is benign and well-intentioned. The threat potential that tech wields is no longer abstract.

When you dream about using technology to create a better, fairer, more just world, it’s uncomfortable to face up to this reality — that our code is unintentionally racist.

Clearing Misconceptions: Intentional Vs. Unintentional Racism

Two common threads run through the current news coverage. One is that innocuous, even high-minded algorithms, trained on real-world data, are producing provably racist outcomes. The other is that the nerds behind it all should not be let off the hook: “Artificial intelligence will reflect the values of its creators.”

Before we can even begin to fix the problem of reckless use of technology, it is important to distinguish between the unintended biases that can find their way into tech products or services and the deliberate use of technological platforms for racism.

People see algorithms as racist agents and assume they were made to be so. As citizens of the free world, we have a rational fear of racially unjust systems of power, and it’s already colliding with our terrifying fear of autonomous, inhuman overlords.

However, programming anything that works is hard; modeling a concept as complex and culturally endemic as racism into a predictable yet dynamic algorithm is a mind-bending task. It would indeed be ethically monstrous to pursue it.

These algorithms don’t model behavior that way. Instead, a typical machine learning system is trained on historical data to produce a predictive model, which then makes predictions about new data. Algorithms like this are probably suggesting search results and traffic routes for you right now.
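To make that workflow concrete, here is a minimal sketch of the train-then-predict loop. It uses scikit-learn purely for illustration; the features, labels, and numbers are hypothetical, not drawn from any real system.

```python
# A minimal sketch of the standard supervised-learning loop: fit a model
# on historical examples, then predict on new, unseen ones.
# The library choice (scikit-learn) and all data here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical data: each row is a past case, each label its known outcome.
X_train = [[0.2, 1.0], [0.9, 0.3], [0.4, 0.8], [0.7, 0.1]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # "training" = pattern-fitting over past data

# A new, unseen case: the model extrapolates from the patterns it found.
print(model.predict([[0.5, 0.5]]))
```

Nothing in that loop inspects why the historical outcomes were what they were; the model simply fits whatever patterns the past data contains.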

How do biases find their way into tech? There are some hard questions we need to ask:

1. Why do we all look the same?

It really doesn’t help that the people behind the scenes are mostly well-paid, mostly white, and mostly male. So why does it matter that white guys are holding the cards? Plenty of ink has been spilled on this, but here’s my take: we don’t know enough people who aren’t like us. We’re unlikely, for example, to have lots of black friends whose pictures we can use to train our robots. We may not perceive as much injustice in the world. We don’t need to think about it.

It’s not the algorithm that’s racist; it’s the data we collect. From a nerd’s perspective, there’s a hard technical reality here: “machine learning” isn’t really “learning”, it’s pattern discovery over data. It’s useful for making predictions about the future if you expect the future to look a lot like the past, but garbage in, garbage out: it does not imbue the system with the intuition, judgment, or prudence we attribute to “intelligence.” We have blind spots, and our work reflects them.
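Here is a toy sketch of “garbage in, garbage out” in action, assuming a hypothetical loan-approval dataset; the feature names, numbers, and library choice are all made up for illustration. The historical labels encode a fabricated bias, and the trained model faithfully reproduces it.

```python
# Toy demonstration of "garbage in, garbage out": the historical labels
# below encode a fabricated bias (applicants from neighborhood 1 were
# always denied, regardless of income), and the model rediscovers
# exactly that pattern. Every name and number here is hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Features: [income_score, neighborhood]; label: 1 = approved, 0 = denied.
X_hist = [[0.9, 0], [0.8, 0], [0.4, 0],   # neighborhood 0: income decides
          [0.9, 1], [0.8, 1], [0.4, 1]]   # neighborhood 1: always denied
y_hist = [1, 1, 0, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_hist, y_hist)

# Two applicants with identical incomes but different neighborhoods:
print(model.predict([[0.9, 0], [0.9, 1]]))  # -> [1 0]: the bias, learned
```

Nothing in the code mentions race or redlining; the discrimination lives entirely in the training data, which is exactly why it is so easy to ship without noticing.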

2. Why don’t we care what the boss is up to?

The business of digital creates a feeling of distance from accountability: many programmers feel a separation between the engineering world and whatever their bosses are up to. Because software is hard and the world is complex, it’s necessary to focus narrowly on getting your code to work. You can build a comfortable wall that isolates your work from the outside and solve the most challenging problems without stepping back to look at the bigger picture.

3. Where’s our Code Of Ethics?

Is the problem of separation between creator and enabler unique to programming? No. Many age-old professions have, for various reasons, learned that a code of ethics is necessary to bridge the gap and uphold the profession’s integrity. Doctors and lawyers are bound to pretty specific codes of ethics, and breaking them could make you unemployable. Engineering professionals learned, through bridge collapses, mining disasters, and other catastrophes, that their work could endanger the public welfare and should be bound to some kind of ethical standards.

The fact that there is an organization that publishes ethical guidelines for software development is probably news to a lot of programmers. It was to me. The Association for Computing Machinery is not a name that rings loudly through dev circles, but it publishes a code of ethics for software engineering. The code is mostly concerned with the conduct of the individual developer, particularly around standards and testing, but it also recommends that developers approve software only if it “does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.” For example, does it serve the public good to build a system that helps make sentencing recommendations? What if it recommends harsher penalties for black defendants? The digital world has fallen short of that ethical standard long enough to draw attention.

It’s a lot to unpack, but imperative to address for the continued success of the developer community. In future installments of this series, we’ll explore why it’s hard to act ethically in tech, find out who’s already working to make software more equitable, and dive into the specific ethical encounters you’ll have as a working developer today, including what we’re doing at HYFN to chart an ethical path through the tech landscape.

Scott Burton, Chief Technical Officer at HYFN

