
How An Unethical Tech Industry Is Undoing Ethical AI

by Leif-Nissen Lundbæk, July 27th, 2022

Too Long; Didn't Read

In a perfect world, all tech development would be driven first and foremost by ethical considerations. Until we overcome the political and technological conditions of our moment, ethical tech runs the risk of foundering as window dressing on the haunted mansion of tech’s most predatory, profit-minded, and privacy-obliterating measures.


In a perfect world, all tech development would be driven first and foremost by ethical considerations. In the world we live in, ethics-driven tech is its own field, particularly as it relates to privacy, cybersecurity, algorithms, and data mining. These are all good things, obviously. But ethics-led tech and, more specifically, ethical artificial intelligence are fundamentally hamstrung by the larger political and technological conditions of our moment. Until we overcome those, ethical tech runs the risk of foundering as the feel-good window dressing on the decrepit haunted mansion of tech’s most predatory, profit-minded, and privacy-obliterating measures.


Ethical AI is, at its core, not evil. It doesn’t misappropriate data or manipulate users. It’s not, as a baseline, inadvertently homophobic or sexist or racist, such as when an Optum-built healthcare algorithm systematically recommended that doctors spend less time treating Black patients than white patients because it used past healthcare spending as a flawed proxy for medical need. Ethical AI also now has several sets of general guidelines, thanks to efforts like the Principled Artificial Intelligence Project.
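To see how a system ends up discriminatory without a single explicitly discriminatory rule, consider a minimal sketch of a biased proxy (the numbers and field names here are hypothetical; this is not Optum’s actual code). When past spending stands in for medical need, historical underspending on one group reads to the model as lower need:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    chronic_conditions: int      # actual medical need
    past_healthcare_cost: float  # the proxy the model scores on

def predicted_need(patient: Patient) -> float:
    # The flawed model infers "need" from past spending alone.
    return patient.past_healthcare_cost / 1000.0

# Two equally sick patients. Historically, less was spent on one,
# so the proxy understates that patient's need.
patient_a = Patient(chronic_conditions=3, past_healthcare_cost=9000.0)
patient_b = Patient(chronic_conditions=3, past_healthcare_cost=5400.0)

print(predicted_need(patient_a))  # 9.0
print(predicted_need(patient_b))  # 5.4 -- same illness, lower priority score
```

Nothing in the sketch mentions race, yet the output reproduces a racial disparity, because the proxy encodes historical underspending rather than actual illness.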


Most organizations that commit to ethical AI do so by pledging to adhere to a set of basic principles articulated by academics and researchers over the past 15 years. As a rule, these principles tend to resemble those of bioethics. In an attempt to unify the various sets of principles, Luciano Floridi and Josh Cowls published “A Unified Framework of Five Principles for AI in Society” in 2019, distilling them into five main areas: beneficence, non-maleficence, autonomy, justice, and explicability.

These are lofty and necessary goals. They’re also so admirable as to make for great PR. Google, Meta, Intel, Microsoft, and other tech giants have publicly supported projects like The Partnership on Artificial Intelligence to Benefit People and Society. Does that make them ethical companies? Hardly. And beyond the individual companies, we’re living in societies that actively undermine ethical AI measures. Until that’s addressed, ethical AI will stall as a project.

Ethical AI Undone

There’s not a single ethical AI principle that shouldn’t apply to tech everywhere. There’s also not a single one that actually does.


Beneficence (a.k.a. do good) and non-maleficence (a.k.a. do no evil) are arguably the underpinnings on which the ethical AI concept rests. There’s no overstating the simple morality at play here. That’s why it’s almost embarrassing that this is a baseline the rest of tech can’t seem to reach. There are obvious examples, like the development of highly sophisticated weapons systems that routinely kill civilians. Another is the willful neglect of Instagram, Twitter, and Facebook when it comes to curbing disinformation and hate speech. Democracies have crumbled over less, and more than a few are teetering now. Amazon will hand the footage collected by your Ring doorbell camera to the cops without your consent, enabling state surveillance of you and your neighbors. That’s genuine, true harm.


If the internet sucks now – which is an increasingly common opinion – it begins with this. Algorithms have done harm, however unintended. They’ve made us meaner and more polarized. They certainly haven’t promoted well-being or preserved our dignity, which would form the baseline of beneficence. They’ve raked in the data and sold it for profit, which constitutes actual harm through privacy violations.


These privacy violations are connected to everything. Autonomy is the power to decide, and no one would actively choose to let Facebook and Google follow them around the web. The major tech conglomerates figured, “What? It’s not like they won’t agree to the terms and conditions to use the service.” And they were right. But there’s something wildly cynical about making near-unfettered data collection the price of using a service. Ethical tech does not mean giving people only two options: opt out of a service altogether, or have their data mined to high heaven.


Justice sounds noble, but it’s rather straightforward in ethical AI terms. It’s about avoiding bias and unfairness of all kinds. It’s a bulwark against discrimination. The Asilomar Principles of ethical AI specifically mention “shared benefit” and “shared prosperity,” which feel positively utopian compared to the current climate.


And last but not least, the principle of explicability holds the rest together. It combines transparency (how the system works) and accountability (who is ultimately responsible for what it does). Realistically, who answered for Cambridge Analytica? Who has answered for the toll that digital devices have taken on our minds and bodies?

The Surveillance State Conundrum

The modern surveillance state apparatus was not developed by the state. It was developed by tech companies. The problem with ethical AI is its inability to overcome the conditions of the surveillance state. Ethical AI development needs to be a society-wide project in order to matter. I have an unshakable belief that ethical AI is essential and necessary, but it is futile so long as law enforcement collects DNA and facial recognition data from ancestry tests and smart doorbells. So long as serving targeted ads and raking in profits reigns supreme, industry-wide commitment to beneficence and non-maleficence will be a pipe dream.


It’s not that ethical AI risks being undone by the current climate. It’s that truly ethical AI cannot exist under these conditions. Ethical AI risks becoming a moot point as corporate and state surveillance expands through algorithm-driven data mining and related technologies. This isn’t to say we should throw up our hands and stop designing AI programs around these principles. But until these principles are applied to all of tech, and until we undo the damage already done, we’re going to be fighting an uphill battle.