AI and ‘The Common Good’

Written by BenGilburt | Published 2019/03/08


Understanding ‘The Common Good’ and how it relates to AI

‘The common good’ is being increasingly referenced in relation to AI at all levels. It’s in the headline of the House of Lords Artificial Intelligence Committee report, which states that AI should be developed for the common good of humanity. It’s referenced in the EU HLEG ‘Draft Ethics Guidelines’… 8 times… It was even highlighted in a conversation between Microsoft President Brad Smith and Pope Francis a couple of weeks ago.

But what does it mean?

Superficially, we hear ‘the common good’ and assume it means ‘things which are good for everyone’. And you’d be right (TL;DR, just close this tab now and save yourself 10 minutes!). But it’s actually quite a lot more nuanced than that. I want to tell you a little bit more about the common good, through the lens of Jean-Jacques Rousseau, John Rawls, and a little bit of Harper Lee.

By understanding more about the common good you’ll be able to see how it can make our society more inclusive, as well as avoid some of the risks that the common good has presented historically.

We will begin with Rousseau to gain a grounding in the common good. Rousseau introduces the common good in his text ‘The Social Contract’, where he describes three types of ‘will’ in a society.

The Particular Will

First, we have the particular will. This is a desire held by a single person or group of people (but not everyone!) which unfairly benefits the group holding it. We might think of something like a community of business leaders whose particular will is a cut to high-income taxes. This might favour that group, but it’s not beneficial to others in the community. The particular will might even include things which seem destructive to the group concerned. It could be self-destructive, but it could be argued that the interested group still has something to gain from it (e.g. self-destruction offers a feeling of release, however transient).

Surely there’s a better way…

The Will of All

The Will of All differs from the particular will in one way only: it’s shared by all members of a community. Really, it’s just a particular will shared by everyone, or a collection of particular wills, all of which unfairly benefit certain groups of people to the detriment of others. A will of all is most likely reached through the influence of media and advertising, or by charismatic people swaying the opinions of many others.

That’s not much better…

The General Will

The General Will is quite different, and it’s where we find the common good. The general will works towards the actions which are simultaneously most beneficial to the individual and the community as a whole. Though a particular will may superficially appear better for an individual, adding an element of injustice that lets certain people take a little bit more at the sacrifice of others, it never truly would be. Following the general will and the common good should leave individuals better off than they ever could be by following a particular will. A particular will may offer a temporary advantage, but that advantage will be outweighed by the increased risk of the common good failing, and the decline of the state as a whole which would follow.

I would like to highlight here the first risk for the common good. Imagine, if you will, a hypothetical world in which there exists a social media platform where citizens have been cataloguing their lives and contacting their friends for the past 15 years. Imagine that social media platform even owns a popular OTT messaging service. We might end up in some crazy situation where that social media platform sells our data to a Cambridge-based analytics firm, which uses that data to shape global election campaigns.

Tech companies today have the power to influence our opinions and values. This is not a manifestation of the general will; it’s the use of technology and data to create a will of all. We should not expect this to work towards the common good or create a more inclusive society.

The common good should also be a positive force for inclusivity and diversity in our communities. I don’t believe the common good necessitates that money be shared equally amongst all members of a community. What we really want is for opportunity, access and utility to be shared equally amongst the community, and that might well mean that we need to spend extra in some areas.

John Rawls has something to say about this, and it relates to the Veil of Ignorance. It goes something like this: imagine you have been given the task of dividing up certain basic goods which everyone in a community needs access to, and you’re a member of that community. You’re a rational and self-interested creature, but also ignorant. You’re ignorant of your own talents, and therefore you don’t know where you’ll fall within that community. You could be rich, average or poor. Rawls argues that, rationally, you would want to share these goods evenly amongst all people, because you don’t know which category you would fall into, and it’s better for you as a self-interested actor to ensure the worst possible outcome is at least moderately comfortable than to make an already lavish situation just a tiny bit more lavish.

Rawls continues, asking what we should do with any remaining goods that all should have access to but which cannot be divided equally. Here, the rational, self-interested actor would be encouraged to give anything which cannot be shared evenly to the poorest in the community. Why? Think of this in the context of utility, where the good is worth £10. If the poorest in the community have only £10 of utility already, the good doubles their effective utility. That same amount makes only a 10% difference to the middle bracket with £100, and almost no difference to the wealthy with £1,000.
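The arithmetic behind that argument can be sketched in a few lines of Python. The £10 good and the £10 / £100 / £1,000 baselines come from the example above; the function name and structure are just my own illustration:

```python
def relative_gain(baseline_utility: float, good_value: float) -> float:
    """Proportional increase an indivisible good gives a person,
    relative to the utility they already hold."""
    return good_value / baseline_utility


# Baselines from the example above: poorest £10, middle £100, wealthy £1,000
baselines = {"poorest": 10, "middle": 100, "wealthy": 1_000}
good_value = 10  # the indivisible good worth £10

for group, baseline in baselines.items():
    print(f"{group}: +{relative_gain(baseline, good_value):.0%}")
```

Run it and the poorest gain 100% (their utility doubles), the middle bracket gains 10%, and the wealthy gain only 1% — exactly why the self-interested actor behind the veil directs the indivisible good to the poorest.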

Bringing this back to AI

To me, this means we are not aiming for an equal distribution of spending on every type of person. It means that we can and should spend more and put more effort into bringing certain areas of society up with the rest — so that we all have equal access and utility to be gained from AI. The real good that we are looking for is diversity, creativity, innovation and all the good things that come from an inclusive society.

** This contrasts strict resource equality with something like the ‘Capability Approach’ (Sen / Nussbaum) or ‘Equal Opportunity’ (Arneson). Too much to cover in this blog, but Google is your friend! **

We want to make it so that everyone’s opinion can be heard just as well as anyone else’s, because the common good isn’t about those with a bigger and better soapbox telling others what to believe. It’s about hearing everyone equally and establishing convergent values.

And no, convergent values do not mean that everyone will agree on everything that happens. That’s a fantasy world. It’s much more pragmatic than that. My favourite way of describing it: imagine 5 friends want to go out on a Saturday night; 4 want to go bowling and 1 wants to go out for a pizza. The friend who wants pizza is better off bowling with friends than going for pizza alone. Even though the activity is their second choice, the convergent value is time spent with friends. I fully expect that people will disagree with specific decisions made through the common good, but if they desire to be a part of that society, they should agree with the convergent values it establishes.

So, we have the common good. But what risks should we be aware of, and how can it go wrong?

I highlighted earlier how we can mistake the will of all for the general will, and how this risk is heightened when our human influence is amplified by AI and other technologies. There is, however, a second problem which we’ve seen play out throughout history.

We seem to have a nasty habit of finding ways around the general will. Rousseau is a prime example of this. A second philosophical concept he’s known for is the ‘natural man’: a theoretical concept of a man (not mankind, notably), a hunter-gatherer type who is quite isolated, incredibly strong and doesn’t get ill. Women only exist in this picture as something natural man sees in passing, procreates with and leaves. In fact, Rousseau places a lot of the blame for the corruption of this (already ludicrous concept of) natural man on women. His concept of the general will doesn’t intend to encompass the will of women, because Rousseau simply relegates women to a sub-human category where their will doesn’t matter for the good of humanity.

The Trial of Tom Robinson (Credit: Universal Pictures)

I recently read ‘To Kill a Mockingbird’, a great book which shows this same problem on multiple levels. We have two stories running in parallel, both through the eyes of children, primarily Scout Finch. One is their curiosity about, and frankly their harassment of, a mysterious neighbour, Boo/Arthur Radley. The other is the trial of Tom Robinson, a black man in their local community who has been wrongly accused of rape. Their father is defending Tom Robinson, so they get a unique window into the trial, and see how Tom isn’t treated as a man who is wrongly accused of rape, but as a black man who is wrongly accused of rape. The children are distraught by this, because Tom is just like any other person. This is highlighted towards the end, when Scout can’t understand how her teacher, Miss Gates, can hate Hitler so much yet ‘be ugly about folks at home’, having forgotten about the treatment of Tom Robinson within a couple of days. The children, seemingly, can see the general will and the common good which the adults have lost.

Remember, though, I mentioned Boo Radley, the mysterious neighbour who never leaves the house. The children think up all kinds of crazy stories about him (that he’s evil) and torment him throughout the book, trying to trick him into leaving the house, trying to send him notes on a fishing rod through the window. At the end of the book he saves the children from a killer, and Scout realises that, after all, he’s a good man, just afraid of the outside.

Even the children failed to see the common good to start with. What they needed was to meet Arthur/Boo face to face to see that he’s not so scary.

‘An’ they chased him ’n’ never could catch him ’cause they didn’t know what he looked like, an’ Atticus, when they finally saw him, why he hadn’t done any of those things … Atticus, he was real nice…’

His hands were under my chin, pulling up the cover, tucking it around me.

‘Most people are, Scout, when you finally see them.’

Harper Lee, To Kill A Mockingbird

Things to remember

The common good is a little bit more nuanced than it might first seem. It does allow us to allocate resources unevenly across society, but only in instances where doing so helps to bring people up to the same level of access as others: it should promote accessibility and inclusion.

Be mindful of the influence of big tech companies and the media touting their position as the common good when you’re really looking at a particular will or the will of all, something which would increase inequality in our society.

Remember where we have made mistakes in the past, excluding groups of people from our consideration of the common good. We might be excluding different people than in the past, or we might be excluding the same people in different ways. In To Kill A Mockingbird it took Scout years to meet Boo Radley face to face, to see him as a person and not as the scary fiction the children imagined. We should do the same: face our fears, talk to the people we never mix with, and see what the world looks like from their window before assuming we understand the common good.


Written by BenGilburt | I write marginally better than GPT-2 (small model only)
Published by HackerNoon on 2019/03/08