Is A.I. a revolution or a war? A god or a pet? A hammer or a nail? Nowadays, A.I dictates what information is presented to us on social media, which ads we see, and what prices we're offered, both on and offline. An algorithm can technically write and analyse books, beat humans at just about every game conceivable, make movies, compose classical songs and help magicians perform better tricks. Beyond the arts, it also has the potential to encourage better decision-making, make medical diagnoses, and even solve some of humanity's most pressing challenges. It's intertwining with criminal justice, retail, education, recruiting, healthcare, banking, farming, transportation, warfare, insurance, media… the list goes on. Do we really need more metaphors to describe it?

Yet we're so often busy discussing the ins and outs of whether A.I CAN do something that we seldom ask whether we SHOULD design it at all. Companies and governments alike have come to realise that statistics on steroids are capable of great harm, and are studying various ways to deal with the potential fallout without impacting their bottom line or strategic geopolitical advantage. They have come up with dozens of "principles", each more unenforceable than the next, failing to even agree on a basic framework. Discussing war, automation, mass surveillance and authoritarianism is paramount; but these discussions cannot take place before key ethical principles and red lines are agreed to. This is where ethics comes in.

As such, below is a "quick" guide to the discussions surrounding A.I and ethics. It aims to help democratise the conversation: we do not necessarily need smarter people at the table (and anything I write will not be news to an expert), but we need a bigger table. Or more tables. Or more seats. Or some sort of video-conference solution. I DO hate metaphors.

Ethics Can Mean Many Different Things

Before we dive into the contemporary discussion about A.I ethics, we first need to understand what ethics is. Ethics has a pretty straightforward dictionary definition: "moral principles that govern a person's behaviour or the conducting of an activity".

That's about as far as anyone can get before contrarians such as myself come to ruin the fun for everyone. You see, even if we separate normative ethics (the study of ethical action) from its lamer cousins meta-ethics and applied ethics, there's still no one definition of what is good/bad and/or wrong/right. Indeed, what is good may be wrong, and what is bad may be right.

Here are the schools of thought to know about in order to best understand why current propositions on A.I ethics have little to do with moral principles:

Consequentialism; TL;DR = "The greatest happiness of the greatest number is the foundation of morals and legislation", aka "the ends justify the means". Close cousin: utilitarianism.

Deontology; TL;DR = It is our duty to always do what is right, even if it produces negative consequences. "What thou avoidest suffering thyself seek not to impose on others" (Epictetus, aka the guy with the most epic name in philosophy; also a Stoic). Close cousin: Kantianism.

Hedonism; TL;DR = Maximising self-gratification is the best thing we can do as people.

Moral intuitionism; TL;DR = It is possible to know what is ethical without prior knowledge of other concepts such as good or evil.

Pragmatism; TL;DR = Morals evolve, and rules should take this into account.

State consequentialism; TL;DR = Whatever is good for the state is ethical.

Virtue ethics; TL;DR = A virtue is a character trait that stems from the prioritisation of good versus evil through knowledge. It is separated from an action or a feeling. Close cousin: stoicism.
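None of this maps neatly onto code, which is rather the point. As a toy illustration (every name and number below is invented), here is how the very same automated decision can pass a consequentialist test and fail a deontological one:

```python
# Illustrative sketch only: the same automated decision can pass one
# ethical school's test and fail another's. Names/numbers are invented.

def consequentialist_ok(outcomes):
    """Approve if total utility across everyone affected is positive."""
    return sum(outcomes.values()) > 0

def deontological_ok(action, forbidden=("deceive_user", "discriminate")):
    """Approve only if the action breaks no rule, whatever the outcome."""
    return action not in forbidden

# A hypothetical ad-targeting action: it slightly deceives users,
# but nets out positive in aggregate "utility".
action = "deceive_user"
outcomes = {"company": +10, "users": -3}

print(consequentialist_ok(outcomes))  # True  -> the ends justify the means
print(deontological_ok(action))       # False -> duty forbids it regardless
```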
Lesson 1: if a company or government tells you about its ethical principles, it is your duty to dig and ask which ethical branch they're basing these principles on. Much information can be found in such definitions.

It's important to ask this because, as we'll see below, institutions like to use the word ethics without actually ever going near anything resembling a moral principle (please refer to the title of this article for a regular sanity check). The good news, however, is that there is literally no correlation between knowing a lot about ethics and behaving ethically.

"Ethics Theater" Plagues Companies

Companies exist to reward shareholders. At least, that's the business philosophy that has been espoused for the past 50 years. As such, companies have no incentive to do either "right" or "good" unless their profits are at risk. All that matters to them, technically, is that customers SEE them as doing good/right. Ethics theater is the idea that companies will do all they can to APPEAR to be doing their best to behave ethically, without actually doing so, in order to prevent consumer backlash. A perfect way to do this is by announcing grand, non-binding principles and rules in no way linked to actual ethics, then pointing to them should any challenge arise.

Below are such principles, as defined by a few large A.I companies. This is in no way exhaustive (yet is exhausting), but it provides an insight into corporation-sponsored ethics-washing. These rules generally fall into 4 categories.

Accountability / Responsibility

"Designate a lead AI ethics official" (IBM); "AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes" (IBM); "Be accountable to people" (Google); "AI systems should have algorithmic accountability" (Microsoft).

Why it's B-S: firstly, and like much of the points below, none of this is about ethics per se, even as some of the papers actually have the word itself in their title. Secondly, nowhere is it written that executives should be accountable to the law of the land, giving them free rein to do whatever the hell they want. Indeed, few laws exist to rein in A.I, but this is literally why we have ethics; nowhere is it stated by which standards the companies will be held accountable or responsible. Deontology? Consequentialism? It's anyone's guess at this point.

Transparency

"Don't hide your AI" (IBM); "Explain your AI" (IBM); "AI should be designed for humans to easily perceive, detect, and understand its decision process" (IBM); "AI systems should be understandable" (Microsoft).

Why it's B-S: I won't go into much detail here because this is more technical than theoretical (here's a quick guide), but A.I is a black box by its very nature. In order to have full transparency, companies would have to make parts of their code available, something that has been discussed but is (obviously) fiercely opposed. The other solution comes from the GDPR's "right to explanation", and concentrates on input rather than output. Said right mandates that users be able to demand the data behind the algorithmic decisions made about them. This is a great idea, but it is not implemented anywhere outside of Europe.
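For a feel of what input-side explanation can look like, here is a minimal sketch, assuming scikit-learn is available and using invented toy data. Permutation importance, one common post-hoc technique, estimates which inputs actually drive a trained model's decisions:

```python
# A minimal sketch of input-side "explanation": permutation importance
# measures how much each input feature drives a trained model's output.
# Toy data and model; assumes scikit-learn is installed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three input features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 dominates by design

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")  # feature_0 should come out on top
```

This is the kind of artefact a user could plausibly demand under a right to explanation: not the code, but evidence of which of their data points mattered.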
Fairness / Bias

"Test your AI for bias" (IBM); "AI must be designed to minimize bias and promote inclusive representation" (IBM); "Avoid creating or reinforcing unfair bias" (Google); "AI systems should treat all people fairly" (Microsoft).

Why it's B-S: a system created to find patterns in data might find the wrong patterns. That is the simplest definition of A.I bias. Such a buzzword helps companies shy away from hard topics, such as sexism, racism or ageism. God forbid they have to ask themselves hard questions, or be held accountable for the data-sets they use. We have every right (and duty) to demand which biases exactly are being addressed, and how, as the sketch below illustrates.
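Here is one way such a demand could be made concrete, as a hedged sketch with hypothetical hiring decisions: compare the model's selection rates across groups. The 0.8 threshold echoes the American "four-fifths" rule of thumb for disparate impact, not anything these companies have signed up to:

```python
# An output-side bias audit (a sketch, not a standard): compare a model's
# positive-decision rate across groups. Decisions and groups are invented.

def selection_rates(decisions, groups):
    rates = {}
    for g in set(groups):
        subset = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(subset) / len(subset)
    return rates

# Hypothetical hiring decisions (1 = shortlisted) and applicant groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:   # the "four-fifths" rule of thumb
    print("Warning: the model may be treating one group unfairly.")
```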
Data and Privacy

"AI must be designed to protect user data and preserve the user's power over access and uses" (IBM); "Incorporate privacy design principles" (Google); "AI systems should be secure and respect privacy" (Microsoft).

Why it's B-S: if they really cared, they would have implemented the European standard (all hail the GDPR). They have not. Case closed.

Ethics is only truly mentioned twice in the many reports I've read:

"AI should be designed to align with the norms and values of your user group in mind" (IBM)

"We will not design or deploy AI in the following application areas: technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints." (Google)

This tells us that IBM believes in pragmatism (fair enough), while Google is a consequentialist company. This is odd, because "don't be evil", the company's longtime slogan, is technically deontology. Such a dichotomy highlights a glaring carelessness: one of the world's largest companies is defining A.I principles that may have a far reach in society while simultaneously going against its internal culture. This sounds like over-analysis until you realise that there have been many internal employee revolts within Google over the past few months for this very reason.

You may have noted that only three companies are named above (Google, IBM, Microsoft). That's because the other major A.I companies have yet to produce anything worthy of being picked apart, choosing instead to invest in think-tanks that will ultimately influence governments. This point highlights a major flaw common to all of these principles: none offer that companies subject themselves to enforceable rules.

Why, then, do companies bother with ethics theater? The first reason, as explained above, is to influence governments and steer the conversation in the "right" direction (see below the similarities in standards between company and government priorities). Secondly, it's good to be seen as ethical by customers and employees, so as to avoid any boycotts. Thirdly, and maybe most importantly, there is big money to be made in setting a standard: Patents x universal use = $$$.

Lesson 2: companies know very little about ethics, and have no incentive to take a stand on what is good or right. Corporate ethics is an oxymoron.

As such, governments need to step up, as corporations are unlikely to forego profit for the sake of societal good.

Governments are Doing their Best

There are many government-published white papers out there, but they are either vague as all hell or shamefully incomplete. Furthermore, many see A.I through the lens of economic and geopolitical competition. One notable exception is the clear emphasis on ethics and responsibility in the EU's A.I strategy and vision, especially relative to the US and China (both morally discredited to the bone).

In order to get an overall look at what countries believe A.I ethics should be, I've put their principles into 7 categories, most of which closely resemble those highlighted by the above analysis of corporations. Note that this is merely a (relevant) oversimplification of thousands of pages written by people much smarter and more informed than myself. I highly recommend reading the linked documents, as they provide in-depth information about the listed principles.

Accountability / Responsibility

"Principle of Accountability by Design" (UK); "Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems (…)" (Australia); "All AI systems must be auditable" (Norway); "DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities" (US DoD); "the principle of liability" (China); "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes (…)" (EU); "Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles" (OECD); "those who design and deploy the use of AI must proceed with responsibility and transparency" (the Vatican).

Accountable TO WHAT?! TO WHOM?! How is this question so systematically avoided?

Transparency

"Process and outcome transparency principles" (UK); "There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system (…)" (Australia); "AI-based systems must be transparent" (Norway); "The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology (…)" (US DoD); "the data, system and AI business models should be transparent (…)" (EU); "There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them" (OECD); "in principle, AI systems must be explainable" (the Vatican).

How about we start by forcing companies to reveal whether or not they are REALLY using A.I?

Fairness / Bias

"Principle of discriminatory non-harm" (UK); "AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups" (Australia); "AI systems must facilitate inclusion, diversity and equal treatment" (Norway); "The Department will take deliberate steps to minimize unintended bias in AI capabilities" (US DoD); "Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination (…)" (EU); "do not create or act according to bias, thus safeguarding fairness and human dignity" (the Vatican).

As a reminder, bias can be avoided by ensuring that the data input is representative of reality, and that it does not reflect reality's existing prejudices.
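A sketch of what checking that could look like in practice, with invented census and dataset figures: compare the composition of the training data against a reference population before any model is trained.

```python
# An input-side sketch: before training, compare the demographic make-up
# of the dataset to a reference population. All numbers are invented.
from collections import Counter

def composition(samples):
    counts = Counter(samples)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

reference = {"women": 0.50, "men": 0.50}   # e.g. census figures
training = ["men"] * 80 + ["women"] * 20   # a skewed hypothetical dataset

observed = composition(training)
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    print(f"{group}: dataset {share:.0%}, reference {expected:.0%}, "
          f"gap {share - expected:+.0%}")
```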
Data and Privacy

"AI systems should respect and uphold privacy rights and data protection, and ensure the security of data" (Australia); "AI must take privacy and data protection into account" (Norway); "besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data" (EU); "AI systems must work securely and respect the privacy of users" (the Vatican).

Oh, China and the US aren't on that list? Cool, cool, cool… just a coincidence, I'm sure. I'm sure it's also a coincidence that 3 completely different organisations came up with principles that are VERY similarly phrased.

Safety / Security / Reliability

"Accuracy, reliability, security, and robustness principles" (UK); "AI systems should reliably operate in accordance with their intended purpose" (Australia); "AI-based systems must be safe and technically robust" (Norway); "The Department's AI capabilities will have explicit, well-defined uses, and the safety, security (…) The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences (…)" (US DoD); "AI systems need to be resilient and secure (…)" (EU); "AI systems must function in a robust, secure and safe way (…) and potential risks should be continually assessed and managed" (OECD); "AI systems must be able to work reliably" (the Vatican).

Easier said than done, when a simple sticker can make an algorithm hallucinate.
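That sticker trick belongs to a family called adversarial examples. Here is a toy version of the underlying idea, the fast gradient sign method, using an invented three-weight linear classifier rather than a real vision system:

```python
# A toy version of the "sticker" attack: the fast gradient sign method
# nudges an input just enough to flip a model's decision. The linear
# model and all numbers are invented for illustration.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # weights of a tiny linear classifier
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([2.0, 0.5, -1.0])   # a legitimate input: class 1
print("before:", predict(x))     # -> 1

# For a linear model, the gradient of the score w.r.t. the input is
# simply w; step a bounded amount against it.
epsilon = 0.7
x_adv = x - epsilon * np.sign(w)
print("after: ", predict(x_adv)) # -> 0: similar input, different answer
```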
Stakeholder inclusion / Societal good

"Stakeholder Impact Assessment Principle" (UK); "AI systems should benefit individuals, society and the environment" (Australia); "AI must benefit society and the environment" (Norway); "the principle of human interests" (China); "AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly" (EU); "AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being" (OECD); "the needs of all human beings must be taken into consideration so that everyone can benefit (…)" (the Vatican).

Hey, remember when a facial recognition software "could tell" your sexual orientation? In Russia?

Rights

"AI systems should respect human rights, diversity, and the autonomy of individuals" (Australia); "When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system" (Australia); "AI-based solutions must respect human autonomy and control" (Norway); the "consistency of rights and responsibilities" principle (China); "AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights" (EU); "AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards (…) to ensure a fair and just society" (OECD).

Hey, remember when Facebook labeled 65,000 Russian users as "interested in treason"?

A 5-point analysis of these principles:

1. Only the EU, Norway and Australia deal with all 7 principles; much can be said from what has been omitted by certain countries.

2. This lack of consensus is also worrying, because an entity deciding between several international guidelines, its home country's national policy, and recommendations from companies and nonprofits might end up doing nothing.

3. No list of principles ventures outside of these 7 points, and they rarely stray far from one another. This highlights a very real risk of groupthink (something that would be beneficial to the private sector). For example, nowhere is the right to self-determination mentioned, when A.I could easily be used to nudge people one way or another (say, during an election).

4. Red lines are shamefully absent: no country has forbidden itself from certain A.I uses, and none of the principles are legally binding. FYI, strong regulation looks like this.

5. Technical definitions are entirely absent from the discussion, as are any relevant KPIs which could measure these principles. Who cares if some things are currently technically out of reach? Claiming so means misunderstanding the very definition of strategy (also, threaten to fine companies and they'll find technical solutions pretty darn quickly).

The lack of ethical guidelines is not clear at first, and neither is their necessity, until we ask: what happens if one principle goes against another? Are they ranked? Are there orders of importance? What happens if foregoing privacy rights is beneficial to society? When we start dealing with multiple, often competing objectives, or try to account for intangibles like "freedom" and "well-being", a satisfactory mathematical solution doesn't exist. This is where a clear ethical philosophy would be useful: if state consequentialism is prioritised (as is generally the case in China), this at least gives us a clue as to what will be prioritised. Asimov's three laws of robotics were pretty great at this.

Lesson 3: governments go a step further than companies in setting relevant principles. However, they still lack the courage of their principles, as well as the technical know-how to make these principles enforceable.

Ethics is Easy, But Courage Isn't

Now that we've established the basics of what ethics has to offer (not a whole lot at face value), and that we've analysed various attempts by companies and governments alike, below are a few recommendations that base themselves not only on ethics, but also on courage with regards to the BIG issues (war, politics, autonomous cars, justice…). I mention courage because this is what is missing in the current A.I discourse. The principles below have probably been thought of before, but were likely dismissed because of what they entail (loss of competitiveness, strategic advantage, cool guy points…). I risk nothing by bringing them up, because I do not wield any real power in this conversation; I might not hold the same discourse were I representing a people or a company.

Principle of Rationality

Sartre famously wrote that "Hell is other people". This is particularly true when it comes to A.I; not because people are forcing algorithms to be bad, but because our actions may create a world within which bad behaviours have been enshrined in algorithms, forcing everyone to adopt said behaviours or suffer the consequences, e.g. a woman removing gendered words from her CV to get past a hiring algorithm (you may need a primer on machine learning if this confuses you). Tocqueville referred to this as the Tyranny of the Majority: a decision "which bases its claim to rule upon numbers, not upon rightness or excellence". Under the Principle of Rationality, key deontological golden rules would be enforced within all A.I companies through public consultation and technical consulting, ensuring that even if people lose their minds, algorithms enshrining that madness are not built. May I recommend starting with this little-known piece of history?
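What could enforcing such a golden rule look like in code? A minimal sketch: the CV scorer below is an invented, deliberately leaky stand-in, and the guardrail flags it because its output moves when only gendered words change.

```python
# A sketch of a "golden rule" guardrail: a CV scorer's output should not
# move when only gendered words change. The scorer is a hypothetical,
# deliberately leaky stand-in; real systems would be audited the same way.

SWAPS = {"she": "he", "her": "his", "women's": "men's"}  # one-way, for brevity

def swap_gender(text):
    return " ".join(SWAPS.get(w, w) for w in text.lower().split())

def score_cv(text):
    penalty = text.lower().count("women's")  # the leak: penalises one token
    return 10 - penalty

cv = "captain of the women's chess team she led her club to victory"
original, swapped = score_cv(cv), score_cv(swap_gender(cv))
if original != swapped:
    print(f"Blocked: gendered wording moved the score ({original} -> {swapped})")
```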
Principle of Ranking

Let's assume that the principle above is applied worldwide (ha!). How can companies deal with competing fundamental rights when creating an algorithm? For example, can we forego articles 9 and 12 to better enforce article 5? Can we produce an A.I that would scour communication channels in order to find potential criminal activity? These questions are the very reason why we need an ethical stand, one which would help develop a stable ranking of values, ethics and rights, wherein some would stand above others.

Take the infamous Trolley Problem, for example, and apply it to autonomous cars. Given the choice, should an autonomous car prioritise saving two pedestrians over a passenger? What if the passenger is a head of state? What if the pedestrians are criminals? Choosing one school of thought, as hard as this may be, would help create algorithms in line with our beliefs (team Deontology FTW). The sketch after the next section shows what an explicit ranking could look like.

Principle of Ambivalence

The above example is not random: the largest study on moral preference ever was started in 2014, encouraging users all over the world to respond to a number of variations of the trolley problem. The results, though expected, are clear: different cultures believe in different things when it comes to ethics. Japan and China, for example, are less likely to harm the elderly. Poorer countries are more tolerant towards law-benders. Individualistic countries generally prefer to spare more lives.

Ethics is dynamic, but coding is static. This is why no single algorithm should ever be created to make decisions for more than one population. The way I see it, at least three sets based on different worldviews should be made: West, East and South. Put simply, if I get into a Chinese autonomous car, I'd like to be able to choose a Western standard in case of an accident.
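Here is a sketch combining the last two principles: each region publishes an explicit, ordered list of values, and an autonomous system settles conflicts lexicographically. The orderings below are invented for illustration, not taken from the study above.

```python
# Ranking + Ambivalence in one sketch: each region declares an ordered
# list of values; conflicts are settled by the highest-ranked value an
# option satisfies. Orderings and options are invented.

PRIORITIES = {
    "west": ["spare_more_lives", "protect_young", "protect_passengers"],
    "east": ["protect_elderly", "spare_more_lives", "protect_passengers"],
}

def rank(option, region):
    """Lower is better: index of the first priority the option satisfies."""
    order = PRIORITIES[region]
    return min((order.index(v) for v in option["satisfies"] if v in order),
               default=len(order))

options = [
    {"name": "swerve",   "satisfies": ["spare_more_lives"]},
    {"name": "straight", "satisfies": ["protect_elderly", "protect_passengers"]},
]

for region in PRIORITIES:
    best = min(options, key=lambda o: rank(o, region))
    print(region, "->", best["name"])  # west -> swerve, east -> straight
```

The point is not these particular orderings; it's that the ranking is explicit, published, and therefore contestable, which is exactly what the current lists of principles fail to be.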
Principle of Accountability

This principle may appear blasphemous to many free-market proponents, raised as they are in countries where tobacco groups do not cause cancer, distilleries do not cause alcoholism, guns do not cause school shootings and drug companies do not cause overdoses. Silicon Valley has understood this, and its go-to excuse when its products cause harm (unemployment, bias, deaths…) is to say that its technologies are value-neutral, and that it is powerless to influence the nature of their implementation.

That's just an easy way out. Algorithms behaving unexpectedly are now a fact of life, and just as car makers must now be aware of emissions and European companies must protect their customers' data, tech executives (as opposed to scientists, whose very raison d'être is pushing barriers, and so it should be) must closely track an algorithm's behavior as it changes over time and contexts and, when needed, mitigate malicious behavior, lest they face a hefty fine or prison time.

Can't handle it? Don't green-light it. You green-lit a project which was ultimately biased against women? Pay the fine. You allowed an autonomous car on the road and a pedestrian died? Go to prison. Your A.I committed a war crime? To The Hague with you. If your signature is at the bottom of the page, you are accountable to the law.

Principle of Net Positive

Is A.I really worth it? Currently, even the simplest algorithm is unethical by its very nature: mining and smelting, logistics, black box upon black box of trade secrets, data-center resources, modern slavery, e-waste rubbish mountains in Ghana… none of this is sustainable, though the UK, Australia and the EU all have the environment named in their grand principles. Is it really worth it for minor pleasures and simplifications? Once, just once, it'd be good to have a bit of sanity in the discussion. And by sanity I mean being able to see the whole damn supply chain, or your algorithm isn't entering production. Environmental issues cannot take the back seat any longer, even when discussing something as seemingly innocent as the digital world.

Conclusion

In the face of a limited technology and a plethora of potential uses, the benefits of A.I clearly outweigh the risks. This is, however, no reason not to have a conversation about its implementation before the robots start doing the talking for us (yes, this is a hyperbole, sue me).

A.I is not something to be trusted or not trusted. It is merely a man-made tool which is "fed" data in order to automate certain tasks, at scale. Do you trust your washing machine? Your calculator (yeah, me neither; math is black magic)? It is all too easy to assume the agency of something that has none. A.I cannot be good or evil. Humans are good or evil (and so often simultaneously both). At the end of the day, A.I merely holds a dark mirror to society, its triumphs and its inequalities. This, above all, is uncomfortable. It's uncomfortable because we keep finding out that we're the a-holes. Let me say it loudly for the people at the back: A.I Ethics does not exist.

Algorithms serve very specific purposes. They cannot stray from those purposes. What matters is whether or not a company decides that a given purpose is worthy of being automated within a black box. As such, the question of A.I ethics should be rephrased as "do we trust (insert company's name here)'s managers to have our best interests at heart?" and, if yes, "do we trust the company's programmers to implement that vision flawlessly while taking into account potential data flaws?" That's trickier, isn't it? But more realistic. Let me say it loudly for the people at the back: A.I Ethics does not exist.

The vague checklists and principles, the powerless ethics officers and the toothless advisory boards are there to save face, avoid change, and evade liability. If you come away with one lesson from this article, it is this. Let me say it loudly for the people at the back: A.I Ethics does not exist.

This article was originally written for Honeypot.io, Europe's developer-focused job platform.