As we enter the 2020s, it is interesting to look back at how life has changed over the last decade. Compared to your life in 2010, most of you reading this probably use a lot more social media, watch more streaming video, do more shopping online and, in general, are “more digital”. Of course, this is a result of continued developments in connectivity (4G becoming prominent, with 5G on the horizon), the growing capability of mobile devices, and the quiet, transparent adoption of machine learning, a form of artificial intelligence, in the services that you consume.
When you shop online, for example, you are getting AI-powered recommendations that make your shopping experience more pleasant and relevant. And over the last decade, many of you will have interacted with a “chatbot”, a form of AI, which hopefully answered a query of yours or helped you in some way. The difference a decade makes is that the chatbot doesn’t seem all that amazing anymore...
The term “Artificial Intelligence” was coined in the 1950s by John McCarthy, a now famous computer scientist. When you think of artificial intelligence, you may think of HAL 9000, or the Terminator, or some other representation of it from popular culture. It wouldn’t be your fault if you did: ever since the concept came about, it has been an easy fit for sci-fi movies, especially ones that made AI the bad guy. If John McCarthy and his colleagues had simply termed the area of study “automation”, or something equally less imaginative, we probably wouldn’t have this association today.
An AI like HAL 9000, the sentient computer from the movie 2001, would be considered an “Artificial General Intelligence” (AGI): one that has general knowledge across many topics, much like a human, and can bring all of that together to almost “think”. This is opposed to a “Narrow AI”, which has a narrow specialization - an example would be building a regression model to predict the probability of diabetes in a patient, given a few other key health and descriptive indicators. Technically you could consider this automated mathematics and statistics, as the algorithms have been known for more than 100 years, but progress is accelerating now because data and computing power are more available and affordable.
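To make the idea of narrow AI concrete, here is a minimal sketch of such a model in Python with scikit-learn. The data is synthetic and the feature names (age, BMI, blood glucose) are illustrative assumptions, not a real clinical dataset.

# A minimal sketch of a "narrow AI": logistic regression estimating the
# probability of diabetes from a few health indicators (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
age = rng.integers(20, 80, n)
bmi = rng.normal(27, 5, n)
glucose = rng.normal(100, 20, n)
X = np.column_stack([age, bmi, glucose])

# Synthetic label: risk rises with age, BMI and glucose (purely illustrative)
risk = 0.03 * age + 0.08 * bmi + 0.02 * glucose - 6
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probability of diabetes for one hypothetical patient
print(model.predict_proba([[55, 31.0, 140.0]])[0, 1])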
The AI of today is nowhere close to being an “AGI” though. Instead, the AI projects that we see being worked on are most likely “Narrow AI” Machine Learning projects. That means that we’re safe from any Terminator (for now). However, if we don’t consider ethics as part of the AI creation and development process, even for Narrow AI, we could still unleash tremendous harm on society, even sometimes without realising it.
The broad spectrum of ethical impact from the actions of artificial intelligence ranges from a simple ML regression model declining to recommend loans to a particular demographic, all the way to an AGI being given the power to do tremendous physical damage, either intentionally or unintentionally.
Adding further pressure to the ethics debate is the pace of AI adoption: over the last decade, AI went very quickly from being spoken of to being real. Consider the advancements in conversational speech, image recognition and deep learning that would have seemed like science fiction in 2010.
In late 2019, Microsoft data scientist Buck Woody visited South Africa to do a few events here. While doing a video interview with him, I asked Buck where AI adoption was going next. His answer was: “it will become transparent”, meaning that very soon it will feel normal to have machine learning everywhere, with all processes in business or our personal lives having predictive capability and optimization, not just automation. This is very similar to what happened with microcontrollers and software in the late 20th century. We came to expect everything from our cars to our appliances to have onboard computers, or at least logic circuits, with the improved experience of the device being due to decisions made in some form of software. The improvement very quickly became the norm. Expect a similar on-ramp of AI, as more pieces of our lives become “AI powered”.
Given the expected adoption, experts are now saying that we should build AI more cautiously, and that it is extremely important we build with ethics in mind, from day one.
When we had the microcomputer revolution in the 1970s, was ethics considered? Perhaps not: there was nothing to link ethics to the processing of data, even though the software systems built on top of those computers needed to consider ethics. When we had the internet revolution of the 1990s, and the mobile phone revolution of the 2000s, the question wasn’t explicitly being asked either. But perhaps it should have been, given the issues that we now see around privacy and social media influence on major events.
The public trust in AI, if ever lost, will be very difficult to regain. With this in mind, Satya Nadella, CEO of Microsoft, proposed five principles to guide AI design in June 2016.
So what are the dangers of AI that we need to protect society against? Despite what you may have imagined, let’s not focus on “doomsday” scenarios with AGIs running amok - firstly, we are nowhere near that possibility in 2020. Secondly, and more importantly, a tremendous amount of harm can be done to society just with the incorrect implementation of “narrow” AIs, as explored in the following sections.
Bias
The most prominent danger of AI, and the one highlighted most often, is bias: the danger that AI systems may not treat everyone in a fair and balanced manner. For example, when AI systems provide guidance on medical treatment, loan applications or employment, they should make the same recommendations for everyone with similar symptoms, financial circumstances or professional qualifications.
Theoretically, since AI systems take in data and look for actual patterns and facts, their outputs should be objective. In practice, however, today’s AI systems are designed by humans, and there are two ways bias can creep in: through the training data itself, which may reflect existing imbalances in society, and through the choices made by the humans who design and build the system.
Let’s use the example of a system designed to help HR recruit software developers. In the current world, most software developers happen to be male, even though we would like to move to a more equal distribution. An AI system trained on current data may therefore learn a bias towards male candidates and reflect it in its recommendations.
Ethical AI requires that anyone developing AI systems be aware of the issues above and take steps to address them: ensuring that accurate, neutral data is used as input, and applying methods to eliminate other bias in the creation process. Techniques like peer reviews and statistical data assessments will be needed. Be aware that this isn’t the end of it, though. It has been found in multiple cases that some form of bias can creep back in over time, so the system will have to be continuously monitored after deployment. This is where “MLOps” comes in, an emerging field focused on the lifecycle of model development and usage, and in particular on machine learning model deployment. MLOps will have to include bias detection as part of model degradation analysis to deliver ethical AI.
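As an illustration of what a simple statistical data assessment might look like before training, the sketch below checks how a sensitive attribute is represented in historical hiring data and whether recorded outcomes already differ between groups. The tiny inline dataset and column names are illustrative assumptions.

# A minimal sketch of a pre-training data assessment with pandas
# (toy inline data; a real assessment would use the full training set).
import pandas as pd

hist = pd.DataFrame({
    "gender": ["male", "male", "female", "male", "female", "male", "male", "female"],
    "hired":  [1,      1,      0,        1,      0,        0,      1,      1],
})

# Representation: how balanced is the training data itself?
print(hist["gender"].value_counts(normalize=True))

# Outcome rates: does the historical hiring rate already differ between groups?
print(hist.groupby("gender")["hired"].mean())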
Safety and Reliability
While the majority of AI systems today involve analysing data and making a prediction or identifying a trend, this will change rapidly in the early years of the decade. As the outputs of AI are increasingly used to act on the physical world, a focus on safety and reliability becomes critical. This is similar to the journey the software world went through a few decades ago.
A clear example of this is autonomous driving. In recent years, one car manufacturer has offered an AI-powered autonomous driving option on its vehicles, but a spate of accidents brought the technology into question. This is where the ethical design of AI is non-negotiable: even one more death in an accident (if the system did have a flaw) is unacceptable.
From an ethics perspective the questions are: was the system ready for release? Should the system be pulled from production, even if the accident rate is low, given the risk? Sometimes the ethical questions only arise after design, engineering and release into production.
Privacy
If there is one hot topic in the tech world at the moment, it is privacy. In recent years, laws have been put in place globally to ensure that the personal information of individuals is protected and not collected unscrupulously. While this is welcome, it is something of a double-edged sword for the world of AI: as you know, AI needs data, and lots of it, to be as accurate and useful as possible.
A particular quandary arises if society needs to collect private information from individuals in order to deliver an AI system that serves society in some form: where should the line be drawn? This is exactly what happened early in 2020 with the COVID-19 pandemic. In order to measure whether social distancing, critical to stopping the spread of the virus, was indeed taking place at an acceptable rate, various social distancing applications entered the market. Of course, these would collect a lot of information from an individual’s cellphone, including precise location information and history. Many chose not to install these applications, also known as contact-tracing apps, while governments, desperate for this information to see whether social distancing measures were in fact working, were conflicted about them.
If the issues around data privacy aren’t sorted out in the current timeframe, the danger is that public trust will be lost, and both individuals and organisations may not be inclined to share the data that is needed to build the useful AI systems of the future.
Even if there is future agreement on the collection of data for use in AI, ethical AI will demand frameworks governing how that data is used, as well as transparency about that use. Companies like Microsoft are already using techniques to protect the privacy of individuals whose data is being used, such as differential privacy, homomorphic encryption, and many others.
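As a rough illustration of one of these techniques, the sketch below applies the Laplace mechanism, the basic building block of differential privacy: calibrated noise is added to an aggregate so it can be shared without revealing much about any single individual. The data and the privacy budget chosen here are illustrative assumptions.

# A minimal sketch of differential privacy via the Laplace mechanism (numpy).
import numpy as np

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, 10_000)        # stand-in for private data

true_mean = ages.mean()

# Noise is scaled to sensitivity / epsilon; smaller epsilon = more privacy
epsilon = 0.5
sensitivity = (90 - 18) / len(ages)        # max change in the mean from one record
noisy_mean = true_mean + rng.laplace(scale=sensitivity / epsilon)

print(true_mean, noisy_mean)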
Transparency
As AI systems become more commonplace and make increasingly important decisions that impact society, it is critical that we understand how those decisions are made. The current consensus is that an AI system should provide clarity on all aspects of its creation, from the data used in training through to the algorithms themselves.
To simplify, the question that should always be asked is: “Do I really understand why this particular model is predicting the way it is?” Even experts can be fooled by a model with an inbuilt flaw.
There have been developments in this space, however. One is the concept of “model interpretability”, now supported by frameworks like InterpretML (which is referenced by most ML platforms, including Azure ML). Model interpretability allows data scientists to explain their models to stakeholders, confirm regulatory compliance, and carry out further fine-tuning and debugging.
Figure 1. Model Interpretability in Azure ML
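To give a sense of what this looks like in practice, here is a minimal sketch using the open-source InterpretML package with one of its “glassbox” models. The dataset is generated with scikit-learn purely for illustration.

# A minimal sketch of model interpretability with InterpretML
# (synthetic data generated purely for illustration).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "glassbox" model whose predictions can be explained directly
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features drive the model overall
show(ebm.explain_global())

# Local explanation: why individual predictions came out the way they did
show(ebm.explain_local(X_test[:5], y_test[:5]))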
There is also the Fairlearn toolkit, now referenced by tools like Azure ML, which allows you to determine the overall fairness of a model. This is quite useful. To revisit the earlier examples, a model for granting loans or for hiring could be assessed for fairness across gender or other categories, as in the sketch below. This isn’t really optional anymore: in many industries, regulators are now asking to see proof that these best practices were followed in building such models.
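Below is a minimal sketch of such a check using Fairlearn’s metrics on a toy hiring classifier. The synthetic data and the “gender” attribute are illustrative assumptions.

# A minimal sketch of a fairness assessment with Fairlearn (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "test_score": rng.normal(70, 10, n),
})
gender = rng.choice(["female", "male"], n)
y = (X["test_score"] + rng.normal(0, 5, n) > 70).astype(int)   # hire / no hire

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Break accuracy and selection rate down by the sensitive attribute
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=y_pred, sensitive_features=gender,
)
print(mf.by_group)

# Headline disparity: difference in selection rates between the groups
print(demographic_parity_difference(y, y_pred, sensitive_features=gender))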
It is also important that we show accountability - the people who design and deploy AI systems must be accountable for how their systems operate. While AI vendors can provide guidance on model transparency, accountability needs to come from within - organizations will need to create internal mechanisms to ensure this accountability.
An example of how vendors are trying to bring all of this together is the Responsible ML initiative. The goal here is to empower data scientists and developers to understand ML models, protect people and their data, and control the end-to-end ML process.
Figure 2. Responsible ML
It brings together technologies like InterpretML, mentioned above, into a framework that can help organizations.
Lastly, we have to consider ethics not only in model development, but also in model usage. Those who use AI systems should be transparent about when, why, and how they choose to deploy these systems. Consider the impact of AI on the jobs market - it is clear at this stage that over the next decade increased automation, aided by artificial intelligence technologies, will impact the job market. We are already starting to see visible examples of this: in 2019 a famous restaurant chain implemented a system where a customer could walk into a store and place an order at an automated kiosk. Technologies like speech recognition make such scenarios possible. The ethical question will be: should a company implement such AI-assisted automation at the cost of jobs, especially where the economic benefits aren’t clear?
Luckily, it is also expected that AI will create jobs. Some examples include data scientists and robotics engineers, and perhaps roles that we cannot imagine yet. In fact, AI will probably reduce the number of low-value, repetitive and, in many cases, dangerous tasks. This will give millions of workers the opportunity to do more productive and satisfying work, higher up the value chain, as long as governments and institutions invest in their workers’ education and training.
Summary
We live in exciting times, as this generation will be the first to witness AI playing a greater role in our daily lives. As mentioned, technologies like speech and face recognition were sci-fi ten years ago, and it’s exciting to imagine the things that seem sci-fi today but will be real and commonplace by 2030. Imagine working remotely via a HoloLens device, meeting with people all around the world, with your speech being translated instantly for people on the other side, while your AI assistant tracks the meeting progress in the background and sends out notes. This is just the tip of the iceberg, and yet this reality will be threatened if trust in AI is lost: issues like privacy and security need to be demonstrably sorted out before large-scale adoption like this happens.
There is still enough time for this to be achieved though, with enough cooperation between organizations and governments.
Going forward, AI will only become more complex. Just as with software, if the design patterns, standards, and methodologies that enable development aren’t laid out early on, we may see many failed projects, which would stall innovation. The risks of not considering ethics, however, will be more catastrophic. Everyone involved in AI in any way has a role to play in making sure that this exciting frontier is implemented with ethics in mind, so that we reap as many of the benefits as possible as a society.
Thavash Govender is the Data and AI Strategic Lead at Microsoft, South Africa.