Living in the world of AI - The Human Transformation

by Raj Subramanian, September 4th, 2019
Too Long; Didn't Read

Data is the truth behind everything from finding a cure for cancer to studying shifting weather patterns, yet despite this promise there is a global sense of mistrust. Companies have started using AI to comb through data, build new products, and make critical decisions about humans, and a toxic byproduct of this trend is its influence on the social, cultural, and ethical factors surrounding those decisions.

Today, if you stop and ask anyone working in a technology company, “What is the one thing that would help you change the world or grow faster than anyone else in your field?” the answer would be data. Yes, data is everything, because data can essentially change, cure, fix, and support just about any problem. Data is the truth behind everything from finding a cure for cancer to studying shifting weather patterns.

However, despite this promise, there is a global sense of mistrust. Outside of the tech, business, and scientific worlds, people are asking, “What is data? How is my data being used? And how is this all relevant to my future and that of my community?”

We no longer live in a world where our personal information is private. From a family in a remote village in Africa to those who live in more connected cities, every person’s information is being used to make important decisions that affect human life in some shape or form.

To comb through trillions of data sets that currently exist and find patterns that would otherwise be difficult for the human mind to recognize, companies have started using Artificial Intelligence (AI).

AI is more than a buzzword: it is real, and it is fundamental to thousands of software and hardware products that consumers use on a daily basis.

What is AI?

AI is the field of study that deals with building machines that can work, think, and react like human beings. Machine Learning (ML) and Deep Learning (DL) are subsets of AI and are often used synonymously these days. A Deep Neural Network is a byproduct of research in AI and is modeled on the network of neurons in the human body. A human brain contains as many neurons as there are stars in the Milky Way, in the ballpark of 100 billion! Each neuron is connected to thousands of others via junctions called synapses. The strength of the connections determines how information is processed in the brain.

The same concept has been transferred to the field of computer science. We give different datasets to an AI model, which is basically a mathematical function with some parameters (a good example is the weights in a neural network) and hyperparameters (a good example is the learning rate for training a neural network). Imitating the biological process, the inputs of all the artificial neurons, along with their weights, are passed through what is called an activation function to determine the next state. The multiple states that the data passes through help in training the AI model and figuring out different patterns. This in turn helps to build new products and make critical decisions related to humans.
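
To make that concrete, here is a minimal sketch of a single artificial neuron in Python. Every number is illustrative, not taken from any real model: the weights and bias stand in for learned parameters, and the learning rate is shown only as an example of a hyperparameter.

```python
import numpy as np

def sigmoid(x):
    """Activation function: squashes a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Toy values -- illustrative only, not from a real trained model.
inputs = np.array([0.5, -1.2, 3.0])    # signals arriving at one artificial neuron
weights = np.array([0.4, 0.1, -0.6])   # parameters, adjusted during training
bias = 0.05                            # another learned parameter
learning_rate = 0.01                   # hyperparameter: how far weights move per
                                       # training update (unused in this forward pass)

# The neuron's "next state": the weighted sum of its inputs,
# passed through the activation function.
weighted_sum = np.dot(inputs, weights) + bias
output = sigmoid(weighted_sum)
print(f"activation: {output:.4f}")     # a value between 0 and 1
```

Stack thousands of these neurons into layers and you have a deep neural network; training repeatedly nudges the weights so the outputs match the patterns in the data.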

Current State of AI

The AI we currently have is narrow AI, or weak AI: it does one specific task better than humans but fails at all other tasks, because it lacks consciousness, human emotion, and general intelligence.

For example, we have AI-based virtual assistants like Google Home and Alexa. They are good at one specific set of tasks: you ask them to do something based on information on the internet, and they respond with either “Here is the score for the Lakers game today” or “Sorry, I cannot understand.” But the device cannot have a real conversation with you, debate with you intelligently, or console you when you are sad. These involve human emotions, which are difficult to implement in AI. The same holds true for those famous AI-based computers like IBM’s Deep Blue (which beat Garry Kasparov in chess) and Watson (which triumphed over two Jeopardy champions); AlphaGo, which beat the world’s champion Go player; and any AI-based machine that you have heard about in the media: they are fast and clever, but not intuitive, emotional, or culturally sensitive.

A majority of the big companies, like Google, Facebook, Amazon, and Apple, have started using AI in all aspects of their software and hardware. From categorizing top trending posts on Facebook with AI models and algorithms, to AI chips in phones that process images in real time on the device instead of sending requests to servers -- AI is everywhere, and we are just getting started.

Challenges of AI

While the world embraces the new products and cutting-edge discoveries that AI makes possible in science and technology, a toxic byproduct of this trend is its influence on social, cultural, and ethical factors related to human beings. Do you recall such unsettling news as Google Photos classifying African American people as gorillas; Microsoft Tay, an AI bot that went rogue in less than a day; or Beauty.AI judging almost exclusively white people to be beautiful?

There are various domains in which AI models have been used for several years now, with drastic effects on human beings that continue to this day. We are just not aware of it. Did you know AI models are used in police stations, prisons, universities, scheduling systems, online personality tests, and more, to make decisions that may drastically change people’s lives for the worse? There are striking examples of these models in the book “Weapons of Math Destruction.” For example, in prisons, inmates are required to fill out a questionnaire with such questions as “When was your first involvement with the police?” or “Do any of your family members or relatives have a history of criminal charges?”

Using the answers to these questions, an AI-based risk-assessment model generates a risk score. That score is then used to determine an inmate’s sentence and chances of parole.

These questions don’t take into consideration the fact that a person who grew up in an economically depressed neighborhood has a higher probability of having encounters with the police or having acquaintances or relatives with a criminal history. The situation would be totally different for a person who lived in an upscale neighborhood all their life, with little exposure to crime or violence. The AI models that generate the risk scores do not take these factors into account, and as a result an inmate could receive an undeservedly longer sentence with a lower chance of parole. Most people affected by this are of African American or Hispanic descent.
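
To see how this plays out, consider a deliberately simplified, made-up scoring function. It is not any real assessment tool; the weights are invented to show how answers that proxy for neighborhood and policing intensity, rather than personal conduct, can dominate the final score.

```python
# Illustrative only: a toy linear "risk score" with invented weights,
# not any real assessment system.

def risk_score(first_police_contact_age, relatives_with_charges, prior_convictions):
    score = 0.0
    score += max(0, 30 - first_police_contact_age) * 0.5  # earlier contact -> higher score
    score += relatives_with_charges * 2.0                 # family history, not conduct
    score += prior_convictions * 3.0                      # the only conduct-based input
    return score

# Identical personal conduct (one prior conviction), different neighborhoods:
heavily_policed = risk_score(first_police_contact_age=14,
                             relatives_with_charges=3, prior_convictions=1)
lightly_policed = risk_score(first_police_contact_age=28,
                             relatives_with_charges=0, prior_convictions=1)

print(heavily_policed)  # 17.0 -- driven mostly by the proxy features
print(lightly_policed)  # 4.0
```

Two people with identical conduct end up with very different scores, purely because of where and how they grew up.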

AI models used by recruitment firms have been shown to suffer from racial bias as well. In an experiment conducted jointly by the University of Chicago and MIT, researchers responded to help-wanted ads in Boston and Chicago newspapers with fictitious resumes under names that sounded very African American or very white. Resumes with white-sounding names received 50% more callbacks. Screening models trained on hiring data like this learn to make decisions based on applicants’ names and locations, reproducing the same bias at scale.

This is the world we live in: AI models are being used to make decisions about humans, rather than humans using AI models as an aid to make informed decisions. We risk becoming slaves to these algorithms, whether we know it or not.

Impact of AI on Software Development and Testing

A recent study by Gartner shows that by 2020, AI will be pervasive in almost all software products and services. The major highlight of the study was how our skills as engineers will have to adapt. The current roles in companies are going to change significantly, and we need to be prepared for it.

The inner workings of AI are a black box -- we do not control or fully understand how the algorithm forms relationships and makes decisions -- we just provide different training datasets and monitor the learning progress. We are trying to predict future values based on learning from past examples, or trying to discover different patterns in datasets.

So, the behavior of the model boils down to two factors: 1) How diversified is the dataset? 2) How often do we evaluate what the AI model has learned and observe how it makes decisions in the context of race, sex, religion, and culture?

This becomes all the more important when organizations use AI to build applications that will be used by people all over the world. That includes apps for sharing photos, messaging, and social media, and anything else that could have an impact on the lives of people of various races, sexes, religions, and cultures.

The above being the case, one of the major roles of engineers in the future will be to ensure the AI models used in applications have more diversified datasets. They will need to be aware of how their training dataset could influence the AI model’s decisions and, in turn, cause harm to human beings in a privacy, security, ethical, social, and cultural context. In fact, I would suggest requiring the entire development team to pledge an oath of commitment to take this issue seriously. Next, the validation dataset used to evaluate the model’s learning needs to have a good mix of diversity ingrained into it, to measure how the model is making decisions. Finally, when we decide the AI model is ready for production, the test dataset needs to include data the model has never seen before. This simulates the real-life situation of an AI model making decisions when people from a different race, culture, or region interact with it. In all these phases, the role of the engineer is going to be crucial. Whether we realize it or not, each one of us is going to make a difference in people’s lives.
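
As a rough illustration of that evaluation step, here is a minimal sketch of “slice” testing: scoring the model separately for each demographic group rather than relying on one aggregate number. The labels, predictions, and group tags below are hypothetical stand-ins for a real validation set.

```python
import numpy as np

# Hypothetical validation labels, model predictions, and demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# One aggregate number can hide group-level failures.
overall = np.mean(y_true == y_pred)
print(f"overall accuracy: {overall:.2f}")   # 0.70

# Per-group accuracy: a large gap here is a red flag even when the
# overall number looks healthy.
for g in np.unique(group):
    mask = group == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    print(f"group {g}: accuracy {acc:.2f} (n={mask.sum()})")
# group A: accuracy 0.80, group B: accuracy 0.60
```

The overall accuracy of 0.70 hides the fact that the model performs noticeably worse for group B; gaps like this are exactly what a diversified validation set is meant to expose.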

Software testing is another important part of the software development lifecycle (SDLC) that will be influenced by AI. First of all, with the help of AI, we will be able to connect our production apps to the testing cycle. This means we can create tests based on actual flows the user performs in production. The AI can also observe repeated steps and cluster them into reusable components in your tests -- for example, login and logout scenarios (a rough sketch of this clustering idea appears at the end of this section). So now we have scenarios created from real production data instead of our assumptions about what the user will do, and we get good test coverage based on real data.

Secondly, AI brings self-healing mechanisms that can proactively find issues in our application and fix them, instead of us finding them late in the SDLC. This is one of the biggest advantages of using AI in our software development and testing pipeline.

Finally, AI is going to help bridge the gap between technical and non-technical people. Frameworks, tools, and utilities that are currently too complex for non-technical people to use are going to get much easier with AI, which will abstract the complexity and give the user a simpler interface for performing different actions. For example, Microsoft recently came out with its no-code AI platform, Microsoft AI Builder, part of the Microsoft Power Platform, which lets non-technical users create complex business workflows without a technical degree. The future is heading towards more complex, smarter, and easier solutions for customers as well as the development team. Developers will be able to build better software faster to meet growing customer demands, with the help of AI algorithms for deep learning and natural language processing.
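
As promised above, here is a hedged sketch of the flow-clustering idea. It mines hypothetical production event logs for action sequences that repeat across sessions; frequent sequences (like the login steps here) become candidates for reusable test components. Real tools are far more sophisticated, and the event names are invented for illustration.

```python
from collections import Counter

# Hypothetical production click-streams. A real pipeline would pull these
# from analytics; here they are hard-coded for illustration.
sessions = [
    ["open_app", "enter_email", "enter_password", "tap_login", "view_feed", "logout"],
    ["open_app", "enter_email", "enter_password", "tap_login", "search", "logout"],
    ["open_app", "enter_email", "enter_password", "tap_login", "view_profile"],
]

def frequent_step_sequences(sessions, length=4, min_support=2):
    """Count fixed-length action sequences across sessions; frequent ones
    are candidates for reusable test components (e.g. a shared login step)."""
    counts = Counter()
    for s in sessions:
        for i in range(len(s) - length + 1):
            counts[tuple(s[i:i + length])] += 1
    return [seq for seq, n in counts.items() if n >= min_support]

for component in frequent_step_sequences(sessions):
    print(" -> ".join(component))
# open_app -> enter_email -> enter_password -> tap_login
```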

Will AI take over our jobs?

With all these discussions about AI and automation, a common question people have is:

How will AI affect our jobs?

As already pointed out in this article, the current state of AI is “weak AI,” which means it is only good at the one job it is trained for and cannot think and react like a human being, especially when there are multiple factors and tasks at hand. So even as we work with AI-based systems, the need for humans is always going to be important and valuable. We need humans to train the AI, evaluate it, and ensure it meets customer expectations for privacy, security, ease of use, and the other factors needed to stay competitive in today’s market. Also, Gartner found in a recent study that about 2.3 million jobs would be created by 2020, while only 1.8 million jobs would be eliminated. So, contrary to popular belief, the outlook is not all “doom and gloom”; being a real human does have its advantages.

That said, we should still take the necessary steps to sharpen our skill sets and stay open to learning new technologies. Being curious, creative, and thinking critically is the essence of our DNA; it is what differentiates us from algorithms and machines. So we need to keep up with this fast-paced world where new technologies spring up every day. If we do not, we will become obsolete, with or without the coming of AI.

What does the future of AI look like?

Automation is only going to make our lives easier. We can use AI to automate mundane, repetitive tasks, and to comb through thousands of datasets to quickly find patterns that would otherwise be hard and time-consuming to find manually.

The ultimate goal of researchers is to figure out whether we can achieve Artificial General Intelligence (AGI).

That is, building machines that can match or surpass human intelligence -- in other words, creating strong AI. Some people say it will take about 300 years to reach this state, while others think we can achieve AGI by 2055. No one knows the exact answer. Meanwhile, we believe groups are going to focus on solving the ethical, social, and cultural biases of AI models. Deeply inclusive AI is going to be the next big thing. Imagine a world where we can give cultural context to AI: Google Home could automatically wish you a “Merry Christmas,” knowing that you celebrate the holiday in your culture. Currently this is not possible, because AI has no cultural context. Or suppose you have a product used by customers in France; telling a story that relates their culture to your product is going to bring them closer to you.

It will also be interesting to learn just how AI makes decisions. Recent studies of the “black box” problem suggest this can be done by teaching AI to justify its reasoning and by detecting biases in AI-based systems in advance. In summary, the future looks bright, and innovation is at its best. There are still more advancements to look forward to that will have a significant impact on human life.
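
One simple, widely used way to probe a black box is permutation importance: shuffle one input feature and watch how much the model’s accuracy drops. Below is a minimal sketch, with a toy stand-in for the opaque model and invented feature names.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # hypothetical features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # ground truth ignores feature 2

def black_box_predict(X):
    # Stand-in for an opaque trained model that latched onto feature 0.
    return (X[:, 0] > 0).astype(int)

baseline = np.mean(black_box_predict(X) == y)
for i, name in enumerate(["income", "age", "zip_code_proxy"]):
    X_shuffled = X.copy()
    # Shuffle one column to break its relationship with the target.
    X_shuffled[:, i] = rng.permutation(X_shuffled[:, i])
    drop = baseline - np.mean(black_box_predict(X_shuffled) == y)
    print(f"{name}: accuracy drop {drop:.3f}")
```

A large accuracy drop for a sensitive attribute, or for a proxy like a zip code, is an early warning that the model’s decisions depend on it.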