An Invincible A.I.: Enhancing Self-Learning Processes

by Ria Cheruvu, September 4th, 2018

The challenging task of fabricating a pristine environment to nurture a growing, powerful learning entity capable of translating data into insights is one that cannot be avoided. It is necessary to realize that automation forces us to relinquish control and management over the overwhelming amounts of data that A.I. mines. The question this post addresses, namely how we can limit A.I.’s vulnerability to false and biased data, arises from the volatility of data, as definitions of bias and corruption shift with circumstance and environmental conditions.

We observe the same ramifications this question addresses in the upbringing of a human child: an entity that actively participates in its environment, plucking data from experiences and events to gradually mold its beliefs, judgements, and predictions. By training a child to believe 2 + 2 = 5 or that the Earth is flat, we can morph their perception and produce adverse results (e.g. believing stereotypes). However, the solution to determining right from wrong, and true from fake, stems from logical exploration of the context and ramifications.

As stated in the article “Is Artificial Intelligence Prejudiced?” (https://iq.intel.com/is-artificial-intelligence-prejudiced/), bias is implicitly added to results when A.I. is exposed to only a portion of the data. The solution to this vulnerability, as discussed in the article, is to make the A.I. explain its approach and conclusions and ask for more evidence. Therefore, I think the key to limiting A.I.’s vulnerability to false data is recognizing such data and deriving logical deductions and conclusions that process its implications. Imagine if an A.I. confronted with possibly biased data could, through enhanced computational power, look ahead to the consequences of acting on that false information and observe whether the resulting environment is consistent with our current world and ideals.
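
As a minimal sketch of what “explaining its approach and asking for more evidence” could look like in practice (not a method from the cited article; the feature names, weights, and suspect-feature list below are invented for illustration), a linear model’s score can be decomposed into per-feature contributions, and a prediction can be withheld when a dubious feature dominates:

```python
# Toy sketch: explain a linear model's score as per-feature contributions and
# request more evidence when a suspect feature dominates. Feature names, weights,
# and the suspect list are illustrative assumptions.

WEIGHTS = {"years_experience": 0.8, "referrals": 0.5, "zip_code_prestige": 1.2}
SUSPECT_FEATURES = {"zip_code_prestige"}  # features we do not trust as evidence

def explain(features: dict) -> dict:
    """Per-feature contribution to the score: weight * value."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

def decide(features: dict) -> str:
    contributions = explain(features)
    top_feature = max(contributions, key=contributions.get)
    if top_feature in SUSPECT_FEATURES:
        return f"withhold prediction: driven mainly by '{top_feature}', ask for more evidence"
    return f"score={sum(contributions.values()):.2f}, driven by '{top_feature}'"

print(decide({"years_experience": 3, "referrals": 2, "zip_code_prestige": 4}))
print(decide({"years_experience": 6, "referrals": 4, "zip_code_prestige": 1}))
```

The design choice here is simply that an explanation (which features drove the score) is produced alongside every prediction, so a human or a downstream check can ask for more evidence before acting on it.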

Exposing Artificial Intelligence to irrelevant, outdated, or biased data allows algorithms to take advantage of “noisy signals” and question the evidence. If we present a self-learning A.I. with a dataset of incorrect mathematical calculations, or of medical diagnoses incorrectly self-reported by individuals, then exploring the context and consequences of a plausibly false scenario through experimentation (similar to the learning patterns of a human) proves to be a powerful solution. For example, if individuals self-report that they have a certain disease, diagnosed from the appearance of certain symptoms, the A.I. should be able to examine the correlations between the disease and those symptoms, along with scenarios addressing the ramifications of the analysis and the factors that might influence it.
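
As a deliberately simplified sketch of that cross-check (the disease names, symptom sets, and the 0.5 threshold are invented assumptions, not real medical criteria), the A.I. could compare each self-reported diagnosis against the symptoms typically associated with it and flag reports that overlap poorly:

```python
# Toy sketch: flag self-reported diagnoses whose symptoms overlap poorly with the
# expected symptom set. Disease names, symptom lists, and the 0.5 threshold are
# invented for illustration.

EXPECTED_SYMPTOMS = {
    "flu": {"fever", "cough", "fatigue", "body_aches"},
    "migraine": {"headache", "nausea", "light_sensitivity"},
}

def plausibility(reported_disease: str, reported_symptoms: set) -> float:
    """Fraction of reported symptoms that match the expected pattern for the disease."""
    expected = EXPECTED_SYMPTOMS.get(reported_disease, set())
    if not reported_symptoms:
        return 0.0
    return len(reported_symptoms & expected) / len(reported_symptoms)

self_reports = [
    ("flu", {"fever", "cough", "fatigue"}),
    ("flu", {"rash", "itching"}),  # symptoms do not match the expected flu pattern
]

for disease, symptoms in self_reports:
    score = plausibility(disease, symptoms)
    status = "keep" if score >= 0.5 else "flag for more evidence"
    print(f"{disease}: plausibility={score:.2f} -> {status}")
```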

I think there are a few key elements that A.I., as a self-learning entity (similar to human beings), needs to use for cybersecurity and to protect against the manipulation, bias, and fallacies associated with unknown sources and unconventional data. The inspiration for these elements comes from a psychological perspective on human behavior and the theory that noisy, false, or irrelevant data can improve the performance of natural neural networks. Here, I consider the definition of false data to range from unprepared data to data containing biases and stereotypes.

1. Contextual awareness: Understanding similarities and differences between scenarios. False data can be used to derive incorrect biases and predictions. For example, an A.I. algorithm might predict that a medical operation X is dangerous based on the data “30/100 people are dead after operation X”, without taking into account that 70/100 people are alive after operation X. The importance of contextual awareness for A.I. is reflected by how often this issue arises for A.I.’s human-intelligence counterpart: framing causes us to place more value on one presentation of a scenario than on another. For example, we are more likely to adopt energy-conservation methods if we are told that we will lose $500 by not using them than if we are told that we will gain the same amount by using them. Context-aware hierarchical recurrent and feed-forward neural networks can be implemented for tasks such as tracing the spread of false news, recommender systems, Natural Language Processing applications, and so on.
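
To make the framing example concrete, here is a minimal sketch (the numbers and the 0.5 risk threshold are assumptions for illustration) that normalizes the same statistic into both frames before any judgement is made:

```python
# Toy sketch: normalize a framed statistic into both frames before judging risk.
# The numbers and the 0.5 threshold are illustrative assumptions.

def frame_both_ways(deaths: int, total: int) -> dict:
    """Return the same outcome statistic expressed as mortality and survival."""
    mortality = deaths / total
    return {"mortality_rate": mortality, "survival_rate": 1.0 - mortality}

def is_operation_risky(deaths: int, total: int, threshold: float = 0.5) -> bool:
    """Judge risk from the full context (both frames), not from one framing alone."""
    rates = frame_both_ways(deaths, total)
    return rates["mortality_rate"] > threshold

if __name__ == "__main__":
    # "30/100 people are dead after operation X"
    print(frame_both_ways(30, 100))     # {'mortality_rate': 0.3, 'survival_rate': 0.7}
    print(is_operation_risky(30, 100))  # False: 70/100 survived the operation
```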

2. Avoiding Erroneous Pattern Recognition/Association: One of the most common sources of bias when an A.I. algorithm processes data is finding nonexistent patterns in what are in fact random fluctuations; in other words, correlation is not causation. In the initial development and training of ML algorithms, the goal is to develop a model that has sufficient predictive capability and/or is capable of finding patterns in data. Therefore, we can observe a clear disadvantage associated with one of neural networks’ greatest strengths (namely, strong pattern recognition). This is why Deep Neural Networks can be easily fooled into making high-confidence predictions that an unrecognizable image contains a recognizable object, as discovered in the paper “Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images” (http://www.evolvingai.org/fooling). A meaningful example of the implications of A.I. being subject to erroneous associations is data that contains stereotypes: a high correlation between “female” and “receptionist” word embeddings caused an A.I. algorithm designed to identify potential job candidates to form an association, and it surfaced more female resumes for stereotypical roles (example taken from the “Is Artificial Intelligence Prejudiced?” article). The key to avoiding these issues is to perform alternative testing on assumptions: walking through the evidence and the associated logical deductions, and exploring alternative hypotheses and factors that might influence the associations. For example, believing a basketball player has a higher likelihood of making the next shot because she has hit a few shots in a row is incorrect; the A.I. should understand that the streak does not change the underlying likelihood of making or missing the next shot, and should therefore approach data that appear to support this “hot hand” effect skeptically, as sketched below. Similarly, the A.I. should be able to adapt to volatile data by updating its model and perspective of the environment in relation to its successful or wrong predictions of patterns.
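
Here is the sketch referenced above: a small simulation, with an assumed 40% shooting percentage, showing that when shots are independent the make rate after a streak of three makes is essentially the overall make rate, so the “hot hand” pattern is a random fluctuation rather than a real signal:

```python
import random

# Toy sketch: independent shots at a fixed 40% make probability (an assumed figure).
# If streaks carried real information, P(make | 3 makes in a row) would differ from 0.4.
random.seed(0)
P_MAKE = 0.4
shots = [random.random() < P_MAKE for _ in range(200_000)]

after_streak_makes = after_streak_total = 0
for i in range(3, len(shots)):
    if shots[i - 3] and shots[i - 2] and shots[i - 1]:  # three makes in a row
        after_streak_total += 1
        after_streak_makes += shots[i]

print(f"Overall make rate:        {sum(shots) / len(shots):.3f}")
print(f"Make rate after a streak: {after_streak_makes / after_streak_total:.3f}")
# Both print roughly 0.400: the "hot hand" pattern here is a random fluctuation.
```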

3. Approaching False Data Availability with Caution: As I’ve discussed in this post, providing more evidence and data (in addition to the false data) can help A.I. form correct assumptions and limit its vulnerability to false data. A higher frequency and availability of false, outdated, or corrupt data is an avenue that might expose the vulnerabilities of A.I. neural networks. As an example (taken from the “Is Artificial Intelligence Prejudiced?” article), an app-based program allowing users to report potholes in the City of Boston collected data that was fed into an A.I. technology. The A.I. erroneously predicted more potholes in upper-middle-income neighborhoods because those residents reported potholes more often and used smartphones more frequently. Frequent exposure to readily available false data can therefore bias predictions and produce adverse results. If we train an A.I. to predict the probability of death by a certain cause on a dataset that contains more cases of death by natural disaster than by medical issues (e.g. heart attack), we can see how an A.I. vulnerable to outdated or false data might predict a higher likelihood of death by natural disaster. One way to prevent this issue is to train automated A.I. to understand the source of the fake data and to use more “representative” data offering a current, real-world perspective, so that likelihood assessments are not skewed, as sketched below.
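
The sketch below illustrates one common correction for this kind of reporting bias: reweighting observed reports by an estimate of how likely each group is to report at all. The neighborhood names, report counts, and reporting rates are invented for illustration:

```python
# Toy sketch: correct app-based pothole reports for unequal reporting rates.
# Neighborhood names, counts, and reporting rates are invented for illustration.

raw_reports = {"upper_income": 120, "middle_income": 90, "lower_income": 30}
reporting_rate = {"upper_income": 0.80, "middle_income": 0.50, "lower_income": 0.20}

# Estimated true potholes = observed reports / probability that a pothole gets reported.
estimated_potholes = {
    hood: raw_reports[hood] / reporting_rate[hood] for hood in raw_reports
}

print("Raw reports:        ", raw_reports)
print("Bias-corrected est.:", {h: round(v) for h, v in estimated_potholes.items()})
# Raw counts suggest the most potholes are in the upper-income neighborhood;
# after correcting for who actually reports, the ranking changes.
```

The point is not this particular correction (a simple inverse-probability reweighting) but that raw report counts alone are not a representative picture of the environment.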

4. Finding a balance between automation and rules: As discussed in this post, there are two primary ways Artificial Intelligence’s vulnerabilities to false data can be exposed: prejudice incorporated by the human programmer, or assumptions made by the A.I. during the learning process. This final element addresses the issue that when the information fed to an A.I. is biased, the A.I. will be biased as well. Perhaps a solution can be found by balancing automation against pre-programmed bias, since the rules of thumb an A.I. is set to follow can lead to systematic bias and vulnerabilities. As an example, a fallacy in human judgement (one that can be mirrored in A.I.) is that we start with an anchor (a starting point for a thought process) and tend to adjust our thinking around it. If we expose an A.I. to prejudiced data, it will start to form decisions around that prejudice and apply that mindset to its predictions, analysis of human behaviors, and actions. It is the programmer’s responsibility to determine how much we should influence the A.I.’s learning process through these “anchors”, but a critical step in limiting A.I.’s vulnerability to false data is to reduce the amount of involvement, and the potential input of prejudice, by programmers that might produce adverse results; the sketch below illustrates how strongly an anchor can pull an estimate away from the data.
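
Here is the sketch mentioned above, a toy Beta-Binomial update in which the programmer-supplied prior plays the role of the anchor; the prior strengths and observation counts are assumptions chosen to make the effect visible:

```python
# Toy sketch: a programmer-supplied prior acts as an "anchor" on what the system learns.
# Beta-Binomial update; prior strengths and observed counts are illustrative assumptions.

def posterior_mean(prior_successes: float, prior_failures: float,
                   observed_successes: int, observed_failures: int) -> float:
    """Posterior mean of a Beta-Binomial model: the prior is the anchor."""
    return (prior_successes + observed_successes) / (
        prior_successes + prior_failures + observed_successes + observed_failures
    )

# Data: 30 successes out of 100 trials (true rate around 0.3).
obs_s, obs_f = 30, 70

weak_anchor = posterior_mean(1, 1, obs_s, obs_f)        # mild prior, data dominates
strong_anchor = posterior_mean(900, 100, obs_s, obs_f)  # strong prior belief of ~0.9

print(f"Weak anchor estimate:   {weak_anchor:.2f}")    # ~0.30, close to the data
print(f"Strong anchor estimate: {strong_anchor:.2f}")  # ~0.85, dragged toward the prior
```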

These elements serve as guidelines for A.I. mechanics in the creation of secure A.I. technologies capable of mining and processing false data without adverse results. They can be incorporated into A.I. governance to enhance cybersecurity and help demystify the black box of these powerful technologies.

Therefore, regardless of the biased, flawed, and corrupted data an A.I. might take as input due to weak cybersecurity or the prejudices of human programmers, we can limit A.I.’s vulnerability to such data by exposing the algorithms to the overall context (encompassing both the false data and current, real data). Another solution is to build into the A.I. an ability to adapt to volatile data by updating its models and perspectives of the environment in relation to its successful or wrong predictions, as mentioned above.
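
One minimal way to express that adaptation (a sketch only; the learning rate and the sequence of prediction outcomes are assumptions) is to keep a trust score per data source and move it up or down as predictions based on that source succeed or fail:

```python
# Toy sketch: adapt trust in a data source from prediction outcomes.
# The learning rate and the outcome sequence are illustrative assumptions.

def update_trust(trust: float, prediction_correct: bool, lr: float = 0.1) -> float:
    """Move trust toward 1.0 on a correct prediction, toward 0.0 on a wrong one."""
    target = 1.0 if prediction_correct else 0.0
    return trust + lr * (target - trust)

trust = 0.5  # start neutral about the source
outcomes = [True, False, False, False, True, False]  # mostly wrong predictions

for correct in outcomes:
    trust = update_trust(trust, correct)

print(f"Trust in source after {len(outcomes)} predictions: {trust:.2f}")  # drifts below 0.5
```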

The ultimate question, however, deals with the definition of “false” data. As I mentioned above, it could mean anything from data that is unprepared or outdated to data that is biased or stereotypical. The evolution of A.I. in response to an environment ripe with volatile data to mine is essential.

I think that if we want to build into A.I. an intuitive, inherent understanding of false versus real, so that it can participate as an active agent in immersive environments, we should take inspiration from human behavior in relation to dreams. While dreaming, we are fed “false” data and environmental signals that morph our perception, much like in The Matrix. In particular, the realization that a fabricated dream construct is not real is the kind of perspective we should model and mirror if A.I. is to distinguish false data, scenarios, implications, or an entirely false environment from the truth.

I agree that allowing a greater “flow of data” will help limit A.I.’s vulnerability to false data, alongside the equally important implementation of “critical thinking” skills. Increased and collaborative human involvement would certainly improve the representativeness of the data A.I. could efficiently filter. Still, there seems to be a gap between the reliability and accuracy of data on one hand and its completeness on the other; this issue is reflected in A.I. and Machine Learning, where a consistent “flow of data” needs to integrate both accuracy and completeness.

One guideline for policymakers faced with the issue of shaping an A.I.’s anchors would be to limit human involvement in the A.I.’s self-learning processes, and to help establish guidelines and techniques A.I. practitioners can use to build technologies that construct their own anchors by updating models and perspectives of the environment in relation to successful or wrong predictions. In other words, I think anchors should be constructed by the A.I. itself as a result of environmental factors, the participation of individuals and objects, and specific needs and optimal practices created through collaboration between policymakers, machine learning and A.I. practitioners, and educational organizations and institutions. The A.I. should analyze these data elements to derive logical deductions and conclusions, process the implications of false data, and explore the context and ramifications. For example, for a childcare robot powered by A.I., a potential anchor stating that “Children must not be harmed”, programmed by policymakers, childcare and health organizations, and similar institutions, will certainly not apply to every scenario. The A.I. robot should be capable of understanding the context and the assumptions behind the policymakers’ and childcare organizations’ conclusions; i.e. the robot should not restrain a child from learning how to walk just because the child experiences temporary harm while stumbling and falling. A crude sketch of such a context-sensitive anchor follows.
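
In the sketch below, a context-sensitive version of the “Children must not be harmed” anchor weighs harm severity, reversibility, and developmental benefit together. The field names, scores, thresholds, and the two usage scenarios are invented for illustration:

```python
# Toy sketch: a context-sensitive version of the "children must not be harmed" anchor.
# Field names, scores, thresholds, and the example scenarios are illustrative assumptions.

def should_intervene(harm_severity: float, harm_is_temporary: bool,
                     developmental_benefit: float) -> bool:
    """Intervene only when expected harm clearly outweighs the benefit of the activity."""
    if harm_severity >= 0.8:  # serious harm: always intervene
        return True
    effective_harm = harm_severity * (0.3 if harm_is_temporary else 1.0)
    return effective_harm > developmental_benefit

# Child stumbling while learning to walk: mild, temporary harm, high benefit.
print(should_intervene(harm_severity=0.2, harm_is_temporary=True,
                       developmental_benefit=0.9))   # False: let the child practice
# A hypothetical high-severity hazard: intervene regardless of benefit.
print(should_intervene(harm_severity=0.9, harm_is_temporary=False,
                       developmental_benefit=0.1))   # True: intervene
```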

Consequently, I think the importance of incorporating “critical thinking” into A.I. is underscored by the fact that the accuracy (and other characteristics) of the flow of data is only one part of the problem and certainly cannot be taken for granted. It is critical to address the methods by which A.I. can identify a false subset of the data: how can an optimized machine learning algorithm differentiate a human being from a hacker technology posing as a reliable data source, in a data-mining environment that encourages the inclusion of contributors? In this scenario, we may have little control over the A.I.’s self-generation of “anchors” and rules of thumb from environmental parameters. In applications such as false-news detection or healthcare, there is a potential lack of knowledge regarding the availability, completeness, source reliability, and accuracy of the data, and the anchors and volatile definitions of “false” data shift with the application, context, and specific needs. It follows that there is a relevant governance concern in leaving aspects of the self-learning processes to the black-box nature of A.I., and incidents such as the transformation of Microsoft’s chatbot into a racist, stereotypical troll after interactions with Twitter users (https://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-robots-artificial-intelligence-racism-sexism-prejudice-bias-language-learn-from-humans-a7683161.html) further demonstrate the need to address this concern.

Emerging sources of unconventional data challenge A.I.’s ability to operate in complex environments and demonstrate that A.I. must be able to understand and avoid biases at both higher and lower levels of abstraction, more efficiently than humans with our associated cognitive biases can. Furthermore, the issue encompasses both the characteristics of data (accuracy, completeness, etc.) and the equally important task of filtering and processing that data. Therefore, the amount of human involvement in self-learning processes should be carefully balanced as we continue to develop increasingly complicated A.I. systems capable of mining and filtering big data across its dimensions: volume, velocity, variety, veracity, and value.

Thank you for reading!

Citation Note: Some examples are taken from Richard H. Thaler and Cass R. Sunstein’s book “Nudge: Improving Decisions About Health, Wealth, and Happiness” to illustrate critical real-world biases and fallacies associated with the human mindset from a psychological perspective. I’ve adapted these examples of biases and their solutions to issues that are and will be mirrored in A.I. technologies.

Originally published at demystifymachinelearning.wordpress.com on August 29, 2018.