
"We Know About AI's Ability To Remember, But Forget About Its Ability To Forget." - Valeria Sadovykh

by Anastasia Chernikova, August 19th, 2020

Too Long; Didn't Read

Valeria Sadovykh is a leading expert in the decision-making and decision-intelligence aspects of AI. She holds a Ph.D. from the University of Auckland Business School and has over 10 years of experience with PwC in New Zealand, Singapore, and the US. The success rate of a prediction, she says, depends on the quantity and quality of the data available to the data scientists who build the algorithms, and we need to empower AI through continued model optimization and self-learning.


As our world approaches the time when artificial intelligence becomes as widespread as electricity, we sat down with Valeria Sadovykh, a leading expert in the decision-making and decision-intelligence aspects of AI. Valeria holds a Ph.D. from the University of Auckland Business School and has over 10 years of experience focusing on emerging technologies with PwC in New Zealand, Singapore, and the US.

The future will be focused on collaboration between humans and machines. In your recent piece on VentureBeat, you stated that AI needs human input to predict crises and inform crucial decisions. On the other hand, you say that it’s nearly impossible to predict important events because each one is unique. Do you think it will ever be possible to predict the next pandemic, like the current coronavirus? How exactly can we train AI for that?

Factually speaking, historical data is the key variable for predicting any event, whether it’s a pandemic, financial crisis, or natural disaster. The success rate of a prediction depends on the quantity and quality of the data available to the data scientists who run the algorithms. However, if we wanted to predict a black swan, we would need far more pandemics and crises around us to get better at prediction. Thankfully, such events are rare.

Moreover, to learn from historical data, we need to generalize rules for the future. AI systems should be able to analyze millions of what-if scenarios so they can answer and react to similar situations in the future.

We also need to empower AI through continued model optimization and self-learning. More than 2.5 quintillion bytes of data are created daily, and that pace is only accelerating. Solutions like automated machine learning (AutoML) platforms can ingest a variety of data formats, run several algorithms on the same data set, and select the best algorithms for decision-making models. With continuous hyperparameter optimization through random and grid search, ML never stops learning.
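To make that loop concrete, here is a minimal sketch of the AutoML-style model selection Valeria describes: several algorithms are fit to the same data set, each tuned by grid search over its hyperparameters, and the best performer is kept. It assumes scikit-learn; the candidate models, parameter grids, and synthetic data are illustrative placeholders, not any particular platform’s internals.

```python
# Sketch of AutoML-style selection: run several algorithms on the
# same data set, tune each by grid search, keep the best performer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate algorithms and their (illustrative) hyperparameter grids.
candidates = {
    "logistic_regression": (
        LogisticRegression(max_iter=1000),
        {"C": [0.01, 0.1, 1.0, 10.0]},
    ),
    "random_forest": (
        RandomForestClassifier(random_state=0),
        {"n_estimators": [100, 300], "max_depth": [None, 10]},
    ),
}

best_name, best_score, best_model = None, -1.0, None
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5)  # exhaustive grid search
    search.fit(X_train, y_train)
    score = search.score(X_test, y_test)  # held-out accuracy of best params
    print(f"{name}: held-out accuracy {score:.3f}")
    if score > best_score:
        best_name, best_score, best_model = name, score, search.best_estimator_

print(f"selected: {best_name} ({best_score:.3f})")
```

A production AutoML platform layers data ingestion, feature engineering, and smarter search strategies (random or Bayesian search instead of an exhaustive grid) on top of this basic select-the-winner loop.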

We all know about AI’s ability to remember, but we fail to consider its ability to forget data. During COVID, AI was partly successful at building models for tactical issues, such as predicting and recognizing symptoms, contact tracing, and tracking how the disease spread across various data points in support of preventive measures. However, it failed at predicting cognitive human behavior and reactions, and their impact on daily decision-making. The irrational behavior of stocking up on toilet paper and Clorox was not foreseen by either AI or traditional forecasting techniques. That historical experience has now been captured in various data points; however, this doesn’t mean that every time we face a new viral disease, AI should push us toward the pandemic shopping list.

As we empower AI, we should make sure it serves the good. What are the main principles people should follow while working on AI? Do you think there should be an obligatory course on ethics for software engineers and scientists, similar to those that doctors and journalists take at universities? What should be included?

AI systems are designed by qualified and intelligent individuals, but they still fall victim to biases, because they are built by humans. An algorithm built by someone without proper ethical training, and then used without established controls and governance, could produce serious errors, threaten human privacy, dignity, and safety, and even create additional discrimination.

We need to educate our society on responsible AI and ethics concerns. Not many people understand the impact of the recent push for face tracking and video surveillance tools. This is the job of our scientists, governments, and policymakers. There are initiatives by government agencies, for example, AI.gov, as well as by the World Economic Forum, which has proposed an AI Governance Framework and assessment guide. Tech giants such as Google, Microsoft, and IBM are also proposing their own responsible AI principles for the development and usage of AI. However, these policies currently serve as recommendations rather than regulations.

We should also think about the consequences of excessive use of emerging tech in daily life and its impact on cognitive thinking processes. When we rely on predictive mechanisms and recommender systems created by AI, brain cell activation is actually reduced.

I would also like to bring attention to the recent hype around the “working from home” trend. I’m not sure organizations, individuals, and governments have realized its impact and how it invades people’s privacy. We need to ensure that AI mechanisms are protective and take into account privacy, security, consent, access, transparency, explainability, etc. This requires a serious AI culture makeover, and we need to make sure AI is protecting not only shareholders but also us, average citizens.

Where is the fine line between empowering AI and letting it lead our decisions? Can you imagine a situation where the machine gets power over humanity?

In the late 2000s, when the tech became affordable to the developing world, people started to hand over their decision-making power to machines without even realizing it. As part of my academic research, I have studied how emerging tech impacts human decision-making through the longitudinal, convergent observation of online communities for over 10 years. One of the main findings was that people outsource their decision-making process to the online wisdom of crowds, which is a heavily machine-manipulated environment. Specifically, people bypass the design-thinking phase and go straight to the “choice” phase, selecting from the options already presented to them in the online environment. Automation of recruitment, credit card applications, loan approvals, robo-advisers, and health-checking processes are some examples of machines with built-in biases making decisions for us.

For those who doubt the age of spiritual machines: Moore’s law, the prediction made by Intel co-founder Gordon Moore, has already proved accurate for several decades and is used by many conglomerates to guide long-term strategic planning. One thing AI is not yet fully capable of is capturing our emotions, brain, and mind. Our thinking contradicts itself, so it’s hard to build a statistical model of it. On the other hand, cases such as AlphaGo beating humans at the game of Go show that humanity can already be defeated by AI, at least in strategic games.

We already see how authoritarian governments use technology, facial recognition for instance, to keep track of people whether or not they give permission. How can we make sure people in power don’t take advantage of it to the detriment of others?

All of these technologies, including facial recognition, social tracking, and activity tracing, are fantastic innovations and have been designed with good intentions. However, they put pressure on privacy and human freedom. This goes back to the educational and policy initiatives we touched on before. How informed is our society about the usage of those technologies? Have we been transparent about what data we are collecting from each individual? The answer is no. First and foremost, what is currently missing is self-education and institutional education.

Tesla CEO Elon Musk admitted that “excessive automation” at his company was a mistake, saying that technology can’t really replace humans. Companies tend to eliminate humans because robots are more consistent and cost less, which can eventually lead to imbalance and a lack of critical thinking. How should a leader decide where to replace a human with a robot and where not to?

Many organizations face cost pressure and require innovation to bring more efficiency to their operations. Intelligent automation is not a radical loss of jobs that brings with it less critical thinking. It is actually the opposite: a loss of mundane tasks and the creation of more critical thinking. Industries are creating higher-value jobs where more intellect will be required and new skills will be necessary. A collaborative workplace between humans and machines will be beneficial in the long term. This is widely discussed, and many educational institutions and companies are proposing workforce-transformation roadmaps. In addition to new tech jobs, indicators highlight the need for “soft skills” and “creativity”: skills for problem-solving and working in a team.

The danger lies in the speed of change and in the ability of the workforce to transform fast enough to meet the requirements of automation. The question is whether we can reskill our accountants, auditors, and lawyers to become creative data scientists with soft skills within the next few years. Universities already report that by the time students graduate, their jobs might be done by robots or be obsolete.

For instance, the Institute for the Future has predicted that 85 percent of the jobs today’s students will do in 2030 don’t exist yet. We need to create an educational and workforce ecosystem where people are constantly being educated and retooled to stay relevant.

What fascinates you most about this technology? Where would you like to apply your knowledge most?

I am mostly fascinated by the human aspect of AI: how it changes our way of living and impacts our decision-making processes. I am a big advocate of technology being used for humanity and social good. I believe we now have an abundance of tech advances that should be utilized to serve humanity.

Journalists point out many “negative” aspects of AI; however, it’s humans who make it work that way. The recent pandemic has exposed fundamental weaknesses in the tech system. It has shown how AI has not been utilized to address poverty, weak health systems, lack of education, unstable government support, and other global problems. Crises force us to embrace tech advances in renewable energy, education, healthcare, green technology, and sustainable new sectors that put humanity on a fast track to achieving sustainability goals.

The question of whether people will ultimately accept this intrusion into their lives in return for increased security and convenience is widely discussed in both the private and public sectors. Do you think there will be a growing trend of people giving up their devices, canceling social network profiles, etc., in order to keep their lives private? Will we appreciate our anti-digital world more?

We can talk about the concept of neo-Luddism: there are people who resist the use of any modern technology, but we can’t really slow down technology’s acceleration. One of their arguments is that technologies should be proven safe before adoption, because of the unknown effects new technologies might have. Here again, we come back to the question of policies, education, responsible AI, and safety regulations for AI usage.

This pandemic has also shown how powerful these technologies are: during the lockdown, they were the only way to connect with our friends, coworkers, and family. Quitting online social media would be impossible now. However, we can change the way we rely on the platforms in our real lives.

In your research, you’ve focused on the healthcare system. How can AI improve it here in the US? What are the main spheres you foresee being the most disrupted/influenced by AI in the near future? 

The US healthcare system acutely lacks unification of its diverse data sources. Patient data is not matched with the physician data that doctors hold at hand, nor with the pharmacy data or provider data that hospitals look at. Insurance data is barely matched with any of those. This places severe limitations on implementing holistic AI solutions that impact a patient’s health. If all of the above systems were connected, a patient who has suffered an incident could be treated better. It’s a complex problem.

Moreover, a patient who is trying to recover from a life-threatening ailment needs to be guided to live well, with the right medication delivered at the right time, after performing the right exercise, before going to bed at the right time. So the healthcare system needs to manage the patient’s lifestyle, too. I agree with Vas Bhandarkar, ScoreData’s CEO, who says that a good healthcare system should be able to access this centrally accessible, normalized data to deliver all manner of therapeutic, preventive, and curative treatments. Companies like ScoreData are tackling the predictive patient-behavior problem by computing propensities for readmission and delivering a sequence of nudges that help the patient take medicines on time, exercise at the right time of day, eat well, and rest well.
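The readmission-propensity idea maps onto a standard classification setup. The sketch below is a hypothetical illustration, not ScoreData’s actual system: a logistic regression scores a patient’s probability of readmission, and a threshold decides whether to send a medication or exercise nudge. The features, data, and threshold are all invented for the example.

```python
# Hypothetical sketch of readmission-propensity scoring with nudges.
# Not a real product's pipeline: features, data, threshold are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features per patient:
# [age, prior_admissions, days_since_discharge, missed_doses_last_week]
X_train = np.array([
    [72, 3, 10, 4],
    [45, 0, 30, 0],
    [81, 5, 7, 6],
    [38, 1, 60, 1],
    [67, 2, 14, 3],
    [29, 0, 90, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = readmitted within 30 days

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def nudge(patient_features, threshold=0.5):
    """Score a patient's readmission propensity and decide on a nudge."""
    propensity = model.predict_proba([patient_features])[0, 1]
    if propensity > threshold:
        return f"propensity {propensity:.2f}: send medication/exercise reminder"
    return f"propensity {propensity:.2f}: no nudge needed"

print(nudge([70, 4, 9, 5]))   # high-risk profile
print(nudge([35, 0, 45, 0]))  # low-risk profile
```

In practice, the propensity score would drive a sequence of timed nudges (medication, exercise, rest) rather than a single reminder, and would be retrained continuously as new patient outcomes arrive.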

AI has penetrated our daily lives. Banks, health insurers, and consumer companies are all using the technology to improve their services and increase revenues. To me, the most fascinating aspect of AI comes from its integration with neuroscience and psychology. AI will be most impactful in introducing new ways of interacting with machines, whether through robots or by building human-level artificial general intelligence (AGI), a computerized system that exhibits abilities similar to those of the human mind. Currently, however, most of the disruption is happening around deep learning: the use of computing power to detect patterns in mountains of seemingly unrelated data.