Enterprise AI Has Been Failing, Here’s How It Can Recover

by Gaurav Singh, February 3rd, 2023

Too Long; Didn't Read

Despite advances in AI, the currently deployed neural networks are still very much black boxes. The decision-making skills of supervised learning algorithms are only as good as the humans who label the underlying data. Even the successful ones opt not to commercialize AI because they don’t trust it.


Over the last decade, Artificial Intelligence (AI) has evolved into an all-purpose term for any accomplishment of computer algorithms that formerly required human reasoning and thought. Everything from AlphaGo defeating then-reigning Go champion Lee Sedol to autonomous vehicle testing on public roads fell under the tent of the AI debate. The recent achievement of producing art through Stable Diffusion, which many label a “theft of creativity,” has taken the globe by storm.


Given such widespread success, it’s no surprise that enterprises rushed to adopt AI in the hopes of gaining a competitive advantage. However, recent research from Fivetran has signaled that a significant majority have failed in that regard. The key result of the survey was that while 87 percent of enterprises believe AI is critical to their company’s survival, 86 percent do not trust AI to make business decisions without human input.


Why is that?


Before we delve into the reasons, let’s set the expectations right. We’re not dismissing AI because it isn’t what we expected it to be based on science fiction; we’re taking stock of the AI that already exists, performs well in demos, and is presented at conferences and summits, but isn’t being deployed for commercial purposes.


What could be going wrong here?


Let’s start with the cost of running AI systems. Over the past years, there have been phenomenal advancements in computing hardware, algorithms, and infrastructure services (such as Google TPU and Amazon AWS Lambda). As a result, setup and per-unit costs have been fundamentally lowered, decreasing the barriers to entry. However, the growing size of benchmark-setting neural network models is keeping AI expensive. Training and deploying larger models, such as object detectors for autonomous vehicles or BERT-like networks for NLP applications, currently requires several Nvidia GPUs or a hefty bill for cloud instances.
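The cloud bill is easy to estimate on the back of an envelope. A minimal sketch, where the GPU count, run length, and hourly rate are all hypothetical figures, not quotes from any provider:

```python
def training_cost_usd(num_gpus: int, hours: float, hourly_rate_per_gpu: float) -> float:
    """Total on-demand cost of a multi-GPU training run."""
    return num_gpus * hours * hourly_rate_per_gpu

# e.g., 8 GPUs for a 72-hour run at a hypothetical $3 per GPU-hour
print(training_cost_usd(8, 72, 3.0))  # 1728.0
```

And that is a single run; hyperparameter sweeps and retraining multiply this number many times over.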


It is also costly to the environment because training consumes a great deal of power and energy. It turns out that most large neural networks are designed for general purposes and are thus overkill for specific applications at the enterprise level. Instead of throwing humongous networks at every problem, practitioners should employ domain knowledge to prune the network. Apart from using transfer learning to avoid retraining a large network, enterprises should also consider knowledge distillation techniques to arrive at a slimmer network for their specific use case. Lastly, AI accelerators should be used for inference, as such devices are designed to run in real time while consuming less power.
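The core of knowledge distillation is training the small "student" network to match the large "teacher" network's softened output distribution. A minimal NumPy sketch of that loss (the logits below are made-up toy values; a real pipeline would use a deep learning framework):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between the softened teacher and student distributions,
    scaled by T^2 as in the standard distillation formulation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    kl = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=-1).mean()
    return float(kl * temperature**2)

teacher = np.array([[8.0, 2.0, 0.5]])
student = np.array([[6.0, 3.0, 1.0]])
print(distillation_loss(teacher, teacher))  # identical outputs -> 0.0
print(distillation_loss(student, teacher))  # positive: student still has room to match
```

Minimizing this loss (usually blended with the ordinary label loss) lets a much smaller student approximate the teacher's behavior on the enterprise's specific task.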


Another major reason for AI systems not meeting expectations is the low availability of data, in terms of both quantity and quality. In fact, the survey highlights that about 71% of organizations struggle to find the data they need to run AI programs, workloads, and models. Furthermore, only a quarter of those that manage to obtain data are able to transform it into actionable insights. Even the successful ones opt not to commercialize AI because they don’t trust it. The reason is that the decision-making skills of supervised learning algorithms, the most commonly deployed form of AI, are only as good as the humans who label the underlying data: the model inherits all of the labeler’s biases and preconceived notions. Removing the human from the loop saves time, but it does not by itself instill trust in the decision-making.


In the last couple of years, there have been increased conversations about biases in historical data and ways to rectify them. But that may not be sufficient until neural networks become more transparent. Enterprises should actively investigate the sources of bias that may have crept in and use data masking techniques to eliminate their effect on the final outcomes.
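In its simplest form, data masking just means stripping fields that would let a model (or a labeler) learn a protected attribute before the data ever reaches training. A minimal sketch, where the field names and record are hypothetical examples:

```python
# Hypothetical set of fields considered sensitive for this use case.
SENSITIVE_FIELDS = {"gender", "date_of_birth", "zip_code"}

def mask_record(record: dict, sensitive=SENSITIVE_FIELDS) -> dict:
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in sensitive}

applicant = {"income": 72000, "gender": "F", "zip_code": "94016", "tenure_months": 18}
print(mask_record(applicant))  # {'income': 72000, 'tenure_months': 18}
```

Note that masking alone is not a complete fix: remaining fields can still act as proxies for the removed ones, which is why auditing outcomes matters as much as scrubbing inputs.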


Despite advances in AI, the currently deployed neural networks are still very much black boxes, and explainability worsens the deeper the network gets. Since decisions that cannot be explained cannot be trusted, commercial deployment is limited. As AI gets more complex, it’s important to have clear governance around explainability, fairness, and bias. This is a challenge for many companies because it’s hard to establish such guidelines when there is no agreed-upon standard. That gray area is one in which no large company’s executives would like to operate. It’s hard to get buy-in from decision-makers when they don’t really understand what AI does or how it works. And we ML practitioners are not making it easy.
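Model-agnostic explanation techniques offer a partial way out of the black box. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. A toy sketch, using a linear scorer as a stand-in for an arbitrary black-box model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "black box": a linear scorer whose weights we pretend not to know.
# Feature 1 has zero true weight, so it should come out as unimportant.
weights = np.array([2.0, 0.0, -1.0])
model = lambda X: X @ weights

X = rng.normal(size=(500, 3))
y = model(X)

def permutation_importance(predict, X, y):
    """Increase in MSE when each feature column is shuffled independently."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

imp = permutation_importance(model, X, y)
print(imp)  # feature 1 (zero true weight) gets ~0 importance
```

The result is not a full explanation of any single decision, but it gives stakeholders a defensible answer to "which inputs is this model actually using?"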


The communication gap between ML engineers and the executive team makes matters worse. Because the KPIs differ between the two teams, most AI projects don’t go beyond the pilot phase. ML engineers are not effectively converting metrics such as accuracy, mean average precision, and latency into the metrics executives care about: revenue growth and cost savings. This has been a big source of my personal frustration throughout my years as a machine learning researcher. Building a layer of product managers between the developers and the executives to translate metrics into dollars can improve the outcomes.


Simultaneously, executives should participate in AI learning programs for business leaders, so that their engineers do not roll their eyes when leadership talks about the company’s AI initiatives. The impact of this issue will further subside as enterprises around the world strive to be at the forefront of digital transformation, increasingly appointing technology executives to the coveted CEO position.


The potential commercial benefits of AI should compel corporate executives to expedite the transition from isolated pilot initiatives to a comprehensive AI strategy. An AI-centric business strategy requires continual investment in cutting-edge AI research while ensuring that the benefits are shared throughout the organization. This can be accomplished by making created and acquired datasets available across the organization, investing in AI infrastructure and talent, and encouraging executives to become more AI-literate.




Featured image generated with stable diffusion.