
AI Has Not One, Not Two, but Many Centralization Problems

by Jesus Rodriguez, June 26th, 2018

A couple of months ago, I wrote a three-part series on the decentralization of artificial intelligence (AI). In that series, I tried to cover the main elements that justify the decentralized AI movement, ranging from <a href="https://medium.com/datadriveninvestor/why-decentralized-ai-matters-part-i-economics-and-enablers-5576aeeb43d1" target="_blank">economic factors</a> to <a href="https://medium.com/datadriveninvestor/why-decentralized-ai-matters-part-ii-technological-enablers-a67e3115312e" target="_blank">technology enablers</a>, as well as <a href="https://medium.com/datadriveninvestor/why-decentralized-ai-matters-part-iii-technologies-930c3c9d10d" target="_blank">the first generation of technologies that are developing decentralized AI platforms</a>. The arguments made in those essays were fundamentally theoretical because, as we all know, the fact remains that AI today is completely centralized. However, as I work on more real-world AI problems, I am starting to realize that centralization is an aspect that is constantly hindering the progress of AI solutions. Furthermore, we should stop seeing centralization in AI as a single problem and start seeing it as many different challenges that surface at different stages of the lifecycle of an AI solution. Today, I would like to explore that idea in more detail.

What do I mean by claiming that AI has many centralization problems? If we visualize the traditional lifecycle of an AI solution, we will see a cyclical graph that connects different stages such as model creation, training, regularization, etc. My thesis is that all of those stages are conceptually decentralized activities that are boxed into centralized processes because of the limitations of today’s technologies. However, we should ask ourselves: software development has traditionally been a centralized activity, so what makes AI so different? The answer might be found by analyzing two main areas in which AI differs from traditional software applications.

Subjective & Static vs. Objective & Dynamic

Let’s take a scenario in which a large institution is developing a web or cloud application. The project team will start with an initial set of requirements that they will present to an agency or development group they trust. That group will conceptualize an architecture that fits the requirements and start a, hopefully, iterative development process. Throughout that process, the solution will undergo a series of tests until it matches the functional requirements, and it will eventually be deployed on infrastructure controlled by the original company or a trusted third party.

That centralized lifecycle works well for traditional software applications because they are intrinsically subjective and static in nature. We select an architecture and technology stack for a functional problem, but we have no objective way of knowing whether our architecture is the best one for the problem or how much better it is than the alternatives. We follow methodologies and best practices created by subject matter experts, but we have no objective way of knowing whether they are the best fit for our problem. Similarly, the structure of our solution will be static in nature and won’t necessarily change in behavior or size based on its environment.

In the case of AI solutions, we still rely on subjective opinions when selecting a model to address a set of requirements, but we have objective ways to evaluate the performance of that model against alternatives. Similarly, we can objectively evaluate training, regularization and optimization methods and select the best-performing options for our solution. In terms of program structure, AI models will change and grow over time as they process more data and acquire more knowledge.
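To make that contrast concrete, here is a minimal sketch of what I mean by objective evaluation, using scikit-learn and a toy dataset chosen purely for illustration: two candidate models are scored with the same cross-validation protocol, so the comparison comes down to a measurable number rather than an opinion.

```python
# A minimal sketch of "objective" model comparison: two candidate models are
# scored with the same cross-validation protocol on the same data, so choosing
# between them rests on a number rather than a judgment call.
# The dataset and models are placeholders for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```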

How is all this related to centralization? Well, it turns out that subjective and static structures are intrinsically centralized, while objective and dynamic models are better suited for decentralization. A government structure is subjective (we rely on the opinions and judgments of our leaders) and fairly static in nature (we don’t change cabinet posts every day), while the structures of democracy itself are very objective (we vote) and dynamic (election results change over time based on demographics, etc.). In the case of AI, the objective and dynamic nature of AI programs is an evolutionary force pushing towards decentralization.

I started this article by mentioning that AI doesn’t have a single centralization problem but many. Let’s explore a few of my favorites through the lens of objectivity and dynamism.

The Data Centralization Problem

AI is not only an intelligence problem but also a data problem. Today, large datasets relevant to AI problems are controlled by a small number of large organizations, and there are no good mechanisms for sharing that data with the data science community. Imagine a healthcare AI scenario in which any participant in an experiment could contribute their own data with the right security and privacy guarantees. Decentralizing data ownership is a necessary step for the evolution of AI.
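As one hedged illustration of what “the right privacy guarantees” could look like, the sketch below uses the classic Laplace mechanism from differential privacy to release a noisy aggregate instead of raw records. The hospital measurements, clipping bounds, and epsilon value are all made up for the example.

```python
# A minimal sketch of one possible privacy guarantee for shared data: instead
# of handing over raw patient records, a participant releases an aggregate
# statistic with calibrated Laplace noise (the classic differential-privacy
# mechanism). The records, bounds, and epsilon below are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical local records a hospital holds (e.g., one lab measurement per patient).
local_records = np.array([4.1, 5.6, 4.9, 6.2, 5.0, 4.4])

def private_mean(values, lower, upper, epsilon):
    """Release the mean with Laplace noise scaled to its sensitivity."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # how much one record can move the mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

print("noisy mean shared with the network:", private_mean(local_records, 0.0, 10.0, epsilon=1.0))
```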

The Model Centralization Problem

Your favorite consulting firm selected a series of AI algorithms for a specific problem, but how do we know they are the best ones for that scenario? Have they been keeping up with the constant flow of AI research coming out of universities and research labs? What if a community of data scientists around the world could propose and objectively evaluate different models for your scenario? Wouldn’t that be great? In my opinion, decentralizing the selection of models and algorithms will drastically improve the quality of AI solutions over time.
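A hypothetical sketch of that idea: submissions from different data scientists are all ranked against the same held-out test set that none of them controls, a bit like a public leaderboard. The names, models, and dataset below are placeholders, not a reference to any real platform.

```python
# A toy sketch of decentralized model selection: anyone can submit a candidate
# model, and all submissions are scored on the same held-out test set that no
# submitter controls. Submitters, models, and data are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Imagine each entry coming from a different data scientist in the network.
submissions = {
    "alice": LogisticRegression(max_iter=5000),
    "bob": GradientBoostingClassifier(random_state=0),
    "carol": KNeighborsClassifier(n_neighbors=7),
}

leaderboard = []
for author, model in submissions.items():
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    leaderboard.append((score, author))

for score, author in sorted(leaderboard, reverse=True):
    print(f"{author}: test accuracy = {score:.3f}")
```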

The Training Centralization Problem

One of the main problems with AI solutions in the real world is that the training of the models is done by the same groups that create the models themselves. Like it or not, that dynamic introduces a tremendous level of bias and prompts the models to overfit very frequently. What if we could delegate the training of models to a decentralized network of data scientists operating under the right incentives to improve their quality? Training is another aspect of AI solutions that is regularly hurt by centralization.

The Regularization-Optimization Centralization Problem

We deployed our AI model to production, but how do we know it is performing correctly? Is its behavior improving or deteriorating over time? Can hyperparameters be tuned in a different way to improve performance? Paradoxically, we rely on centralized processes for the optimization and regularization of AI models, and those processes very often rely on the same data scientists who created the models. Imagine if we could use a decentralized network of AI experts to find bugs and vulnerabilities and to constantly improve our model. AI regularization and optimization are intrinsically decentralized methods that are forced into centralized processes today.
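As a small illustration of how that tuning question can be made objective and reproducible, so that anyone in a decentralized network could rerun and verify it, here is a sketch using scikit-learn’s grid search; the model, data, and hyperparameter grid are placeholders chosen only for the example.

```python
# A minimal sketch of objective hyperparameter tuning: a grid of candidate
# settings is scored by cross-validation, so "could the hyperparameters be
# tuned better?" becomes a measurable, reproducible question.
# Model, data, and grid are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [4, 8, None],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("best cross-validated AUC:", round(search.best_score_, 3))
```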

Not One but Many Centralization Challenges

As you can see, we shouldn’t be speaking of AI centralization as a single, generic problem but as many challenges that combine to hinder the evolution of AI. The evolution of blockchain technologies, as well as paradigms such as federated learning, is slowly opening the door to more decentralized AI models, and hopefully we will get there by solving not one but many of these problems.
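To close, here is a toy sketch of federated averaging, the core idea behind federated learning: each participant trains on its own private data and shares only model parameters, which a coordinator combines as a weighted average. The tiny linear model and synthetic data are purely illustrative, not a production recipe.

```python
# A toy sketch of federated averaging (FedAvg): clients fit models locally on
# private data and share only their weights, which are combined as a weighted
# average. The linear model and synthetic data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_fit(X, y):
    # Each client solves its own least-squares problem locally; raw data never leaves.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

clients = [make_client_data(n) for n in (20, 50, 100)]
local_weights = [local_fit(X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients])

# The coordinator only ever sees weight vectors, weighted by each client's data size.
global_w = np.average(local_weights, axis=0, weights=sizes)
print("federated estimate of the weights:", global_w)
```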