In today’s world, Artificial Intelligence (AI) is a centralized process that helps organizations achieve their objectives. Central authorities gather data through their platforms and build AI suited to their own needs. For instance, Facebook uses your activity data on Facebook and Instagram to personalize a news feed that matches your personal beliefs so that you will stay on its platforms longer. Companies often keep user data on their own servers to gain a competitive advantage over rivals.
End users like us enjoy the free products these central authorities provide but do not realize they are manipulating our behavior and making a fortune from our data. Decentralizing the three key components of AI - data, models, and results - can give users the trust and confidence to fully embrace coexisting with AI.
We live in a data-driven world. Nearly every interaction on a technological device involves sharing personal data. Almost all the apps and platforms you use on your smartphone or computer are centralized - Amazon, Uber, Netflix, Apple, and TikTok, to name a few. Yet all have faced some form of privacy or antitrust violation.
In 2020, France fined Google $120 million and Amazon $42 million for dropping tracking cookies without consent (link).
In 2021, Apple was fined $12 million in Russia for violating anti-monopoly rules with its App Store (link).
In 2021, China tightened rules on technology companies over anticompetitive practices (link).
The list goes on. The point is that sharing data makes life easier, more convenient, and more connected. Quite often, however, central authorities overstep their boundaries. Your data should be used only in ways you would reasonably expect.
Feed a model ten thousand images of dogs and cats, and we could create an app that identifies whether there is a dog in a photo. Sounds simple, even trivial, right? Now unlock your iPhone, open the Photos app, and find People & Places. Notice that Apple has identified who is in which photo?
The above examples are realized by AI models, and AI models are created through a process called training. The more data a model is given, the more accurate it can become. Having more users on their apps and platforms generates more unique data that can be used to train and re-train their models - toward perfection, or toward violating our rights.
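To make "training" concrete, here is a minimal sketch in plain Python: a toy logistic-regression classifier learns to separate two clusters of 2D points, stand-ins for "dog" and "cat" image features. It is an illustrative simplification, not how production vision systems like Apple's actually work; the dataset and parameters are invented for the example.

```python
import random
from math import exp

random.seed(0)

def make_data(n):
    # Synthetic stand-in for labeled photos: class 0 clusters near (-1, -1),
    # class 1 near (+1, +1), with Gaussian noise.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        center = 1.0 if label else -1.0
        point = (center + random.gauss(0, 0.5), center + random.gauss(0, 0.5))
        data.append((point, label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z)) if z > -60 else 0.0  # guard against overflow

def train(data, epochs=50, lr=0.1):
    # Stochastic gradient descent on the log-loss.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            err = sigmoid(w[0] * x + w[1] * y + b) - label
            w[0] -= lr * err * x
            w[1] -= lr * err * y
            b -= lr * err
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum(1 for (x, y), label in data
               if (sigmoid(w[0] * x + w[1] * y + b) >= 0.5) == bool(label))
    return hits / len(data)

test_set = make_data(500)
model = train(make_data(1000))
print(accuracy(model, test_set))  # high accuracy on this easy toy problem
```

The same loop, scaled to millions of photos and billions of parameters, is what lets a platform's model keep improving as long as users keep supplying data.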
AI models are capable of processing large data sets and making decisions relevant to their tasks and objectives. Organizations incorporate AI models into apps and platforms to provide services based on the AI results.
Amazon is particularly good at surfacing AI results to its users through its recommendation system. While you shop for a Bose speaker, it shows speakers from other manufacturers for comparison, helping you make a better purchasing decision. However, Amazon also manufactures its own variants of products. What if a supposedly neutral marketplace like Amazon puts its own products ahead of competitors' - or has it been doing so already? Such ranking would greatly increase the chance of its own products being purchased over competitors', an unfair advantage.
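To see how subtle such self-preferencing can be, here is a toy ranking sketch. The product names, relevance scores, and the `self_preference_boost` parameter are all hypothetical; this illustrates the mechanism of a hidden ranking bonus, not Amazon's actual ranking algorithm.

```python
# Hypothetical product catalog: relevance is what a neutral model would score.
products = [
    {"name": "Bose Speaker",         "relevance": 0.95, "own_brand": False},
    {"name": "Sony Speaker",         "relevance": 0.90, "own_brand": False},
    {"name": "House-Brand Speaker",  "relevance": 0.70, "own_brand": True},
]

def rank(items, self_preference_boost=0.0):
    # A neutral marketplace uses boost = 0; a biased one quietly adds
    # a hidden bonus to its own products before sorting.
    def score(p):
        return p["relevance"] + (self_preference_boost if p["own_brand"] else 0.0)
    return [p["name"] for p in sorted(items, key=score, reverse=True)]

print(rank(products))
# ['Bose Speaker', 'Sony Speaker', 'House-Brand Speaker']

print(rank(products, self_preference_boost=0.3))
# ['House-Brand Speaker', 'Bose Speaker', 'Sony Speaker']
```

The boosted ranking looks identical to the user - just a list of speakers - which is exactly why such bias is hard to detect from the outside without an audit trail.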
In 2018, the EU General Data Protection Regulation (GDPR) came into force, requiring that “any decision made by a machine be readily explainable.” GDPR also allows users to opt out altogether (recall the “Accept All Cookies” button on every website you visit?). The California Consumer Privacy Act (CCPA) likewise dictates that customers in California should be able to opt out of the sale of their data.
There are countless examples of central authorities overstepping their boundaries to favor themselves when using AI, whether by using data without user consent or by steering a supposedly neutral platform with intentionally biased AI decisions.
Imagine if these AI services could prove to you how and when your data is being used - a mechanism by which a third party could verify them. Every transaction and decision made by these AI services would be transparent for audit and could not be altered.
Blockchain is a technology that is decentralized, immutable, transparent, and secure. With blockchain, data and AI results are recorded on the chain but not controlled by any single authority. Your data stays in your hands: you decide whether to grant permission to access it. Every transaction and interaction is traced, providing an audit trail. Central authorities could then prove that they are not manipulating our behavior or suppressing their competitors when they are not supposed to.
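The immutability that makes such an audit trail trustworthy comes from a simple idea: each block stores the hash of the previous block, so altering any past record breaks every link after it. Here is a minimal sketch of that mechanism; the example records are invented, and a real ledger adds consensus, signatures, and distribution on top of this.

```python
import hashlib
import json

def block_hash(block):
    # Deterministically serialize and hash a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    # Each new block commits to the hash of the one before it.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash})

def verify(chain):
    # Recompute every link; any tampering with history is detected.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "user-123 granted model X access to photo data")
add_block(chain, "model X returned recommendation Y")
add_block(chain, "user-123 revoked access")

print(verify(chain))   # True: the audit trail is intact

chain[1]["record"] = "model X was never given access"  # tamper with history
print(verify(chain))   # False: the altered block no longer matches its hash
```

Because any auditor can recompute the hashes, no central authority can quietly rewrite what it did with your data or which results its AI produced.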
Blockchain is a key technology that could bring trust to AI services. It gives us control over our data, makes AI models transparent, and opens AI results to verification. By integrating blockchain with AI, we could fully trust the results and outcomes derived from AI services.