Artificial intelligence is progressing at an astonishing rate. According to Stephen Hawking, quoted below, it could be the biggest event in human history, perhaps even bigger than electricity.
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” ~ Stephen Hawking
No doubt, AI will change the world, but which path will it take: ethical or unethical?
As technologists, ethicists, and policymakers look to the future of AI, ongoing debates about control, power dynamics, and the potential for AI to surpass human capabilities underscore the need to address ethical concerns. AI is too big to be controlled by a single entity or company.
Enter Decentralized AI (DeAI): the ability to host AI algorithms on a blockchain may help tackle these challenges. In this article, we'll discuss how tokenomics can serve as a guardrail for ethical AI development on the blockchain. Without further ado, let's get started.
Decentralized AI (DeAI) is the decentralization of control over AI technology. It sits at the intersection of AI and blockchain. DeAI covers the decentralization of the development framework and its peripheral elements: distributed decision-making, decentralized ownership, data tokenization, deploying and running models on-chain as smart contracts, and open-sourcing the development of machine learning algorithms and models.
Are you curious about DeAI? Here are some of its benefits over conventional AI frameworks.
The traditional centralized AI framework requires users to share data with a central authority, which poses a significant privacy risk. With DeAI, AI algorithms can be deployed and run as smart contracts, so you retain control over your data because it is processed locally. This minimizes the risk of data breaches and unauthorized access.
With DeAI, AI algorithms and models are developed and deployed as smart contracts on the blockchain. Since this works much like a Decentralized Application (dApp), it is far less susceptible to centralized security breaches and single points of failure.
With DeAI, all transactions and decision-making processes are stored on-chain. Actions taken by the AI agents can be traced and verified because they're stored on the open, distributed, immutable ledger. This enhances accountability.
So, instead of the opaqueness of conventional AI algorithms, DeAI provides the transparency crucial for building trust with users, who can see how their data is used and how decisions are made. Moreover, users and developers can scrutinize the code to validate and understand the algorithm's inner workings.
Decentralization leads to democratization, which helps in building more varied and inclusive AI models that address a broader spectrum of needs.
Instead of being limited to the perspectives of a select few, as in centralized AI development, AI projects gain access to a diverse pool of contributors, which fosters an environment where expertise and perspectives converge to shape the trajectory of AI applications.
DeAI benefits from a multitude of insights, problem-solving approaches, and experiences. This enables independent researchers and small organizations to take part in development, and it fosters innovation and diversity in AI research.
AI has gained significant importance in today’s rapidly evolving technical landscape. Here are some of the challenges facing ethical AI development.
AI ethics requires creators (developers and researchers) to be transparent about their models, e.g., opening up about the data used in model training, providing enough transparency about the development stages, and giving clear and comprehensive information about how models function and make decisions.
In the current era of development, this is far from the reality. AI often operates as a black box, i.e., there is little or no understanding of how these systems work and how they arrive at their final decisions. Transparency is vital to ascertain how decisions are made and who bears responsibility for what.
Bias refers to the tendency of an AI algorithm to produce results that are systematically prejudiced due to flawed assumptions in the machine learning process. Since models are trained with available data, biases can creep into the model through the biases already available in the data.
Engineers from certain parts of the world, with particular points of view, can, for instance, unintentionally feed biased data to a model. Since this human-sourced data propagates through deep learning, the bias is inadvertently propagated with it.
Let’s check out these stats: Autonomous driving systems are 20% worse at recognizing children than adults and 7.5% worse at recognizing dark-skinned than light-skinned pedestrians.
This bias is likely not intentionally inserted into the algorithm; it stems from the data used to train the model.
AI technologies often rely on large amounts of data to operate, so user data must be handled responsibly and securely. There should be insight into how training data is collected, processed, used, and stored.
When building an AI system based on customer credentials or data, for example, you need robust data protection measures like encryption and anonymization.
So, preserving individuals' privacy and human rights is important. This necessitates safeguarding against data breaches, unauthorized access to sensitive information, and protection from surveillance.
As AI systems grow more sophisticated, concerns regarding safety, ethical alignment, and misuse surface. Decentralization of the technology, championed by web3 initiatives like Internet Computer Protocol, has been a pivotal concept in addressing the concerns discussed above. Let’s check out how decentralization currently addresses the key challenges to AI ethics.
Centralization of power can lead to biased decision-making, privacy breaches, and misuse of user data. With DeAI, AI model and algorithm development is decentralized, and decision-making is in the hands of people who have a genuine interest in the future of the technology.
Blockchain has helped bring clarity to the muddy waters of AI development. Decentralizing the network distributes power among interested parties and entities around the world, so user data can no longer be easily exploited or used against users' will.
Decentralization removes the AI model's susceptibility to a single point of failure. Data stored on the blockchain is replicated across several nodes on the network, so DeAI can withstand attacks, mitigate risks, and become more reliable.
So, even if one node falls to exploitation, the others can prevent bad actors from maliciously manipulating decision-making in the AI system (barring a 51% attack on the network, which is very difficult to pull off).
Decentralized applications are usually open source, which allows many people to contribute to their development. Likewise, in DeAI, stakeholders (developers, investors, users, etc.) can easily form ethical alliances because there is no barrier to entry: interested developers can simply join a project. As a result, people from different parts of the world can collaborate on a single project and reduce bias.
Tokenomics, a portmanteau of “token” and “economics,” is the study and analysis of the economic aspects of cryptocurrency and blockchain products. It examines the economic principles and mechanisms that govern the issuance, distribution, and utilization of a crypto token.
In a token economy, tokens are mostly used as positive reinforcement to reward individuals who exhibit target behaviors.
In this section, we’ll discuss how tokens are used to incentivize responsible ethical AI development and usage. But first, let’s check how tokenomics functions in a decentralized system.
The role of tokens is a focal point of discussion in shaping the economic, functional, and user experience (UX) aspects of dApps. Tokens facilitate various functionalities and economic interactions within these apps.
Incorporating tokens into a web3 project should be driven by specific goals rather than hype. As the saying goes:
“Your heart will always be where your treasure is.”
Internet Computer Protocol (ICP) aims to bring efficiency, speed, and decentralization to computation and data storage. This decentralized network enables you to build anything without relying on traditional IT infrastructure or big tech.
ICP hosts decentralized serverless compute that is resistant to cyberattacks, unstoppable, and controlled by decentralized autonomous organizations. ICP's novel features include threshold cryptography, state machine replication, and a novel consensus mechanism.
Beyond classical DeFi smart contracts, ICP can also run compute- and storage-heavy dApps, such as machine learning algorithms and image classification models, fully on-chain.
AI workloads are compute-intensive: running inference on an AI model with millions of parameters involves billions of arithmetic operations. So, to run inference on-chain, a blockchain needs the capacity to process these operations.
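To get a feel for the scale, here is a rough back-of-the-envelope sketch counting the arithmetic operations a small fully connected network needs for a single inference. The layer sizes are hypothetical, not taken from any particular model:

```python
def dense_layer_flops(in_features: int, out_features: int) -> int:
    """Each output neuron needs in_features multiplies plus in_features adds."""
    return 2 * in_features * out_features

# A tiny MLP with made-up layer sizes, purely for illustration.
layers = [(784, 512), (512, 512), (512, 10)]
total = sum(dense_layer_flops(i, o) for i, o in layers)
print(total)  # 1337344 -- over a million operations for one tiny inference
```

Even this toy network needs over a million operations per inference; scaling the same arithmetic to models with millions of parameters is where the billions of operations come from.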
With its latest milestone, the Cyclotron, ICP can now run AI models fully on-chain. Here are the key features:
Floating-point arithmetic represents a subset of the real numbers using integers with fixed precision, and it is essential for AI computation. However, on decentralized chains like ICP, floating-point algorithms need to be deterministic: a deterministic algorithm always produces the same output for a given input, no matter where it is run.
Implementing deterministic floating-point arithmetic means that the same code run on any node of the ICP network will always produce the same result. Using Wasmtime, ICP was able to implement this efficiently in the WebAssembly virtual machine.
SIMD (Single Instruction, Multiple Data) allows the CPU to execute multiple arithmetic operations with a single instruction. This applies, for example, when the same value is added to a large number of data points. With this feature, smart contracts can use deterministic SIMD instructions and benefit from parallel computation, which is another great advance for DeAI.
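As a conceptual illustration only (real SIMD happens in the CPU, not in Python), this sketch mimics a 4-lane vector add, where one "instruction" covers several data points at once; the lane width is an assumption:

```python
LANES = 4  # assumed SIMD width: 4 values handled per "instruction"

def simd_add_scalar(data: list[float], value: float) -> list[float]:
    """Add the same value to every element, LANES elements per step,
    mimicking how one SIMD instruction covers several data points."""
    out = []
    for i in range(0, len(data), LANES):
        lane = data[i:i + LANES]             # "load" up to LANES values
        out.extend(x + value for x in lane)  # one conceptual vector add
    return out

print(simd_add_scalar([1.0, 2.0, 3.0, 4.0, 5.0], 10.0))
# [11.0, 12.0, 13.0, 14.0, 15.0]
```

In hardware, each chunk of LANES additions really does execute as a single instruction, which is why SIMD speeds up exactly the kind of bulk arithmetic AI inference is made of.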
ICP, with its scalable, decentralized infrastructure, presents a unique opportunity to design tokenomics that align financial incentives with ethical principles. This strategy involves creating mechanisms that tie financial rewards to the ethical implementation and usage of AI.
Here are some ways to design tokenomics that serve as guardrails for ethical AI development, deployment, and usage.
In decentralized systems like ICP, tokenomics can play a huge role in ethical AI development: incentivize developers who act according to established standards and rules, and discourage unethical behavior through penalties. Let's check out some ways to implement these reward mechanisms.
Setting up a Reputation-based Incentive: this strategy puts a reputation system in place for all AI developers on the ICP network. It works like a rating system, visible to the whole community, in which each developer is rated according to how well their actions align with ethical standards and protocols.
For example, developers who build transparent, fair, and unbiased AI systems can be rated by researchers, users, and even other developers, and rewarded with tokens based on their cumulative community ratings.
These rewards are tied to the accuracy of models, the absence of bias in their algorithms, and adherence to ethical standards in the AI industry. The reputation system can also boost developers' on-chain social credit, giving them access to grants and project funding.
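A minimal sketch of how such a reputation-weighted reward pool could work; the developer names, ratings, and pool size are all hypothetical:

```python
def distribute_rewards(ratings: dict[str, list[int]], pool: float) -> dict[str, float]:
    """Split a token pool among developers in proportion to their
    cumulative community ratings (e.g. 1-5 stars from users and auditors)."""
    scores = {dev: sum(r) for dev, r in ratings.items()}
    total = sum(scores.values())
    if total == 0:
        return {dev: 0.0 for dev in ratings}
    return {dev: pool * score / total for dev, score in scores.items()}

# Hypothetical developers and their community ratings.
ratings = {"alice": [5, 4, 5], "bob": [3, 3]}
print(distribute_rewards(ratings, 100.0))  # {'alice': 70.0, 'bob': 30.0}
```

The proportional split means a developer's income grows with sustained good ratings rather than with any single review.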
Stake Slashing for Unethical Behavior: apart from a reputation system, establishing punishment mechanisms like stake slashing can deter unscrupulous individuals from going rogue. Just as in the proof-of-stake consensus mechanism, AI developers on ICP would be required to have a stake in the ecosystem.
For example, to work on special AI projects and developments, developers would have to lock up a certain amount of tokens. Good behavior gets rewarded, while unethical practices (like building nontransparent models or introducing bias into an algorithm) incur penalties through stake slashing, i.e., losing a portion of the locked tokens. Of course, such decisions should be based on public audits of the project.
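Here is one way such stake-and-slash bookkeeping could be sketched; the minimum stake and the slashing fraction are assumed values, not ICP parameters:

```python
class StakeRegistry:
    MIN_STAKE = 100.0       # assumed: tokens required to join special AI projects
    SLASH_FRACTION = 0.25   # assumed: portion of stake lost per verified violation

    def __init__(self) -> None:
        self.stakes: dict[str, float] = {}

    def lock(self, dev: str, amount: float) -> None:
        self.stakes[dev] = self.stakes.get(dev, 0.0) + amount

    def can_work(self, dev: str) -> bool:
        return self.stakes.get(dev, 0.0) >= self.MIN_STAKE

    def slash(self, dev: str) -> float:
        """Apply the penalty after a public audit confirms unethical behavior."""
        penalty = self.stakes.get(dev, 0.0) * self.SLASH_FRACTION
        self.stakes[dev] = self.stakes.get(dev, 0.0) - penalty
        return penalty

reg = StakeRegistry()
reg.lock("carol", 150.0)
print(reg.can_work("carol"))   # True: enough locked to join special projects
print(reg.slash("carol"))      # 37.5 tokens lost after a confirmed violation
print(reg.stakes["carol"])     # 112.5 remaining
```

Repeated violations would eventually push the stake below `MIN_STAKE`, automatically locking the developer out of sensitive work.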
Decentralized governance in AI development ensures that the development and management of AI remain fair, transparent, and inclusive. Implementing this governance model enables token holders to uphold ethical standards. Check out these ways to implement such a system on ICP.
Token-Weighted Voting: token holders (developers, users, and auditors) with a vested interest in the DeAI platform can vote on decisions that influence the AI system's mechanics.
You can vote on AI projects, decide on punishments for unethical behavior, and determine the direction of project development. This type of voting ensures that people with a stake in the system have a say in the direction of DeAI.
Setting Up a Quadratic Voting System: simply allowing anyone with a stake in a DeAI project to vote on decisions could bias decision-making, because people with deep pockets could easily sway the outcome.
The quadratic voting mechanism reduces the influence of large token holders by making the cost of additional votes rise quadratically. With this, the decision-making power can be balanced across the community.
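The contrast between the two voting schemes can be sketched as follows; the token balances are illustrative, and `quadratic_votes` uses the standard rule that casting n votes costs n² tokens:

```python
import math

# Hypothetical token balances for three holders.
holders = {"whale": 10000, "dev": 100, "user": 25}

def token_weighted_votes(balance: int) -> int:
    return balance  # one token, one vote

def quadratic_votes(balance: int) -> int:
    # Casting n votes costs n^2 tokens, so n = floor(sqrt(balance)).
    return math.isqrt(balance)

for name, bal in holders.items():
    print(name, token_weighted_votes(bal), quadratic_votes(bal))
# Token-weighted: the whale outvotes the user 400 to 1.
# Quadratic: the gap shrinks to 100 votes vs 5, i.e. 20 to 1.
```

The square root is what balances power: buying 400x more tokens buys only 20x more votes, so large holders still matter but cannot single-handedly dictate outcomes.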
Although there are tradeoffs between profitability and ethics, the two should be kept in equilibrium for DeAI: AI developers shouldn't sacrifice ethics for profitability. Tokenomics can solve this challenge and enable the right balance between ethics and profit. Below are some suggested ways to strike that balance.
Setting Up a Dual-token Model: separating utility from ethical incentives can help strike the right balance between profit and ethics. The utility token is used to access services and can be staked to gain a vested interest and a voice in decision-making, while the incentive token is used to reward good actors.
With this, AI developers can innovate freely while still being motivated to meet ethical standards.
Ethics-based Bonuses: a developer whose projects or products meet ethical standards (verified through audits, community votes, or an automated compliance system) could receive bonuses in incentive tokens.
With this, AI developers can focus on developing ethical AI without worrying about profitability.
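A minimal dual-token ledger sketch combining the two ideas above (service payments in the utility token, ethics bonuses in the incentive token); all names and amounts are hypothetical:

```python
class DualTokenLedger:
    def __init__(self) -> None:
        self.utility: dict[str, float] = {}    # pays for services, staked for votes
        self.incentive: dict[str, float] = {}  # earned only for ethical behavior

    def pay_for_service(self, user: str, cost: float) -> None:
        self.utility[user] = self.utility.get(user, 0.0) - cost

    def ethical_bonus(self, dev: str, amount: float) -> None:
        """Paid once an audit or community vote confirms ethical compliance."""
        self.incentive[dev] = self.incentive.get(dev, 0.0) + amount

ledger = DualTokenLedger()
ledger.utility["dana"] = 50.0        # hypothetical starting balance
ledger.pay_for_service("dana", 10.0)
ledger.ethical_bonus("dana", 5.0)
print(ledger.utility["dana"], ledger.incentive["dana"])  # 40.0 5.0
```

Keeping the two balances separate means market activity in the utility token never dilutes the signal carried by the incentive token.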
Transparency is one of the fundamentals of blockchain technology, and it's also a cornerstone of ethical AI. With token incentives, stakeholders can build and deploy AI systems with accountability in mind. Here are some ways to foster transparency and accountability.
Audit Rewards: just as the miners who solve mathematical problems to secure the Bitcoin network are rewarded with tokens, ethical auditors (people who review AI models and ensure they comply with ethical standards) should be compensated with utility tokens.
Auditors get rewarded for their work and are held accountable for it. This also incentivizes a robust auditing ecosystem and ensures that AI models are continuously scrutinized for ethical compliance.
Proof of Transparency: implementing proof of transparency means developers submit key aspects of their models for scrutiny. This decentralized auditing and public scrutiny makes the AI decision-making process easier to verify.
It also helps the DeAI community learn how the models were trained and ensures fairness. In return, developers receive token rewards and badges as proof of compliance.
Long-term sustainability is important if DeAI is to achieve its overarching goals. To keep ethical AI development a long-term focus, sustainable structures (like continuous rewards for development in the DeAI niche) should be set up. Here are some ways to implement this.
Continuous Reward Structure: rather than a one-off payout, token rewards should be distributed over time, on an ongoing basis, based on the ethical performance of developers.
This way, developers keep maintaining ethical standards rather than achieving a certain certification and neglecting those standards afterward.
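One way to sketch such a continuous reward stream: pay out in epochs, scaling each installment by that epoch's ethical-performance score. The scores below are illustrative:

```python
def streamed_rewards(total: float, epoch_scores: list[float]) -> list[float]:
    """Pay total/len(epoch_scores) per epoch, scaled by that epoch's
    ethical-performance score in [0, 1]; unearned tokens are withheld."""
    per_epoch = total / len(epoch_scores)
    return [per_epoch * score for score in epoch_scores]

# Illustrative: a developer stays compliant, slips, then lapses entirely.
payouts = streamed_rewards(120.0, [1.0, 1.0, 0.5, 0.0])
print(payouts, sum(payouts))  # [30.0, 30.0, 15.0, 0.0] 75.0
```

Because each installment depends on the latest audit, a developer who stops meeting the standards also stops earning, which is exactly the sustained incentive a one-off reward cannot provide.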
Decentralized AI Governance Fund: on ICP, a portion of block rewards or transaction fees could be set aside in a governance fund to support ethical AI projects. This would provide a steady stream of resources to incentivize developers to build AI with a focus on ethics.
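A simple sketch of how such a fund could accumulate; the 25% share and the fee amounts are assumptions for illustration, not actual ICP parameters:

```python
FUND_SHARE = 0.25  # assumed: a quarter of every fee goes to the governance fund

def settle_fee(fee: float) -> tuple[float, float]:
    """Split one transaction fee into (to_fund, to_validators)."""
    to_fund = fee * FUND_SHARE
    return to_fund, fee - to_fund

fund = 0.0
for fee in [2.0, 4.0, 10.0]:  # illustrative transaction fees
    to_fund, _ = settle_fee(fee)
    fund += to_fund
print(fund)  # 4.0 tokens accumulated for ethical AI grants
```

Since the fund grows with network activity itself, the supply of ethics incentives scales naturally with how much the platform is actually used.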
In conclusion, ethical AI development must be a long-term effort. One way to make this possible is by rewarding good behavior and penalizing bad actors. Tokenomics serves as a guardrail, directing stakeholders along the path that needs to be taken. The implementations above show how properly designed tokenomics can steer us toward responsible development and usage of artificial intelligence.