The economic disincentive against letting AI evolve beyond a critical threshold
Simply put, an economy is an interrelated system of production and consumption of goods and services. Company A consumes goods and services produced by Company B under the assumption (or empirical evidence) that the interaction will make it ‘better off’ (this term has multiple interpretations, but profit is usually a good proxy).
Since no single company can efficiently produce every product and service, companies specialize in a particular set of goods and services; they are usually classified into sectors like semiconductors, automotive, or consumer banking based on what they produce.
What is interesting to note here is that for most companies the final downstream consumer is a human being. For example, Lam Research (LRCX) manufactures highly complex equipment for semiconductor companies, who in turn sell semiconductor devices to automotive and electronics manufacturers, who in turn sell their products to human beings.
Trace any company’s chain of sequential customers far enough downstream and it will most likely terminate with a human being as the final customer.
Human evolution has been a continual process spanning several million years, with our species, Homo sapiens, having first appeared roughly 300,000 years ago.
Over that painstakingly long and slow process of survival of the fittest, we built the traits and characteristics most conducive to survival in our local conditions.
The evolution of AI, by contrast, has been frighteningly fast. Over the past 50 to 60 years, AI has evolved from expert systems to ChatGPT. Assuming the rate of progress holds even roughly, it is reasonable to argue that AI could reach a stratum of capability above human beings long before humans evolve into some higher entity.
If we assess historical instances of mass shifts in the economic order, human beings as a species were still needed to run the emergent processes and systems of the new economy. For example, when industrialization reached agriculture, it became feasible and practical to replace the large number of farm hands on a field with a few machines and processes.
At the same time, the complexities arising from industrialization created employment in newly emergent fields like logistics and accounting.
In a situation where we have a super-capable AI (Statement 2), human beings will find that it is economically impractical for companies to choose humans over AI.
As a result, a significant share, if not 100 percent, of the population becomes unemployed; per-capita income falls, which in turn means the source of revenue for corporations and companies depletes (Statement 1).
So, there’s an economic disincentive for corporations to let AI evolve unrestrained.
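The feedback loop above can be sketched as a toy simulation. All parameters here (automation rate, wage share, starting workforce) are hypothetical numbers chosen purely for illustration, not estimates of any real economy:

```python
# Toy model of the feedback loop: automation -> unemployment -> lower
# household income -> lower corporate revenue. All numbers are hypothetical.

def simulate(automation_per_step=0.2, steps=5, workforce=1.0, wage_share=0.6):
    """Each step, firms automate a fixed fraction of remaining jobs.
    Since humans are the final customers, household income sets consumer
    demand, and firm revenue is assumed to track household income."""
    employment = workforce
    history = []
    for _ in range(steps):
        employment *= (1 - automation_per_step)  # jobs replaced by AI
        income = wage_share * employment         # per-capita income shrinks
        revenue = income                         # revenue tracks household income
        history.append((round(employment, 3), round(revenue, 3)))
    return history

for step, (emp, rev) in enumerate(simulate(), start=1):
    print(f"step {step}: employment={emp:.3f}, revenue={rev:.3f}")
```

Under these assumptions, each round of automation that cuts employment cuts corporate revenue by the same proportion, which is the disincentive the argument describes.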
If there were broad consensus among human beings on a quantifiable level to which AI may evolve, we could sustain an economic order in which human beings remain part of the economic engine. This would, however, run against the interests of a free-market economy.
I guess a concluding remark here is that we can take solace in the fact that it is very unlikely we will approach an Artificial General Intelligence capable of surpassing human beings in all fields anytime soon.