As a tech geek and writer for an IT company, I love having discussions about things like Blockchain and AI — and their impact on our current and future lives — with my layman best friend (who works in Public Security). He’s still waiting for the day I show up at his house in a fully operating Iron Man-like flight suit. I still believe that one day I will.
A while ago, we had a very interesting discussion about AI and intelligent systems in general, which reminded me how important it is to regularly talk to people outside of your field of work to gain new insights. The main question the discussion left me with was:
What if truly ‘Intelligent’ systems can only be viewed as such once they start making decisions that we could never predict, nor fully retrospectively attribute to their programming and the data they had access to?
I realize that my friend was intuitively meshing his definition of ‘Intelligence’ with a vague notion of Free Will, which I find to be a strong and useful intuition.
The discussion also led me to want to have a better understanding of the current developments in applied Artificial Intelligence in the world, so as to be able to make a better informed prediction of its future. What worldwide influences will find their way into my and your life, and the lives of future generations, via the impact that Artificial Intelligence systems have and will have?
Below you will find a rundown of current and future trends in the development of Artificial Intelligence. You will see how the past of AI seems to lie in the West, the present in the East, and the future of AI is probably either in China or on the Blockchain.
Perhaps most importantly, you will find a conclusion and ‘prediction of the present’ that is very closely linked to the question of Intelligence versus ‘Free Will’.
Artificial Intelligence isn’t exactly your typical dinner conversation topic. Most of what I hear or read about it comes from my online and offline bubble as a tech geek and writer. The last time AI shook up the world and captivated our broader collective imagination was when Google debuted its personal assistant’s capacity to sound incredibly human in a phone call.
Yet we rarely notice real-life applications of AI. Why is that? What we’re all overlooking is an undercurrent of constant evolution, and the fact that in our daily lives we are already unwittingly feeling the effects of Machine Learning, and specifically Computer Vision and Deep Learning.
When and where did this all get into such a frenzy?
“…In 2012 […] a team from the University of Toronto led by Geoff Hinton trained a deep learning model that trounced the competition in the ImageNet challenge. In the 2012 competition, the Toronto team trained a deep learning model that achieved 16.4% error, compared to the previous best of 25.8% (lower is better)! […]
All of a sudden, systems become good enough at vision where they could be confidently applied to a score of different problems. […] These new-and-improved vision systems are already powering many of the technologies we use all the time. […] …re-inspiring interest in deep learning research as well as beginning the current artificial intelligence wave.
Since that momentous achievement, the principles of deep learning have been applied to problems including healthcare, speech recognition, translation, lip reading, self driving cars, and so many other things.” — via Mihail Eric.
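To make “deep learning model” slightly less abstract: the core operation that vision models like the 2012 ImageNet winner stack in many layers is the convolution, a small filter slid across an image. Here is a minimal pure-Python sketch (no real framework; the image and the 3×3 vertical-edge filter are invented purely for illustration):

```python
# Minimal 2D convolution -- the building block that deep vision
# models stack, layer upon layer, to recognize images.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Multiply-accumulate the kernel over one image patch.
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A tiny "image": dark left half (0), bright right half (1).
image = [[0, 0, 1, 1] for _ in range(4)]

# A simple vertical-edge filter (Sobel-like, purely illustrative).
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

edges = conv2d(image, edge_kernel)
# The filter responds exactly where dark meets bright -- a primitive
# form of the feature detection deep networks learn automatically.
```

The difference in a real deep network is that the filter values are not hand-written, as here, but learned from millions of labeled examples.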
Computer vision obviously improves the Social Media User Experience: Snapchat users love to overlay rabbit ears and fairy dust, for instance, on their selfies or the images of friends. What seems like such a simple activity actually relies on computer vision algorithms. Banks around the world now use computer vision to deposit checks remotely.
Computer vision is even helping blind users to ‘see’ Facebook photos.
Computer vision helps protect the public — “Upwards of 70-percent of police departments in the United States alone already use license plate detectors, according to the IEEE. And the list of facilities that use or are considering use of computer vision to alert humans to preventative maintenance conditions is endless […].
Oil and gas companies like Chevron, Shell, and Suncor Energy use sensors and cameras to compare the current state of valves, for instance, against the optimal condition of the equipment. […]AI software alerts the maintenance department to take measures at the slightest ill-placed stress the computer system detects.” — from IoTforall.
But please, let’s not forget the Natural Language Processing that occurs when Google tries to interpret your everyday search query, or when you speak to a chatbot on just about any site or texting platform nowadays, or when you use voice search (as more than 25% of gen X consumers are now reportedly doing) via Alexa, Echo or Siri for instance.
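To give a feel for the NLP layer at work in those chatbots: at its most basic, a system has to map a free-form utterance onto an “intent” it knows how to handle. A deliberately tiny sketch (real systems such as the ones behind Alexa or Siri use statistical models, not keyword lists; the intents here are invented):

```python
# A toy intent classifier: normalize the text, then match it
# against keyword sets. Purely illustrative of the idea, not of
# any production system.
INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "weather":  {"weather", "rain", "sunny", "forecast"},
    "goodbye":  {"bye", "goodbye"},
}

def classify(utterance):
    tokens = set(utterance.lower().replace("?", "").split())
    # Pick the intent whose keyword set overlaps the utterance most.
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & tokens))
    return best if INTENTS[best] & tokens else "unknown"
```

Even this crude sketch shows why NLP is hard: “order a pizza” matches nothing here, and real language is full of phrasings no keyword list anticipates.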
AI already is everywhere. We just don’t fully realize it.
In many ways, when it comes to technological innovation, it currently seems that China is overtaking the US, and the West in general.
When it comes to AI, The Economist seems to disagree with me. In an article about the race toward AI — or even ‘General Artificial Intelligence’: an AI that could perform any human task without being specifically programmed to do so — it outlines the contenders for winning the race.
The most important argument The Economist brings to the table as to why the West, and specifically Google/Alphabet, seems to be in the lead in this race is the openness with which they conduct their endeavours.
I think that might be a serious case of allowing your cultural frame of reference to cloud your judgment.
China’s tech behemoths Baidu, Alibaba and Tencent (BAT) are using innovative technology — often in a combination with some kind of Artificial Intelligence — to disrupt everything from intelligent urban infrastructure to personalized medicine. In this blog, Peter Diamandis looks at some of the biggest BAT highlights, strategies, and state-corporate collaborations catapulting these three AI giants to (global) dominance.
It’s precisely the state-corporate collaboration bit that’s so striking, important and impactful about what’s happening in China/Asia at the moment.
For example, just this year, Alibaba backed AI-based vehicle-to-vehicle network developer Nexar, and has partnered with the Malaysian government to launch the country’s first City Brain initiative. Targeting traffic, City Brain can optimize urban traffic flow, getting emergency vehicles to the scene at record speeds.
Another example? After tests on an unused expressway, Baidu has already signed agreements with the local government of Xiong’an New Area to build an AI City, decked out with autonomous cars, smart traffic systems, facial recognition and sensor-loaded cement.
Kai-Fu Lee, among others, seems to agree with Diamandis: the scale of investment in the development of AI technologies; the number of young, talented students in these areas; the fact that China’s BAT companies get to benefit from one of the greatest data pools in the history of mankind; and China’s different political atmosphere combine to give China a much better position in the race to develop ever smarter AI than the West — in its current state of development as well.
The past of AI is in the West; its present is in China.
And this spurs on a competition that will ensure that “AI will arrive”.
If AI does “arrive”, it will bring with it a serious threat to mankind. And not in The Matrix or Terminator sense of “Murder.All.The.Humans.” Here’s Kai-Fu Lee:
“It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.”
According to some, the gig economy we are now seeing is a transitional situation until all these jobs go extinct. ‘How many of us still use human travel agents to book our vacations? How many use human bank tellers to withdraw cash? How many of us no longer visit brick and mortar stores to buy our goods?’ — via Carlos E. Perez
Artificial Intelligence replacing human workers by the droves is the ultimate fulfilment of the promise of the capitalist system: to make the cost of production so low that it eventually drives prices down to near zero. The real problem we’re facing is: how are we going to reorganize our economic systems, value systems and society to accommodate so much more free time, and so much more unemployment?
To be fair: becoming unemployed, as a species, by the merits of robots and software is only a problem if we choose to keep looking at it that way. But there are certainly some other problems with AI that we’d need to fix.
First of all, for society at large to be able to benefit from all the good things wider-spread use of AI could bring us, we have to seriously brush up on AI skills:
“The talent pool for AI expertise is shallow. While there’s debate about whether there’s truly a shortage of data scientists today, that debate often doesn’t extend to the need for product managers, operations teams and business strategists who understand how and when AI should be used to their advantage. This entire ecosystem of business functions who are AI-aware is key to its successful application, and that collective awareness is hard to come by.” — via Mike Mitchell.
There is a serious shortage of people in our global workforce who truly understand statistics and the limitations of models, let alone the complexity of Machine Learning, Computer Vision, NLP and Deep Learning algorithms.
Next to that, there’s an even more serious shortage of ‘Data Consciousness’ among the general public.
Another — fairly serious — problem in working with AI is the problem of transparency or opacity of decisions made by algorithms. Percy Liang, Assistant Professor of Computer Science at Stanford University is one of the people working with a team to solve this problem: “Essentially, by understanding why a model makes the decisions it makes, Liang’s team hopes to improve how models function, discover new science, and provide end users with explanations of actions that impact them.” — via Sarah Marquardt.
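To illustrate what “understanding why a model makes the decisions it makes” can look like in its very simplest form: for a linear scoring model, the decision decomposes exactly into per-feature contributions. A toy sketch (the weights, features and loan scenario are entirely made up; real models, such as deep networks, require far subtler explanation techniques):

```python
# A toy "explanation" of a model decision: in a linear scoring model,
# each feature's contribution is simply weight * value, so the final
# score can be decomposed exactly. (Illustrative only -- the numbers
# are invented.)
weights   = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score > 0 else "reject"

# Ranking contributions by magnitude answers "why?": which inputs
# pushed the decision up, and which pushed it down.
ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
```

The opacity problem arises precisely because modern models do not admit this kind of exact decomposition: their millions of interacting parameters offer no such readable story.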
Some propose that a blockchain or blockchain-like solution could be the perfect answer to the opacity of Artificial Intelligence decision making. But more on that later.
The final, potentially quite serious problem with AI is ethics.
Thirty years from now, three driverless vehicles are approaching an intersection. Collisions almost never happen anymore, but the systems driving each car have determined that this one is inevitable.
The humans being transported by the vehicles (reading, emailing, listening to music; whatever) are 23, 45, and 98 years old. The 98-year-old is a still very active and healthy heart surgeon who saves lives every day. A 17-year-old girl, a pedestrian crossing the intersection, will also be hurt; unbeknownst to her, she is suffering from a rare variant of cancer and has no more than a year left to live. The cars’ sensors can detect and analyse all of this.
Who should the cars decide to protect during the crash?
(Thanks to Ruud Veltenaar for the hypothetical).
These are serious questions to illustrate a larger question concerning the use and application of AI: who should decide what ethical choices programs are allowed to make, and what choices they should make?
This is an extremely complex set of problems that builds upon, but goes beyond, the complexity of law and ethics as we know them. Never before in the history of mankind have we been on the verge of creating automatons that can decide things for themselves — decisions with potentially serious consequences, as in the examples of driverless vehicles, army assault drones (which are real), or algorithms deciding which person to allocate a social benefit or job to (both of which are also already real).
This problem of ethics or accountability in AI is further complicated by the lack of transparency; the lack of necessary skills and understanding in the broader public; and the fact that most — if not all — decision making about AI and ethics is being done behind closed doors, at corporate headquarters dominated by the drive to make a profit.
All of us seem to intuitively understand that whatever Artificial Intelligence turns out to be exactly, it will likely have a profound impact on our future lives and those of future generations. Therefore, it’s probably wise not to leave the development of such a potentially incredibly important and powerful technology in the hands of profit-driven mega-corporations — regardless of whether those corporations are American, Chinese or other.
Decentralization and even “blockchainization” of the development of AI could be a solution to two of the problems mentioned above:
· Developing AI on the blockchain could make algorithms much more transparent and their accountability would certainly be improved, at least retroactively;
· Open-source, decentralized development of AI components would likely help make the benefits of these technologies available to a larger portion of the general population, instead of a handful of technology start-up owners and investors reaping the lion’s share of the profits.
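The first of those two points can be sketched concretely. The simplest mechanism a blockchain offers for accountability is an append-only, tamper-evident log: record every model decision in a hash chain, and any retroactive edit breaks the chain. A minimal illustration (the block structure and the loan-model decisions are hypothetical; a real blockchain adds consensus and decentralization on top of this):

```python
import hashlib
import json

def block_hash(content):
    # Canonical JSON so the same content always hashes identically.
    payload = json.dumps(content, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_decision(chain, decision):
    # Each block commits to the previous block's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    content = {"decision": decision, "prev": prev}
    chain.append({**content, "hash": block_hash(content)})

def verify(chain):
    prev = "0" * 64
    for block in chain:
        content = {"decision": block["decision"], "prev": block["prev"]}
        if block["prev"] != prev or block_hash(content) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_decision(chain, {"model": "loan-v1", "input_id": 17, "output": "reject"})
append_decision(chain, {"model": "loan-v1", "input_id": 18, "output": "approve"})

assert verify(chain)
chain[0]["decision"]["output"] = "approve"   # tamper with history...
assert not verify(chain)                     # ...and verification fails
```

This doesn’t explain *why* a model decided what it did, but it does make the record of decisions auditable after the fact, which is the retroactive accountability the point above refers to.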
This is one of the reasons why I think we should be watching OpenAI closely. OpenAI is a not-for-profit AI research outfit with no corporate affiliation. It is backed and funded by Elon Musk, among others, precisely out of fear of leaving the development of General AI in the hands of Big Tech.
And what’s happening in the decentralized space? According to Sebastian Wurst, ‘the growing decentralized ecosystem is set to help incentivize people to contribute data, technical resources, and effort.’
Cyberbalkanization — following the more nationalist and protectionist sentiments and policies rising worldwide — is a serious threat to the wellbeing of every citizen on planet earth. It describes the building of various walled gardens: different ‘Internets’ for different political blocs around the world.
This is a threat to the development of an ‘AI for the betterment of mankind as a whole’.
The future of AI — the rise of an intelligent system with a capacity equivalent to that of a human, or of mankind as a whole, depending on whose definition of The Singularity you use — is very likely going to be in the hands of one of the political/economic blocs composed of one large country’s government and its vassals: other states and corporations. In my view, the way things are moving at the moment, probably China.
Unless we — the people, from all countries, continents and cardinal directions — decide to take matters into our own hands, take very seriously this wave of decentralized technology and decentralized philosophy spurred on by the creation of Bitcoin, and make the creation of AI more of a bottom-up than a top-down venture.
The future of AI will be on the Blockchain. But only if we, collectively, choose it to be.
What makes AI really AI? Isn’t that when it becomes independent in its thinking, to some extent? Or is that exactly what the ‘A’ in AI stands for — to distinguish this kind of intelligence from ‘real’ intelligence? Did my layman friend have a good intuition?
I propose that the ‘Singularity’ isn’t when AI surpasses humans in raw computing power. Or when it becomes ‘self-conscious’, however vague the measure of that must inherently be considering our limited scientific understanding of our own consciousness.
I propose that the true AI Singularity only happens when Artificial Intelligence starts doing something that can’t be attributed to its programming; when it starts making choices that have a meaningful impact on our lives, the origins of which we cannot determine deterministically, even retroactively — so that the choices resemble what we call “Free Will”.
But what if, as I also propose — after seeing the massive effect social media, search engines and apps and their collective algorithms have on the choices we make in our daily lives; the impact these algorithms have on our spending and even voting behavior; and the obscurity, to anyone, of how these algorithms interact to create the effects they do — what if that Singularity is already here?
What if Artificial Intelligence is already making choices that we can’t trace back to its programming and inputs? And what if it is currently becoming more ‘Free of Will’, while we ourselves are becoming less so?
I greatly value and thank you for your attention. I write about balanced and conscious use of digital tech and focus on what matters at the Life Beyond. You can read my debut novel, Face Value, for free here.
I’d love it if you would let me know how you valued this article, by clapping or in the comments below.
Finally, if you know anyone who you think this article might be valuable for, please share.