I think everyone agrees that artificial intelligence is a “game-changing technology”.
To be sure, it is still in the early stages of its development, and current expectations are often set too high. Singularity — the point where artificial super-intelligence surpasses human intelligence — is still “relatively” far away.
Yet AI is already changing many areas of life, and this change is only set to continue. This became very clear when I visited Japan again last week:
There is simply no turning back.
Everyone needs to understand artificial intelligence and integrate discussion about AI into their own particular area of expertise.
But are we doing enough?
When I attend conferences and events, I get the feeling that the answer to this question is: No, we aren’t doing enough.
Let me explain by sharing my experience of conferences and other events.
It is always great to speak at conferences.
I love to engage with other participants and share my experiences and insights about the impact of the digital revolution on the world.
Even in a digital age, the experience of having face-to-face engagement with an international audience adds tremendous value.
This is particularly true when personal experiences are shared or predictions about the future are made. Such presentations can be inspiring and motivating, and cannot easily be replaced by other forms of communication.
Conferences offer a unique opportunity to make you think, to encourage dialogue with other participants and to spur creativity.
Yet, my own recent experience of many conferences is disappointing. Most presenters continue to focus on traditional debates without paying attention to the challenges and opportunities created by new technology. Participants want to remain in their “comfort zone”, and that means focusing on the conventional issues.
And when I say “traditional debates”, I mean very “traditional”. I attend many business- and law-related conferences, for instance, and if we built a time machine and travelled back ten or more years, the same issues would be under discussion. Very little would have changed.
There are, of course, exceptions, but generally speaking the issues and arguments are settled, and all we get is a repeat performance. Even when “new technologies” are discussed, old models are used to frame the discussion and to understand and explain the implementation and effects of those technologies.
This is a pity. It is a lost opportunity, because conferences, even in a digital world, have enormous potential to be a unique “platform” for engaging with the meaning and effects of important new technologies and their applications.
This is particularly true for artificial intelligence.
AI is more than a “tool” that improves, for instance, manufacturing processes. It is more than the next step in compliance. It is more than a system to make predictions that facilitate action. It is probably even more than a disruptor of “knowledge work”, more generally.
AI has the potential to transform every aspect of how we live, work and do business.
The more I think about it, the more I am convinced that AI will affect the way we “trust”.
Who, what and how we trust are all being transformed as AI becomes more integrated into our everyday lives.
This may sound a little far-fetched, but there are already many examples of how we increasingly place our trust in algorithms, software and computer code.
We buy products, book accommodation, make reservations at restaurants based on reviews and recommendation algorithms. We trust Wikipedia to give us the correct information. When we ask Google a question we expect — and trust — that we receive the correct answer.
We all may have some residual suspicion of big tech companies, but in our everyday lives we tend to put that scepticism to one side and “trust” in the technology.
As such, we already live in a world of so-called “ubiquitous computing”.
Computing is now embedded in all aspects of our everyday lives.
Computer code provides the unseen and unnoticed “architecture” structuring our whole existence. We find a plethora of examples in our work, recreation, communication, consumption, travel, or education. All of these areas of life, as well as the choices associated with these activities, are increasingly organized by and around digital technologies.
Think about how much of our lives is spent interacting with devices that are, at a deep level, operating digital code.
Such interaction can be direct and proximate, such as using a smartphone or computer, or more distant, such as traveling to work on a subway system that is automated in various ways.
In both cases, it is computer code that makes the experience possible and computer code that, ultimately, provides the structure and choice associated with that experience.
With products getting smarter and more connected, it is easy to predict that we will only place more and more “trust” in machines.
In fact, combined with other fast-moving technological developments in the areas of blockchain, IoT, sensors, autonomous driving and big data, it is only to be expected that AI will play a dominant role in our future lives.
And there is no doubt that we will all trust AI. Everything is pointing in that direction.
And this is where things become interesting and important.
Can we really trust AI?
Should we just let things run their course and accept the consequences?
Most people I talk to tend to agree: “We shouldn’t just trust AI”.
We should try to understand how AI is already affecting our lives. After all, the impact will be much more significant than that of the introduction of the Internet.
In order to build trust in AI, we need to have detailed discussions at all levels (and not only among the AI specialists). We need to focus on what it means, how it is already affecting our lives and how it will affect our lives in the future.
Recently, I have been thinking more and more about this. And whenever I am speaking at or attending a conference at home or abroad and AI is not on the agenda, I believe that we are missing an opportunity.
After all, conferences are an excellent opportunity to start discussing the impact of AI on fields outside of “technology”.
We have to ask questions about AI, understand how AI systems are trained, where the data is coming from, etc.
In particular, we need to think about the values or “ethics” that structure how AI operates.
For example, how do we want an autonomous car to react when confronted with an unavoidable accident? Should it minimize the loss of life, even if that means sacrificing the occupants of the car, or should it prioritize the lives of the occupants at any cost? Alternatively, should the choice be a random one?
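One way to see that these questions are more than philosophical thought experiments is to notice that any answer ultimately has to be written down as code. The sketch below is purely illustrative — the policy names, the outcome structure and the `choose_action` function are all invented for this example — but it shows how different ethical priorities would translate directly into different machine decisions:

```python
import random
from dataclasses import dataclass

# Hypothetical outcomes of an unavoidable accident. Each option records the
# expected casualties inside and outside the vehicle. (Invented for illustration.)
@dataclass
class Outcome:
    name: str
    occupant_casualties: int
    bystander_casualties: int

def choose_action(outcomes, policy, rng=random):
    """Select an outcome according to an (invented) ethics policy."""
    if policy == "minimize_total_harm":
        # Utilitarian: fewest casualties overall, even if that sacrifices occupants.
        return min(outcomes, key=lambda o: o.occupant_casualties + o.bystander_casualties)
    if policy == "protect_occupants":
        # Prioritize the people inside the car at any cost.
        return min(outcomes, key=lambda o: (o.occupant_casualties, o.bystander_casualties))
    if policy == "random":
        # Leave the choice to chance.
        return rng.choice(outcomes)
    raise ValueError(f"unknown policy: {policy}")

options = [
    Outcome("swerve into barrier", occupant_casualties=2, bystander_casualties=0),
    Outcome("stay on course", occupant_casualties=0, bystander_casualties=3),
]

print(choose_action(options, "minimize_total_harm").name)  # swerve into barrier
print(choose_action(options, "protect_occupants").name)    # stay on course
```

The point is not that either rule is the right one; it is that someone — an engineer, a company, a regulator — has to pick one, and that choice deserves open public debate rather than being buried in an implementation detail.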
Transparent, open and inclusive dialogue seems to be the best way to build real trust in the systems that will structure our lives in the future.
My visit to Japan made very clear that we need open discussion and dialogue about artificial intelligence and the digital transformation, more generally.
We should understand that AI will have a much broader impact than most of us realize. In particular, policy makers and governments should encourage and participate in discussions about the meaning and effects of AI.
It’s great to have conferences, but even when the topic is not remotely linked to digital technology, we have to ask ourselves whether and how that topic will be affected by technology in the future.