Business & finance professor, digital lawyer, restaurant owner, board member & traveler.
Are you a “Techno-Optimist” or a “Techno-Pessimist”?
This question kept popping up in the discussion on digitization, productivity and regulation at the OECD Global Forum on Productivity in Berlin on September 15, 2017. It seemed to frame much of the discussion and “divide” participants.
On the one hand, “Techno-optimists” love the current technological revolution.
For “optimists”, the pace and range of innovations, as well as the shorter innovation cycles, are a source of wonder and excitement. They believe that, on balance, technology will solve many societal problems and improve our lives. In their view, governments could and should be doing much more to promote innovation, particularly in fields such as artificial intelligence and robotics.
On the other hand, there are the “Techno-pessimists”.
For them, the current age is experienced not with excitement, but with trepidation and fear. They believe that the current pace of automation is a serious problem and that technological developments in the fields of AI and robotics could, without proper vigilance, threaten mankind. In their view, government needs to do more to control new technologies in order to reduce our technological dependency and mitigate unforeseen risks and dangers.
“Optimist or Pessimist”
“Wonder or Fear”
“Encourage or Control”
These are the distinctions that we often use to make sense of what is happening when thinking about new technologies. Certainly, these are the distinctions that have been used in trying to come to terms with previous technological revolutions.
But speaking at the event in Berlin made me realize that something else is going on. The difference between “techno-optimist” and “techno-pessimist” is actually irrelevant now. At least, it obscures the really important question that we all should be asking ourselves:
Am I willing to participate in co-creating an automated future and how can I develop the capacities and skills to contribute to this “project”?
To understand what this means and why it matters, it helps to think about how the current technological revolution is different from previous periods of technology-driven economic and social change.
Consider the major technology-driven “revolutions” of the last 250 years: the Industrial Revolution, the age of steam and railways, the age of steel and electricity, the age of oil and mass production, and the current age of information and telecommunications.
Commentators on these technological revolutions — check out Carlota Perez, for instance — give us a powerful framework for making sense of what happens during periods of technological-driven social and economic change.
In particular, there seem to be three “phases” that characterize such “revolutions”:
First, there is the “Installation phase”. A new technology is discovered, developed and scaled. During this phase, the core applications of the technology are established and introduced.
And third, there is the “Deployment phase”, when there is large-scale implementation and integration of that technology throughout the economy and society.
But, what is really interesting is what happens between “Installation” and “Deployment”, i.e., the second phase.
All previous technological revolutions seem to have experienced a “Crisis phase” around 20–30 years after “Installation”. Expectations quickly outstrip the capacities of the new technology, creating a “bubble” that, at some point, “bursts”, prompting a painful process of “collapse and re-adjustment”.
Venture capitalist Chris Dixon summarizes this process as follows:
“Each revolution begins with a financial bubble that propels the (irrationally) rapid installation of the new technology. Then there’s a crash, followed by a recovery and then a long period of productive growth as the new technology is “deployed” throughout other industries as well as society more broadly. Eventually the revolution runs its course and a new technological revolution begins.”
In all previous technological revolutions, “government and business leaders working together” played a crucial role in helping society move from “Crisis” to “Deployment”; i.e., market failure triggered government “regulation”.
State-driven intervention involved a reactive process of “fact-finding” (gathering all relevant information/evidence regarding a new technology), “understanding” (identifying and evaluating the risks) and, finally, “regulation” (the imposition of a new legal framework that would control/regulate how a new technology would be deployed).
In this way, a “Crisis” provided a vital opportunity for state-managed learning about, and understanding of, new technology. Technologies could then be disseminated in a controlled and responsible way.
If we apply this framework to the present technological revolution, the conclusion would seem to be that we will soon enter the deployment phase of the telecommunications and IT revolution.
Think about it.
The installation phase of the latest revolution started in 1971, followed by the dot-com crash of 2000 and the 2008 Financial Crisis. These crises could be seen as the turning point.
They should be followed by a process of state-managed “readjustment”, which in turn would result in deployment and “new ways of living”, “new ways of producing”, “new ways of communicating” and “new ways of working”.
However, there are good reasons to think that things might be different this time.
With the exponential growth of new technologies, such as artificial intelligence, blockchain, Big Data, robotics and synthetic biology, it seems counter-intuitive to argue that we are still in the computer and Internet era.
The age of the computer and the Internet is already over. At least, in the sense that deployment has already occurred. Instead, we have entered the “installation phase” of the next revolution, which could be called the “automation era”, as it involves a combination of new technologies.
We now live in an age of simultaneous and highly complex innovations across diverse fields.
The speed, range and persistence of technological innovation means that any kind of reactive learning and management is extremely difficult, if not impossible.
A paradox of recent new technologies is that they make our lives easier, but they make the world harder — perhaps even impossible — to understand.
We live in a world of radical uncertainty, in which all we know is that there are many things that we do not know. As soon as we believe that we have a clear understanding in one context, a new development has already occurred that renders that understanding obsolete.
But more than that, complex man-made systems are now beyond human comprehension, and it is a nostalgic fantasy to believe that we can still achieve that kind of understanding. For the first time in history, we live in a world where human technologies appear to be beyond human understanding.
The companies that were driving the information and telecommunication revolution are already being replaced by “younger” companies that are taking the technological evolution to the next level.
Government can no longer rely on partnering with established corporations when those very incumbents are facing constant challenge from innovators.
Moreover, new and upcoming technologies are less centred around a specific place (innovations are emerging everywhere), which makes it much more difficult for business leaders and national governments to develop a coordinated and controlled deployment.
The “deployment” of new technology is much less controlled than in previous technological revolutions.
Consumers also play a much more important role in the acceptance and dissemination of the technologies.
Business leaders and governments are no longer in the driver’s seat. They no longer have the capacities to lead the process of readjustment and deployment that has previously occurred. Deployment is no longer (mainly) a “top down” process, but “bottom up”.
Listening to the discussion in Berlin made me realize that the “technological revolution theory” needs to be revisited. At least, as it applies to the present and the future.
Well, for a start, we should not wait for a crisis and then try to “figure out” what’s happening. “Reacting” is no longer an option.
Governments need to shift from a focus on “understanding” the risks and then designing a framework for the controlled deployment of technology, to active participation in the project of co-creating the future in partnership with other stakeholders.
In order to do this, governments need “touch points” (sensors) in society that facilitate constant dialogue and collaboration with innovator-entrepreneurs.
The emergence of new regulatory models around “regulatory sandboxes” and “regulatory testbeds” shows that (at least some) governments understand this need for a more proactive and dynamic role.
Clearly, these regulators believe that pre-emptive dialogue and collaboration can facilitate the process of designing better regulatory solutions.
The business leaders of today’s most successful companies also seem to get it.
They are spending more on innovation, setting up innovation labs and focusing on becoming more agile.
There is still a lot of theatre and window-dressing, but the “winners” have been able to transform their companies into “open and inclusive ecosystems”.
In particular, they have created more engaged and fluid relationships with start-ups (often acquiring them, but letting them keep their own identity and culture).
The most successful companies also have a more open relationship with consumers. Even B2B businesses understand the need for much greater engagement with the end-consumers.
Well, everybody needs to ask themselves:
Am I willing to participate in this project of co-creating the future together?
What can I contribute and how can I build the skills to make a more meaningful contribution that will provide me with greater personal satisfaction?
The really important distinction is not between the “optimists” and the “pessimists”, but between those who choose to ignore what is happening, those who still try to understand it, and those who seek to actively participate.
But, it still amazes me how few people are actually engaged in, or even thinking about, this project of “co-creating the future”.
When you engage with the writers and readers on Medium, it appears that we are all part of the discussion. But it surprises me how few colleagues, students (including millennials) and others are aware of what’s happening (particularly the challenges) and the possibilities that are now available.
Education plays a crucial role.
Everybody needs to be better equipped to somehow participate in the “process of co-creating the future”. The focus should be on creativity, design, speed, agility, choice, freedom and responsibility.
These are the values of the present and the future, and more needs to be done to give people the skills to operate in this flatter and more open world.
In this new increasingly automated world, it doesn’t matter whether you are a “techno-optimist” or a “techno-pessimist”.
Neither option seems to make much sense when we can no longer understand or control technological developments in the way that was possible in the nineteenth and twentieth centuries.
In this new world — where the only real choice is whether to participate in the project of co-creating the future or not — everyone needs to be asking themselves how they want to respond to this challenge, and what they might do to improve their skills and maximize opportunities for personal fulfilment.
These seem like much better and more important questions.
Thank you for reading! Please click the 👏 (which can be clicked as many times as you want) below, or leave a comment.
There is a new story every week. So if you follow me you won’t miss my latest insights about how the digital age is changing the way we live and work.