What is the Best Response to Artificial Intelligence and Robots?

by Erik P.M. Vermeulen, April 12th, 2017

People need to embrace AI and robots. Denying the impact of new technology creates a parallel world that risks harming society.

“Good” or “Evil”

Discussion of the social effects of new technologies — such as AI, robots or big data — normally distinguishes between two different attitudes or groups.

The first group focuses on the social benefits — or “good” — of such technologies. This view emphasizes the positive impact of AI, robots and big data on society. For example, its proponents focus on how technology can stimulate healthy aging, fight poverty or protect the environment.

For instance, big data can assist doctors in making early diagnoses that facilitate more effective treatment. Autonomous and smart systems can help fight poverty and protect the environment. Consider the revolution in agriculture. Technology makes farmers more efficient in terms of fertilizing and harvesting crops.

On the other hand, there are the skeptics. People in this group are more pessimistic about the impact of technology and highlight the potential risks — or “evil” — of AI, robots and big data. They focus on the danger of smart machines getting ever smarter and are concerned that robots are taking on increasingly human-like roles. In their view, artificially intelligent robots will take more and more jobs away from people.

And such fears don’t stop there. The skeptics warn of other, more serious dangers. Their biggest fear is that artificial super-intelligence may eventually wipe out humanity if we are not vigilant in staying one step ahead of technological evolution. According to this dystopian view, the humanoid machines that appear in movies (Ex Machina) and TV series (Humans and Westworld) are about to make their entrance into the real world.

What is ironic is that the “good” and “evil” views on AI, robots and big data are not mutually exclusive. They often complement one another. In fact, both views are held simultaneously, frequently by technologists and people who have real experience with developing and scaling disruptive innovations.

It is perhaps not surprising that Elon Musk, Stephen Hawking and Bill Gates have joined other technology experts in warning about the possible dangers of machine learning and artificial intelligence, without necessarily disregarding the benefits of the new digital technologies.

A Culture of Denial

And yet, the real issue isn’t whether we fear or embrace the new digital technologies. After all, both groups acknowledge the profound social impact of such change. They just disagree on which direction we are going.

However, there is a third — and by far the biggest — group that is in denial about the social impact of AI, robots and big data. In particular, this group fails to recognize how the world has been transformed by rapidly evolving new digital technology.

One symptom of this culture of denial is that “old world” ideas, concepts and models are used to make sense of the new. Such an approach is undoubtedly a source of comfort, but it is unlikely to be successful. After all, there is a disconnect between modes of thought developed in an era of industrial capitalism and the very different realities of today.

But there is more. A culture of denial risks creating a parallel world — a fictitious fantasy world, if you like — that is out of touch with reality. And this kind of “world-building” may pose a greater threat to society than the new digital technologies.

World-Building

In one of his videos, Evan Puschak, host of the popular YouTube series “The Nerdwriter”, explains how the phenomenon of “world-building” can be found everywhere today. It is no longer the preserve of fantasy authors such as Tolkien or C.S. Lewis. In fact, the current popularity of world-building is unsurprising given the emergence of the digital world. Let me explain.

There is no denying that disruptive technology, such as AI, robotics and big data, has transformed the “analogue world” into a “digital world”, and that this new world is increasingly structured around networks, computer code, algorithms and machine learning.

The social and economic effects of these technological developments are profound. The exponential growth of digital technologies will only further reshape our conceptions of work, consumption and leisure. It is important to note that we may still not have reached the “knee” of the exponential growth curve.
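As a back-of-the-envelope illustration of that last point (using purely hypothetical numbers, not a forecast), the short sketch below shows why the “knee” matters: on a doubling curve, each new period adds more than all previous periods combined, so most of the change always still lies ahead.

```python
# Hypothetical illustration of exponential ("doubling") growth and its "knee".
# The starting level and the number of periods are arbitrary assumptions.
level = 1.0
previous_gains = 0.0
for period in range(1, 11):
    gain = level              # doubling from "level" to "2 * level" adds "level"
    level *= 2
    print(f"period {period:2d}: level {level:6.0f}, "
          f"gain this period {gain:6.0f}, all previous gains {previous_gains:6.0f}")
    previous_gains += gain
# Each period's gain exceeds the sum of all earlier gains, which is why the
# curve looks flat for a long time and then seems to explode at the "knee".
```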

Yet, the third group continues to use old world models, methods, institutions and regulations to think about, understand, analyze and structure a very different and fast-changing new reality. This creates a “constructed fantasy”, which risks causing unhappiness, confusion, disbelief and false hope.

Here are some everyday examples of the disconnect between the fast-changing digital world and the “parallel” world that is being constructed by politicians, researchers, business leaders and regulators.

“Workplace innovation” has become fashionable in the business community recently. There is no doubt that the implementation of new technologies can improve the performance, commitment and creativity of employees. Yet, workplace innovation is often limited to superficial change that simply creates new perils.

Take the recent trend toward open work spaces. As anyone who has experienced this new style of office will tell you, such open offices — particularly when combined with a “9 to 5” corporate mentality — often drive away talented workers because they disregard the importance of deep work and limit personal freedom and creativity.

Prototype of “Hushme”, a voice-masking device that protects privacy in open office environments

Or consider universities and other academic institutions, where the dissemination of research through new social media outlets is still not “accepted” when it comes to hiring and promoting researchers. In fact, most researchers still prefer to disseminate their work through more traditional outlets (e.g., academic journals or established academic publishers), even though the impact of such outlets has declined significantly in recent years as a result of technological change.

Or consider politicians who — even though they increasingly embrace modern “social media”-based communication strategies to mobilize voters — still preach job creation through the protection (or return) of old and disrupted industries.

Examples of this fictitious parallel world are everywhere. Consider corporations that continue to employ hierarchical and segregated governance structures, disconnected from the more fluid and dynamic needs of 21st-century organizations.

Or policymakers who apply “old world” regulatory models and end up slowing down innovation and technological development.

So, What’s Next …?

Identifying the correct response to the “new normal” is not always easy.

What does seem clear, however, is that 20th century models are no longer adequate. Developed in an era of industrial production and national economies, most such thinking is simply not appropriate for a globally connected digital age. The use of such models merely feeds the creation of a parallel fantasy world described above.

Instead, businesses need to promote “freedom and responsibility” within their organizations, making full use of new technology and interconnectivity. Universities should hire and promote researchers who demonstrate the capacity to create an online footprint. Politicians should start focusing on the potential of current technological advancements to create new opportunities and new types of career. Corporations (and their executives) should develop a better understanding of the new digital world and adopt governance structures appropriate to this new reality. And policymakers should rely on a data-driven and responsive approach when it comes to regulating (or perhaps, in some cases, not regulating) disruptive technologies.

How can this be achieved? For a start, we have to reinvent “social science”. While AI, robotics and big data have arguably already become an integral part of “formal science” and “applied science”, they have yet to become mainstream in the social sciences, particularly in fields such as economics, law, political science, psychology and sociology.

Instead of trying to understand the digital world with traditional models from social science, we need to develop new paradigms for mapping and understanding the different aspects of our new reality. Only in this way can the full benefits of our digital reality be enjoyed. Otherwise, we risk feeding the culture of denial and the parallel reality that such denial creates.

Please push the “heart button” or leave a comment.

There is a new story every Wednesday. So if you follow me, you won’t miss my latest ideas about how the exponential growth of disruptive technology is changing the way we live and work.