We live in an age of information overload: always-on social networks, real-time notifications, on-demand video, live streaming, and attention-grabbing viral loops of online content. And because attention translates into advertising dollars, companies are driven to optimize for engagement, exploiting hooks that target our dopamine receptors to drive usage.
Are we better for it? Information has its benefits - we expand our capabilities by finding what we need on demand to do the jobs of our daily lives. We can fix a toilet from a YouTube video or learn how to A/B test landing pages from a blog post, without any prior training. And not just for small topics: the proliferation of online learning, from coding education to college courses, is an effective democratizer of information. The cost of learning something new is now limited more by time than by money.
As we push more of what we know into the shared repository of the web, that information both entertains us and reduces our need to “know” it; we can look it up when necessary. We outsource part of our memory and knowledge base to computers, which are well suited to the task - a successful division of labor.
And the legacy of technology over the past few decades? In the larger sense, it’s a continuation of the scientific revolution, both driving and driven by globalization and industrialization. But ever-increasing compute capacity is yielding diminishing returns in economic productivity. Perhaps AI will restore the productivity growth we have come to depend on - as computers are trained to perform tasks as well as or better than humans, and are then deployed to work without human limitations like fatigue, we may reap great benefits.
On the flip side, we also see potential danger, and the response is both to prepare for the worst and to ensure we don’t make mistakes along the way. If AI is to be a “black box,” how can we make sure it operates in a fair, unbiased, ethical way? But maybe this framing of the future - as a struggle for or against the inevitable replacement of human functions by technology - keeps us from realizing technology’s potential and from building what can solve the large problems in our society.
We need to change our mindset towards technology’s potential and focus its direction. It’s a given that technology will replace many human functions, and the road to that future will be bumpy. But we need to realize that a third choice is available - technology can increase human capabilities in more ways than simply serving as a repository of knowledge, and by building in this direction we can make the biggest impact on society and economic progress.
The inspiration behind this third path is the ease with which social media has hacked human attention spans, demonstrating how malleable cognition and human thought are to technology. But rather than manipulating for attention, the better idea is to improve decision-making, and thereby individual output and the achievement of goals - a bottom-up approach to improving productivity.
In contrast to today’s computer systems, which function as an enhancement for long-term memory, we need technology that augments short-term memory. This is the memory we use for day-to-day decision-making, forming relationships and, economically speaking, doing business - the tasks that affect our quality of life. In this way we can achieve productivity gains beyond the brute-force approach of automating or replicating human tasks, and build solutions that apply all the way from the bottom of the pyramid to the top.