AI: Scary for the Right Reasons

by Vinod Khosla, September 13th, 2017

Artificial intelligence (AI) has grabbed headlines, hype, and even consternation at the beast we are unleashing. Every powerful technology can be used for good and for ill, be it nuclear or biotechnology, and the same is true for AI. While much of the public discourse from the likes of Elon Musk and Stephen Hawking dwells on sci-fi-like dystopian visions of overlord AIs gone wrong (a scenario certainly worth discussing), there is a much more immediate threat. Long before AI becomes uncontrollable or takes over jobs, there lurks a much larger danger: AI in the hands of governments and other bad actors, used to push self-interested agendas against the greater good.

For background: as a technology optimist and unapologetic supporter of further development, I wrote in 2014 (https://www.forbes.com/sites/valleyvoices/2014/11/06/the-next-technology-revolution-will-drive-abundance-and-income-disparity/) about the massive dislocation AI may cause in society. While our economic metrics like GDP, growth, and productivity may look awesome as a result, AI may worsen the less visible but, in my opinion, far more critical metrics around income disparity and social mobility. More importantly, I argued why this time might be different from the usual economists' refrain that productivity tools always increase employment. With AI, the vast majority of current jobs may be dislocated regardless of skill or education level. In the previous industrial revolution we saw this in agriculture, which went from about 40% of US employment in 1900 to less than 2% by 2000, and in industrial jobs, which today are under 20% of US employment. This time, the displacement may not be confined to lower-skill jobs: truck drivers, farm workers, and restaurant food preparers may be less at risk than radiologists and oncologists. If skilled jobs like doctors and mechanical engineers are displaced, education may not be a solution for employment growth (it is good for many other reasons), as is often proposed by simplistic economists who extrapolate the past without a causal understanding of why it held. In this revolution, machines will be able to duplicate the tasks they previously could not: those that require intellectual reasoning and fine-grained motor skills. Because of this, emotional labor may remain the last bastion of skills that machines cannot replicate at a human level, and it is one of the reasons I have argued that medical schools should transition to emphasizing and teaching interpersonal and emotional skills instead of Hippocratic reasoning.

We worry about nuclear war, as we should, but there is an economic war going on between nations that is more threatening. Pundits like Goldman Sachs advocate internationalism because it serves their interests well, and it is the right approach if played fairly by all. Economic nationalism is, in my view, the wrong answer, but the right answer goes far beyond just a level playing field. While Trump-mania may somewhat correctly stem from feelings of an unlevel playing field with China, the problem will likely be amplified exponentially once AI becomes a factor in these economic wars. The capability to wage this war is very unequal among nation states, be they China, the USA, Brazil, Rwanda, or Jordan, based on who has the capital and the drive to invest in the technology. At its mildest, left to its own devices, AI will further concentrate global wealth in a few nations and "cause" the need for very different international approaches to development, wealth, and disparity.

I have written about the need to address this issue of disparity, especially since the transformation will generate enormous profits for the companies that develop AI while labor is devalued relative to capital. Fortunately, with this great abundance we will have the means to address disparity and other social issues. Unfortunately, we will not be able to address every social issue, like human motivation, that will surely result. Capitalism is by permission of democracy, and democracy should have the tools to correct for disparity. Watch out, Tea Party: you haven't seen the developing hurricane heading your way. I suspect this AI-driven income disparity effect has a decade or more before it becomes material, giving us time to prepare. So while the necessary dialogue has begun and led to ideas such as robot taxes and universal basic income, which may become valuable tools, disparity is far from the worst problem AI might cause, and we need to discuss the more immediate threats.

In the last year alone, the world has seen some of the underpinnings of modern society shaken by bad actors wielding technology. We have seen the integrity of our political system threatened by Russian interference, and our global financial system threatened by incidents like the Equifax hack and the Bangladesh Bank heist (where criminals stole roughly $100 million). AI will dramatically escalate these incidents of cyberwarfare as rogue nations and criminal organizations use it to press their agendas, especially when it operates outside our ability to assess or verify. In destructive power, this escalation will resemble wind becoming a hurricane, or a wave becoming a tsunami. Imagine an AI agent trained on something like OpenAI's Universe platform, learning to navigate thousands of online environments and tuned to press an agenda: it could unleash a plague of intelligent bot trolls onto the web that destroys the very notion of public opinion. Or imagine a bot army of phone calls from the next evolution of Lyrebird.ai, each with a unique voice, flooding the lines of congressmen and senators with requests for harmful policy changes. This danger, unlike the idea of robots taking over, has a strong chance of becoming reality in the next decade.

This technology is already on the radar of today's authoritarian countries. Putin, for example, has said that the leader in AI will rule the world. China, as a nation, has very pointedly focused on acquiring this powerful new technology. Its accumulation of expertise beyond what normal business competition warrants, and the very large funding it directs here, are a major concern. This is potentially equivalent to, or worse than, the US being the only nation with nuclear capability when the Hiroshima attack was conducted; there was very little for our Japanese opponents to respond with. It is hard to say whether this economic weapon will be as binary as the nuclear bomb, but it will be large, concentrated in a few hands, and subject to little verifiability. Surreptitious efforts, given the technology's great amplification potential, could create large power inequality.

Matters get worse when one realizes that the major actors in AI development in the West, like Google, Facebook, and the universities, have adopted a generally open policy, publishing their approaches and results in scientific journals to share the technology broadly. If individual state actors don't do the same, and I doubt they will, we will have a one-way flow of technology out of the US. AI development in certain parts of the world will also enjoy huge advantages from permissive data policies. As Andrew Ng (the Stanford professor hired by Baidu, the massive Chinese company, to lead its AI efforts until he left to incubate his own ideas) has said, "Data is the rocket fuel for the AI engine." So while AI progress has been frenetic recently, it will be much faster where data privacy and the occasional accident matter less in the interest of "national security." This disregard for data privacy, combined with the one-way transfer of technology, will lend nationalistic countries like China and Russia a huge advantage in this generation's space race.

AI will be much more than the economic, business, or competitiveness issue it is talked about as today. We will need to rethink capitalism as a tool for economic efficiency, because efficiency will matter less, or at the very least disparity will matter more, though that reckoning may be decades away. The biggest concern in the next decade is that AI will dramatically worsen today's cybersecurity problems while being less verifiable than nuclear technology, handing nationalistic states like China and thuggish dictators like Putin massively amplified clandestine power. I don't believe we, as a society, would be willing to give up safeguards like open progress and privacy to "keep up" with other nations. I have some thoughts on what we can do here, but this is a complex problem without obvious solutions. Maybe we limit investment by non-NATO investors in US AI companies? Maybe the US government or NATO invests in AI technologies of its own for national security? An AI white-hat force? Increased efforts in black-swan developments like quantum computing? Less risk aversion, more patience, and less backlash from society and government against the risks, biases, and shortcomings of new AI technology as it grows up? Regardless of what we do, what's clear is that we need much more dialogue, debate, and countermeasure funding. Instead of generating hysteria about far-off dystopian possibilities mired in uncertainties and what-ifs, we need to focus on the immediate wave of danger before it hits. Not taking risks here might inadvertently be the largest risk we take.