AI Facts Every Dev Should Know: Artificial intelligence is older than you, probably


by Silvie October 18th, 2020

Too Long; Didn't Read

The term artificial intelligence was coined by John McCarthy, an American computer scientist and one of the discipline's founders. The so-called AI winter began in the 1970s, as AI reached its limits and substantial funding was put on hold. The ice was finally broken by ImageNet, a database project storing 15 million images, led by Stanford's Fei-Fei Li and launched in 2009. There are currently only about 22,000 people in the world with the expertise needed to create machine learning systems.


The hype around AI is growing rapidly, as most research companies predict AI will take on an increasingly important role in the future. 

While business leaders are very interested in leveraging machine learning technology, there’s a talent shortage standing in the way. 

It turns out that very few developers have the skills needed to spearhead serious new AI projects. This means that developers who can acquire these skills will be in high demand.

With all this in mind, let’s take a look at several facts about AI every developer should know before changing their focus to machine learning, artificial intelligence, and—while we’re at it—deep learning and neural networks.

1. Artificial intelligence is older than you, probably. 

The term artificial intelligence was coined by John McCarthy, an American computer scientist and one of the discipline's founders. Spending most of his academic career at Stanford, he invented Lisp in the late 1950s. Based on the lambda calculus, Lisp soon became the programming language of choice for AI applications after its publication in 1960.

Still, the creation of AI departments at Stanford and MIT didn't advance the field as much as the founders had imagined. In large part, this is because scientists encountered a myriad of issues, including limited computing power (i.e., the memory and processing speed needed to accomplish anything truly useful), intractability, combinatorial explosion, a lack of large databases, and a lack of the common-sense knowledge and reasoning needed to train algorithms effectively.

The so-called AI winter began in the 1970s, as AI reached its practical limits and substantial funding was put on hold. It was only in the 2000s that computational power and data became widely available. The ice was finally broken by ImageNet, a database project storing 15 million labeled images, led by Stanford's Fei-Fei Li and launched in 2009. At the same time, data storage became affordable, setting the stage for more AI investment.

2. The talent pool is shallow.

Talent is in short supply in the AI industry, with various reports indicating that the worldwide market is seeking to fill millions of roles. Due to a widespread lack of education on AI skills and topics, there's a bottleneck in delivering highly trained individuals. In fact, Element AI, a Montreal-based startup, estimated that there are fewer than 22,000 people in the world with the expertise needed to create machine learning systems.

What’s more, another study by the Chinese Tencent Research Institute estimates that there are roughly 300,000 AI researchers and practitioners in the world today, of which about 100,000 are still studying. Tencent claims the United States is far ahead when it comes to developing this talent, being home to more than 1,000 of the roughly 2,600 schools in the world teaching machine learning and related subjects.

The same report claims the U.S. is also the leading nation when it comes to the number of startups developing AI technologies. Interestingly enough, more and more academic conferences are turning into playgrounds for corporate recruiters, while entire AI research departments from well-known universities are being transferred to privately held companies deploying AI.

3. AI Engineers get paid very well

Scarcity in any job market equates to higher salaries, and AI is no different. For example, DeepMind, acquired by Google for a reported $650 million in 2014, spent $138 million on its 400 employees in a single year. The staff costs were researched by The New York Times, which examined the company's recently released annual financial accounts in the U.K. This translates to base salaries of between $300,000 and $500,000 a year.

According to one analysis, the median salary for data scientists, senior data scientists, artificial intelligence consultants, and machine learning managers was $127,000 in 2019.

Over the last four years, the demand for AI talent has increased by 74%, while technology and financial service companies are currently absorbing 60% of AI talent.

4. AI/ML Professionals need to possess a lot of skills

There are currently two open job roles for every qualified AI professional, and similar growth is expected in the future. The three most in-demand AI positions on the market are data scientists and algorithm developers, machine learning engineers, and deep learning engineers.

According to the job site Indeed, the main skills and tools software developers need to be proficient with on AI projects include math, algebra, statistics, big data, data mining, data science, machine learning, cognitive computing, natural language processing (NLP), Hadoop, Spark, and many others.

The programming languages AI developers use most frequently are Python, C++, Java, Lisp, and Prolog. Still, qualified job seekers must also have experience working with open-source development environments; for example, proficiency with Spark, MATLAB, and Hadoop is among the most in-demand skills.

5. The hype around AI is worth it.

In 2018, Gartner predicted that 80% of emerging technologies would have AI foundations within three years. What’s more, the research firm MarketsandMarkets expects that the AI market will grow to a $190 billion industry by 2025. Beyond that, Accenture predicts that the impact of AI technologies on business will boost labor productivity by up to 40%. Also, according to IDC, the AI use cases that saw the most investment in 2019 were automated customer service agents ($4.5 billion worldwide), sales process recommendation and automation ($2.7 billion), and automated threat intelligence and prevention systems ($2.7 billion).

Add it all up, and the hype surrounding AI is worth it.

6. AI has all kinds of implications

Before wondering whether AI will replace software developers, let's take a look at what AI can actually do. 

The industries and use cases where AI can be deployed have surged in the past few years. 

In December 2018, the New York auction house Christie's sold Portrait of Edmond de Belamy, an algorithm-generated print in the style of 19th-century European portraiture, for $432,500. Various AI-generated works of art are now frequently exhibited; one example is the "Faceless Portraits Transcending Time" collection in New York, in which Dr. Ahmed Elgammal and his creation, the AICAN AI, benefited from the first solo gallery exhibit devoted to an AI artist. As Andy Warhol once said, art is what you can get away with.

The frenzy around AI-generated art is also touching the music industry. While you continue reading these words, play this piece, generated from roughly 500 megabytes of famous guitar tabs (mostly classical and rock music) in ASCII format. It's called Recurrence, and, if it's not contemporary enough for your taste, please note this "record" is already five years old.

With a more substantial societal impact, AI tools are also being used in medical research to identify, prevent, and treat disorders and diseases. These applications are projected to create $150 billion in annual savings for the healthcare economy by 2026.

AI-based typing-pattern matching algorithms can also verify users' identities based solely on their typing behavior. In 2016, TypingDNA launched its technology for analyzing how humans interact with keyboards to provide accurate authentication. The breakthrough here relies on the fact that humans are all different and behave in distinctive ways. The demo of how it works can turn into an addictive challenge game among friends trying to "fool" the system by replicating each other's typing behavior.
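TypingDNA's actual model is proprietary, but the underlying idea of keystroke dynamics can be sketched in a few lines: compare per-key "dwell" times (how long each key is held) between an enrolled typing profile and a fresh sample. Everything below, from the function names to the timing data and threshold, is an illustrative assumption, not TypingDNA's real algorithm.

```python
# Toy keystroke-dynamics check: compare mean per-key dwell times
# between an enrolled profile and a new typing sample.

def dwell_profile(events):
    """events: list of (key, press_ms, release_ms) tuples.
    Returns the mean dwell time (release - press) per key."""
    totals, counts = {}, {}
    for key, press, release in events:
        totals[key] = totals.get(key, 0.0) + (release - press)
        counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}

def similarity(enrolled, sample):
    """Mean absolute dwell-time difference over shared keys (lower = more similar)."""
    shared = enrolled.keys() & sample.keys()
    if not shared:
        return float("inf")
    return sum(abs(enrolled[k] - sample[k]) for k in shared) / len(shared)

# Hypothetical timing data, in milliseconds since session start.
alice_enroll = [("h", 0, 95), ("i", 150, 240), ("h", 400, 492)]
alice_login  = [("h", 0, 98), ("i", 160, 252)]
imposter     = [("h", 0, 40), ("i", 90, 130)]

THRESHOLD = 20.0  # ms; a real system would tune this per user
print(similarity(dwell_profile(alice_enroll), dwell_profile(alice_login)) < THRESHOLD)  # True
print(similarity(dwell_profile(alice_enroll), dwell_profile(imposter)) < THRESHOLD)     # False
```

A production system would also use "flight" times between keys and a statistical or learned model rather than a fixed threshold, but the comparison idea is the same.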

Further, Google's deep learning program is accurate 89 percent of the time in detecting breast cancer, compared to a human pathologist, who is just 73 percent accurate. This is why machine learning and AI are regarded as healthcare’s new nervous system.

Finally, AI is also very smart, which sheds light on its future capabilities. For example, AlphaGo Zero, a Google DeepMind project, achieved superhuman-level performance, flawlessly beating its champion predecessor, AlphaGo, the first AI to defeat Ke Jie, the world's top-ranked player in the ancient Chinese strategy game Go. Interestingly, AlphaGo Zero taught itself how to play, given only the basic rules.

7. AI won’t replace human beings, but it will replace some of their jobs.

Twenty-five years ago, Jeff Dean started working on a "brain" that mimicked biological neural networks to analyze information and learn. But its capabilities were limited. It was only in 2012 that neural networks were successfully used for machine learning, memory, perception, and symbol processing.

Geoff Hinton marked a new era when he introduced neural networks that could learn tasks mostly on their own by analyzing vast amounts of data. Both Dean and Hinton are now part of Google's AI research teams. In 2017, Google announced that its AutoML project had successfully taught itself to program machine learning software on its own. By completing basic programming tasks, AutoML also popularized a new fear: because of their ability to learn on their own, will machines replace humans?

Welcome to this century's agnostophobia.

Unlike narrow (weak) AI, which is designed to handle single or limited tasks that humans can perform as well, general (strong) AI raises fears about what its capabilities might become once out of our control. Currently, AI is used mostly to assist developers, and it will probably continue to grow its role in augmenting human teams' capabilities. We see it everywhere around us: in tools that help write documentation, test code, and even identify bugs and address them.

OpenAI's recent Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model with 175 billion parameters, achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation. This means it can generate news articles that human evaluators have difficulty distinguishing from articles written by humans, while the researchers note GPT-3 has the "potential to advance both the beneficial and harmful applications of language models."
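What makes GPT-3 notable is that these tasks are posed via "few-shot" prompting: a handful of input/output examples are written directly into the text prompt, and the model completes the pattern without any retraining. As a rough sketch of what such a prompt looks like (the translation examples and the `build_few_shot_prompt` helper are illustrative, not OpenAI's API), the model would be asked to continue the text after the final "French:" line:

```python
# Illustrative few-shot prompt construction for a translation task.
# No model is called here; this only assembles the text a language
# model like GPT-3 would be asked to complete.

def build_few_shot_prompt(task_description, examples, query):
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("good morning", "bonjour")],
    "thank you",
)
print(prompt)
```

The same prompt structure, examples in, pattern out, is what lets one model handle translation, question answering, and cloze completion without task-specific training.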

Researchers at MIT created a program that automatically fixed software bugs by replacing faulty code lines with working lines from other programs. Here are some more tools that help in the process of building software products: DeepCode, Synopsys, Logojoy, and UIzard.

8. How do developers look at AI and its potential threats?

If you fear that AI will eventually replace your role, you're not alone. That’s how the majority of developers around the world feel. 

According to Evans Data, when asked to identify the most worrisome thing in their careers, a plurality of software developers cited this: "I and my development efforts are replaced by artificial intelligence."

On a positive note, Stack Overflow research showed that 70% of respondents feel more excited about AI's possibilities than worried about its potential dangers. And most developers are eagerly looking forward to the new possibilities automation brings to the table.

Just as the industrial revolution pushed humankind to develop new skills and leave agricultural labor behind, so will intelligent robots. In fact, McKinsey predicts AI could replace 30% of the human workforce globally by 2030: robotics could replace about 800 million jobs, making about 30% of occupations extinct.

With this significant shift, nearly 400 million people will have to adapt and change careers. Forrester predicts cognitive technologies, such as robots, AI, machine learning, and automation, will create 9% of new U.S. jobs by 2025. These new jobs include robot monitoring professionals, data scientists, automation specialists, and content curators.

9. It’s easy to start learning or teaching

Since the skilled workers needed to build advanced AI software are still scarce, companies like Facebook and Google have prepared educational programs designed to get anyone on board, no matter their level of expertise. If you are interested in online courses to grasp the basics of AI, check out these machine learning courses from Stanford, MIT, and Columbia University, or dive into the depths of deep learning at Nvidia's Deep Learning Institute.

For more information, you can also read this book, check out these popular open-source AI tools, and browse this list of popular AI projects. This will help you build the technology of the future and solve real-world problems.

And if you’re a teacher looking to introduce your students to AI, check out Tom Vander Ark’s compelling guide on how to teach artificial intelligence.

The graphics in this article can be downloaded here.