AI building AI — the latest buzz

by Abhishek Anand, November 6th, 2017



“Google Researchers Are Teaching Their AI to Build Its Own, More Powerful AI”… Hmmm, okay!

So, over the past several hours, Twitter has been set ablaze by a series of tweets mentioning an article in the NY Times.

Building A.I. That Can Build A.I.: The tech industry is promising everything from smartphone apps that can recognize faces to cars that can drive on their…

Soon, the news found its way to the place everyone heads to talk about anything interesting: Reddit.

It is also the third item on Hacker News.


It is all about achieving Singularity — in AI.

The red line marks where Sheldon dies, and the blue one where he could achieve Singularity, essentially missing it by “this much.”

Whenever someone talks about Singularity, I feel the need to mention the episode of The Big Bang Theory in which Sheldon charts out by how much time he will miss achieving Singularity: the point at which he could transfer his thoughts, knowledge and mental capacity into a machine, essentially achieving immortality.

“…by this much..”

Anyway. The tech Singularity is all the hoopla around self-learning AIs: creating an AI system that can learn on its own, entering cycles of self-improvement that would ultimately produce technological advances unfathomable to human minds.

On a side note, the tech Singularity is exactly what Stephen Hawking, Elon Musk and others keep warning us about.

Earlier this year, at Google’s I/O developer conference, Google’s CEO Sundar Pichai highlighted Google’s AutoML project, which automates the design of deep learning systems, right down to choosing the neural network architecture itself.

It is the same ‘trial and error’ approach machine learning experts use in their day-to-day work; it is just that now there is an AI for it. One that can work much faster, at a much higher efficiency, and without being inhibited by human biases.
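To make that ‘trial and error’ concrete, here is a minimal sketch of what automated architecture search boils down to in its simplest form: sample candidate configurations from a search space, score each one, and keep the best. Every name, layer size and scoring rule below is an illustrative stand-in I made up for this sketch; it is not Google’s AutoML code, which uses far more sophisticated, learned controllers.

```python
import random

# Hypothetical search space for a small feed-forward network.
# The dimensions and values here are illustrative, not AutoML's actual space.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "layer_width": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "sigmoid"],
    "learning_rate": [0.1, 0.01, 0.001],
}

def sample_architecture(rng):
    """Pick one value per dimension, as a human might by trial and error."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the candidate and measuring validation accuracy.

    A real system would train the network here; this toy score just rewards
    deeper, wider networks with a moderate learning rate."""
    score = arch["num_layers"] * 0.1 + arch["layer_width"] / 1000
    if arch["learning_rate"] == 0.01:
        score += 0.2
    return score

def random_search(trials=50, seed=0):
    """Try `trials` random architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

The reason this is so expensive in practice is the `evaluate` step: in a real search, each call means training an entire network to convergence.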

“….Not only did the results rival or beat the performance of the best human-designed architectures, but the system made some unconventional choices that researchers had previously considered inappropriate for those kinds of tasks….”

— Source: Singularity Hub

While I am not sure why this is doing the rounds to this extent today, the progress Google, Facebook and others are making on this front is certainly noteworthy. Google may have a certain lead in this drive (by some estimates, a good chunk of the 10,000+ machine learning experts worldwide are employed by Google).


Basic economics: a demand/supply mismatch.

If you have ever tried to hire a data scientist, you know how difficult it is to find quality talent, and the candidates who do fit the bill often land well outside your budget for the role. It is a real problem for companies of all shapes and sizes. The Googles and Facebooks of the world may have coffers deep enough to shield them from the financial heat, but even they struggle to find quality people to add to their teams. There is just too much competition at the very top of the food chain as well. And that is exactly where the value lies.

Mastering skills like deep learning takes years of research and countless hours of productive work, so the gap in the demand/supply equilibrium is unlikely to go away anytime soon. That is why we are seeing an ever-increasing rollout of tools and systems that let you build AI-based products, be it a chatbot or an analytics platform.


First of all, their intent is obviously not entirely altruistic. They are businesses, and there is nothing wrong with a business making strong bets based on business goals.

Add an artificial general intelligence on top of any of these companies’ products, and what you get is pretty remarkable: products that are easier to use, need less and less intervention from ‘experts’, and arguably drive more value for their end consumers. Why wouldn’t these companies want to get into this space as early as possible?

But for the time being, let us ignore the impact a self-learning, self-improving, somewhat self-creating AI can have on their businesses, and look at it from a purely academic viewpoint, where these businesses open up systems like AutoML for everyone to play around with. How does that help them? It gives them millions of data points and interactions on which to fine-tune and train their algorithms. What better stress test could there be for all the work they have put into it?


Machine learning is more than just writing an efficient algorithm that does what it is expected to do; the real problem lies in getting to that ‘efficient’ algorithm in the first place. There is a lot of intuition involved, and an algorithm evolves and is refined over time as it processes tons of data. A lot of work goes into finding the approach ‘that just works’.

An intelligent AI that builds AI essentially means handing even that part of the process off to a machine. You build an algorithm whose sole purpose is to analyse other algorithms, figuring out along the way which approaches work and which do not. The goal? To eliminate a good deal of the hard work and heartache involved in developing basic machine learning algorithms. The holy grail may be you telling this AI what system you are looking for and what it is supposed to do, and the AI building out an algorithm that does just that.
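As a toy illustration of that idea, here is a sketch of a ‘controller’ that proposes candidate configurations, observes a reward for each, and reinforces the choices that scored well, so that the search itself learns which approaches work. All names and the reward function are assumptions made up for this sketch; Google’s actual AutoML trains an RNN controller with reinforcement learning, which this only loosely imitates.

```python
import random

# A controller keeps a preference weight per choice and reinforces whichever
# choices appear in high-reward candidates. Toy illustration only.
CHOICES = {
    "num_layers": [1, 2, 3, 4],
    "layer_width": [32, 64, 128, 256],
}

def reward(candidate):
    """Stand-in for training the child network and reporting its accuracy."""
    return candidate["num_layers"] * 0.1 + candidate["layer_width"] / 1000

def controller_search(steps=300, seed=0):
    rng = random.Random(seed)
    # Start with uniform preferences over every option in every dimension.
    weights = {dim: [1.0] * len(opts) for dim, opts in CHOICES.items()}
    best, best_r = None, float("-inf")
    for _ in range(steps):
        # Sample a candidate in proportion to the controller's current weights.
        idx = {dim: rng.choices(range(len(opts)), weights=weights[dim])[0]
               for dim, opts in CHOICES.items()}
        candidate = {dim: CHOICES[dim][i] for dim, i in idx.items()}
        r = reward(candidate)
        # Reinforce the choices that produced this reward, so promising
        # options are sampled more often in later steps.
        for dim, i in idx.items():
            weights[dim][i] *= (1.0 + r)
        if r > best_r:
            best, best_r = candidate, r
    return best, weights
```

The key difference from plain random search is that the sampling distribution itself improves over time, which is the sense in which the algorithm is “figuring out which approaches work.”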

Is it possible to do so? Yes. Google’s AutoML has already been successful in developing an algorithm that can identify objects in images with a better degree of precision than algorithms designed by machine learning experts themselves.

We are still quite far from the stage where an AI can dole out a machine learning system for you depending on what you want, but that is the direction we seem to be working towards.

Exciting times ahead.

That’s it for today; see you tomorrow!

I am Abhishek. I am here… there… everywhere…

Medium | Twitter | Facebook | Quora | LinkedIn | E-mail

Click here to join the mailing list.


This has been in the works for a long time now, and has received quite some media fanfare along the way. It is just that the story keeps resurfacing:

  1. May 31, 2017 — Google’s AI-Building AI Is a Step Toward Self-Improving AI
  2. May 19, 2017 — Google Researchers Are Teaching Their AI to Build Its Own, More Powerful AI
  3. Jan 18, 2017 — AI Software Learns to Make AI Software
  4. May 6, 2016 — Building AI is hard — So Facebook is building AI that builds AI
  5. Jan 27, 2013 — Using Artificial Intelligence to Write Self-Modifying/Improving Programs