Google has finally released an early version of AutoML, a service that, according to many within the company, will change the way we do deep learning entirely. Google CEO Sundar Pichai writes in his announcement:

> We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs.

Google's Head of AI, Jeff Dean, goes even further in his keynote at the [TensorFlow](https://www.youtube.com/watch?v=kSa3UObNS6o) Dev Summit, suggesting that 100x more computation could replace the need for ML expertise. This is a bold vision, and a false one.

Google's AutoML is a glaring example of hype over product. Although the field of AutoML has existed for many years, Google co-opted the term to refer specifically to its neural architecture search and the surrounding suite of products. Neural architecture search generates and evaluates a large set of unique, highly specialized candidate architectures for a given dataset; the search is incredibly computationally intensive, and its goal is to find the single best model for that specific data. Once that model has been found, it is of little use on anything but the exact data it was trained on: it has been tuned, at huge computational cost, to that specific data and that specific data only.

That is not to say neural architecture search is entirely worthless: the technique has discovered some incredibly interesting architectures, including during a monumental CIFAR-10 attempt by Google itself. **However, to say that every machine learning problem should first be tackled by NAS is a farce.**

The simple truth is that the vast majority of ML problems can be solved by preexisting architectures, at orders of magnitude less computational cost.
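To make the cost argument concrete, here is a toy sketch of exhaustive architecture search. The search space (depths, widths, kernel sizes) and the `evaluate` function are hypothetical stand-ins I've chosen for illustration: in a real NAS run, each call to `evaluate` means fully training a candidate network, which can cost GPU-days apiece, and the space is vastly larger than 64 candidates.

```python
import itertools
import random

# Hypothetical search space: even three small hyperparameter axes
# already multiply out to dozens of candidate architectures.
DEPTHS = [2, 4, 8, 16]       # number of layers
WIDTHS = [32, 64, 128, 256]  # units per layer
KERNELS = [1, 3, 5, 7]       # conv kernel sizes

search_space = list(itertools.product(DEPTHS, WIDTHS, KERNELS))

def evaluate(arch):
    """Stand-in for 'train this architecture and report validation
    accuracy'. In a real search, this single call is the expensive part."""
    rng = random.Random(hash(arch))  # deterministic dummy score in [0, 1)
    return rng.random()

# Exhaustive search: one full "training run" per candidate.
best = max(search_space, key=evaluate)
print(f"{len(search_space)} candidates evaluated; best architecture: {best}")
```

Swapping the exhaustive loop for a learned controller, as Google does, changes how candidates are proposed but not the core economics: every candidate still has to be trained to be scored.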
Even when NAS is necessary, there are more [efficient](https://arxiv.org/pdf/1802.03268.pdf) methods than Google's preferred search technique.

It is perhaps useful, then, to understand why Google has invested so heavily in promoting AutoML. I believe the following to be true:

1. **Google has a vested interest in popularizing techniques that support the lie that the solution to more effective ML is more computing power, because Google is the provider of that computing power.** Some of the most novel ML solutions have arisen from the most heavily compute-constrained environments. And at $20/hr of training time, they're making a killing.
2. **Democratizing ML so that it is not just the province of PhDs is an incredibly hot idea right now.** A turn-key "AI for your business" solution is an easy sell to large corporations otherwise unable to join the AI wave. Even if their business would not benefit from ML, FOMO (fear of missing out) is a hell of a force.
3. **Google knows that its marketing efforts can succeed.** Because AI is seen as entirely unapproachable and centralized, journalists are ready for any scoop that appears to democratize the technology.

However, Google's AutoML does anything but democratize AI. It serves only to appoint Google the holder of the keys to the kingdom of the AI age.

[**Will Manidis**](http://WillManidis.com) is a partner at DormRoomFund and a researcher working on interoperability and transfer learning for large genomic datasets at Olin College. You can find him writing about issues at the intersection of health, philosophy, and technology [here](https://twitter.com/WillManidis).