Neural networks are rapidly improving thanks to advancements in computational power.
As the practical applications of the technology multiply, we will see more and more organisations using their own machine learning programs.
The research and development of neural networks is flourishing thanks to recent advancements in computational power, the discovery of new algorithms, and an increase in labelled data. Before the current explosion of activity in the space, the practical applications of neural networks were limited.
While much of the recent research has broad applicability, the heavy computational requirements of machine learning models still keep the technology from truly entering the mainstream. Now, emerging algorithms are on the cusp of pushing neural networks into more conventional applications through dramatically increased efficiency.
Neural networks are a prominent focal point of current computer science research. They are inspired by the human brain, which, outside a handful of niche use cases, still outperforms computers on almost every conceivable scale.
Computers are excellent at storing information and processing it at speed, while humans are far more adept at making efficient use of the limited computational power they have. A computer can perform millions of calculations per second, which no human can hope to match. Where humans hold their advantage is efficiency, with the brain being more efficient than a computer by a factor of many tens of thousands.
What computers lack in algorithmic sophistication, they make up for in sheer processing power, analysing information at a rate that continues to grow.
That computational power comes with a catch: even though its cost is falling exponentially, machine learning remains an expensive affair. It sits beyond the reach of most individuals, businesses and researchers, who must rely on costly third-party services to perform experiments in a space that could have staggering ramifications across myriad verticals.
For example, a simple chatbot could cost anywhere from a few thousand dollars to upwards of $10,000, depending on its complexity.
To overcome this barrier, scientists have been investigating various techniques to reduce the cost and time associated with machine and deep learning applications.
The field involves both software and hardware considerations. More efficient algorithms and better-designed hardware are both priorities, but designing hardware by hand is enormously labour-intensive and time-consuming. This has spurred researchers to create design automation solutions for the field.
Advancements are being made on both the software and hardware sides. Currently, the most common technique for automating the design of neural networks is Neural Architecture Search (NAS), which, though effective, is computationally expensive. NAS can be considered a basic first step towards automated machine learning.
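To make the idea concrete, here is a deliberately minimal sketch of architecture search in Python: candidate networks are sampled at random from a toy search space, and the best-scoring one is kept. The search space and the evaluate stub are illustrative assumptions; in a real NAS system, evaluating a candidate means training it (or a proxy for it), which is precisely what makes the technique so expensive.

import random

# A toy search space: each architecture is a choice of depth,
# layer width and kernel size. Real NAS spaces are vastly larger.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Draw one random candidate from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(architecture):
    """Stand-in for the expensive step: a real system would train the
    candidate network and return its validation accuracy."""
    return random.random()  # placeholder score in [0, 1)

def random_search(num_trials=20):
    """Keep the best-scoring architecture across num_trials samples."""
    best, best_score = None, float("-inf")
    for _ in range(num_trials):
        candidate = sample_architecture()
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(random_search())

Even this naive loop hints at the cost problem: every iteration of the real version hides a full training run.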
MIT, where much of the research in the field has taken place, has published a paper describing a far more efficient NAS algorithm that can learn Convolutional Neural Networks (CNNs) tailored to specific hardware platforms.
The researchers who worked on the paper increased efficiency by “deleting unnecessary neural network design components” and by focusing on specific hardware platforms, including mobile devices. Tests indicate that the resulting neural networks were almost twice as fast as traditional models.
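The paper’s actual method is considerably more sophisticated, but the hardware-aware idea can be sketched as a change to the search objective: candidates are ranked not by accuracy alone but by accuracy minus a penalty for their measured or predicted latency on the target device. The function, the penalty weight and the latency budget below are illustrative assumptions, not the authors’ implementation.

def hardware_aware_score(accuracy, latency_ms, target_ms=30.0, penalty=0.05):
    """Score a candidate architecture for a specific hardware platform.

    accuracy   -- validation accuracy of the candidate (0..1)
    latency_ms -- measured or predicted inference latency on the target device
    target_ms  -- latency budget for the platform (e.g. a mobile phone)
    penalty    -- how harshly each millisecond over budget is punished
    """
    overshoot = max(0.0, latency_ms - target_ms)
    return accuracy - penalty * overshoot

# Two candidates with similar accuracy but very different speed:
print(hardware_aware_score(accuracy=0.76, latency_ms=28.0))  # within budget: 0.76
print(hardware_aware_score(accuracy=0.78, latency_ms=55.0))  # too slow: penalty swamps its accuracy edge

Scoring candidates this way steers the search towards networks that are both accurate and fast on the specific hardware they will actually run on.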
Co-author of the paper Song Han, an assistant professor at MIT’s Microsystems Technology Laboratories, has said that the goal is to “democratise AI”.
“We want to enable both AI experts and nonexperts to efficiently design neural network architectures with a push-button solution that runs fast on specific hardware,” he says. “The aim is to offload the repetitive and tedious work that comes with designing and refining neural network architectures.”
Other techniques have also been proposed. Rather than running in resource-heavy controlled environments, machine learning algorithms can be slimmed down to run on specially designed hardware that draws far less power.
Researchers from the University of British Columbia have shown that Field-Programmable Gate Arrays (FPGAs) are faster and more power-efficient for implementing machine learning applications. In addition to making machine learning more affordable and less time-consuming through customised hardware, FPGAs can make Deep Neural Networks (DNNs) more accessible to those with less technical expertise.
FPGAs are used in conjunction with High-Level Synthesis (HLS) tools to “automatically design hardware”, eliminating the need to hand-design circuits when trialling machine learning inference solutions and consequently speeding up the implementation of applications for a variety of use cases.
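Part of what makes low-power hardware such as FPGAs so efficient is that inference can run in low-precision integer arithmetic rather than 32-bit floating point. The sketch below illustrates the general idea of post-training quantisation (mapping float weights to 8-bit integers plus a scale factor); it is an illustration of that common deployment technique, not of the UBC researchers’ toolchain.

import numpy as np

def quantise_int8(weights):
    """Map float32 weights to int8 values plus a scale factor.

    Hardware such as FPGAs can implement 8-bit integer multiply-accumulate
    units far more cheaply than float32 ones, which is a large part of
    their power-efficiency advantage."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantise(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4).astype(np.float32)
q, scale = quantise_int8(weights)
print("original: ", weights)
print("recovered:", dequantise(q, scale))  # close, but not exact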
Other researchers have considered FPGAs for a specific subset of DNNs, the CNN, a technique known for its application in analysing images and itself inspired by the visual cortex of animals. This work likewise combines HLS with FPGAs.
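The operation at the heart of a CNN is easy to state: a small filter slides across an image, and at each position the overlapping values are multiplied and summed, loosely mimicking the local receptive fields of the visual cortex. Here is a minimal NumPy version of that sliding-window step, a sketch for illustration rather than production code:

import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1): slide the kernel over
    the image and take a weighted sum at every position. Like deep
    learning frameworks, this skips flipping the kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A simple vertical-edge detector applied to a toy "image".
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds where brightness jumps
print(conv2d(image, edge_kernel))

A real CNN stacks many such filters, learns their values during training, and interleaves them with non-linearities and pooling, but every layer reduces to this same multiply-and-sum pattern, which is exactly what FPGA fabric implements well.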
To further demonstrate the diversity of use cases, some research has explored using DNNs to automate design work in engineering tasks.
Still, there is a long road ahead for machine learning research. Neural networks and machine learning researcher Robert Aschenbrenner points to an upcoming shift in the technology, emphasising how machine learning agents will improve their own performance and algorithms.
“Today, automation tools are largely isolated and segmented into their own fiefdoms,” Aschenbrenner said. “A website chatbot doesn’t typically interact with a customer service employee unless it is programmed to hand off a conversation if certain conditions are met. The chatbot just follows its programming, never altering course unless it’s ordered to do so.
“Rather than determining a process that we want to automate, a machine learning agent will observe the way we work, collecting and mining historical data to determine where opportunities for automation lie. The AI tool will then hypothesise a solution in the form of an automated process change and simulate how those changes will improve productivity or lead to better business outcomes.”
As promising as that sounds, there is much work to be done in training an algorithm to learn like a human or any animal does.
Aschenbrenner lists five major areas where humans still have an advantage over machines: vision, unsupervised/reinforced learning, explainable models, reasoning and memory, and rapid learning.
While AI has made advancements on all of these fronts, humans still have a far greater capacity for learning quickly and without the need for explicitly labelled data: ‘putting two and two together’, as it were.
The capacity to reason and find connections between seemingly disparate ideas is something humans possess to a high degree, while the ability to be completely independent and achieve emergent learning still eludes machines.
For all the activity in neural network research, the fundamental broadening of machine learning means its applications could extend far beyond the somewhat limited use cases it operates in today.
Artificial Intelligence (AI) is proliferating and seeing practical deployment, but whether it becomes a ubiquitous phenomenon will depend on rapidly designed hardware and software solutions that deliver the resource savings described above.
Optimised algorithms and affordable solutions are expected to ‘democratise AI’, as MIT has described it, putting large-scale machine learning techniques in the hands of individuals and groups that lack the resources to run massive computer farms for the purpose.
While research in this field is still early, the newly proposed design automation solutions show much promise. Combined with the decreasing cost of computer hardware and interoperable technologies such as cloud computing, they may expedite the arrival of mainstream machine learning.
Increased access to sophisticated algorithms and tools can enhance education, medical care and business performance.
Additionally, businesses can reduce operational costs by having AI handle tedious tasks, freeing human workers to focus on more critical ones.
It is a matter of when, not if, these more powerful software tools become more readily usable.
The biggest gap between human brains and advanced machines is emotion. The ability to feel empathy, or even to be aware of one’s own existence, is what separates human consciousness from artificial intelligence, which, though more powerful at simple computational tasks, is currently nowhere near able to truly process emotion or sentiment. It is perhaps too early to discuss the ethical issues in the never-ending quest to create truly intelligent machines, but they have been part of the conversation for some time.
This fundamental and seemingly immovable gap in capability has been the subject of science fiction for as long as machines have existed. Poetic musings on the possibility of true artificial sentience often lead to tales questioning the morality of creating such a machine. Take Blade Runner, the 1982 film about the integration of capable humanoid machines into society, with all the ethical quandaries that throws up. Neural networks are only another step on the long journey towards real artificial intelligence; what matters is that we know what to do when we get there.
This article was written by the Research Institute and originally appeared in the Binary District Journal.
The field of emerging technologies is by no means bereft of ideas or inspiration; however, these alone do not equate to innovation, nor do they drive technological advancement. Quality research and development does. The Research Institute takes a practical approach to technological development and seeks to redress the balance between ideas and viable solutions.
Illustrations by Kseniya Forbender