How Do Deep Neural Networks Work?

by Softarex, July 26th, 2020

Every day we encounter AI and neural networks in some form: from everyday phone features such as face detection, speech, and image recognition to more sophisticated applications like self-driving cars and gene-disease prediction. We think it is time to finally sort out what AI consists of, what a neural network is, and how it works.

In the beginning was the Artificial Neural Network

In general terms, an Artificial Neural Network (ANN) is a technology for pattern recognition that passes input through various layers of simulated neural connections. It was inspired by the human brain and the way it works.

In its simplest form, an ANN can have only three layers of neurons: the input layer (where the data enters the system), the hidden layer (where the information is processed) and the output layer (where the system decides what to do based on the data).

An ANN that is made up of more than three layers – i.e. an input layer, an output layer, and multiple hidden layers – is called a ‘deep neural network’, and this is what underpins deep learning. A deep learning system is self-teaching, learning as it goes by filtering information through multiple hidden layers, in a similar way to humans.

A Deep Neural Network (DNN) is a neural network with a certain level of complexity: more than two layers. A DNN combines simple mathematical operations in layers, which allows it to express more complex dependencies. As the depth increases, the complexity and level of abstraction increase too, i.e. the data is processed in more complex ways.
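To make the layer structure concrete, here is a minimal sketch in Python with NumPy; the layer sizes and the sigmoid activation are illustrative assumptions, not something taken from the article. Each layer applies a weighted sum followed by a nonlinearity, and adding more hidden layers is what makes the network "deep".

```python
import numpy as np

def sigmoid(z):
    """Nonlinear activation applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate an input through each layer: linear step, then activation."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)   # weighted sum of inputs, then nonlinearity
    return a

# A simple net: 4 inputs -> one hidden layer of 5 neurons -> 3 outputs.
rng = np.random.default_rng(0)
shapes = [(5, 4), (3, 5)]
weights = [rng.normal(size=s) for s in shapes]
biases = [np.zeros(s[0]) for s in shapes]
print(forward(rng.normal(size=4), weights, biases))

# Adding more entries to `shapes` (more hidden layers) turns this into a deep network.
```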

How a Neural Network Works

A neural network is a directed graph of nodes connected by synaptic and activation connections, characterized by the following properties:

Each neuron is represented by a set of linear synaptic connections and, possibly, by a nonlinear activation connection.
The synaptic connections of a neuron are used to weigh the corresponding input signals.
The weighted sum of the input signals determines the induced local field of each particular neuron.
Activation links modify the induced local field of the neuron, creating an output signal.
The neural network is trained on examples, learning the input-output mapping for a specific task defined by the data.
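As a small sketch of these definitions (Python/NumPy; the input values, weights, bias, and tanh activation are made up for illustration), the induced local field of a single neuron is the weighted sum of its inputs, and the activation link turns that field into the output signal:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # input signals
w = np.array([0.8, 0.1, -0.4])   # synaptic weights of the neuron
b = 0.2                          # bias term

v = w @ x + b                    # induced local field: weighted sum of the inputs
y = np.tanh(v)                   # activation link producing the output signal
print(v, y)
```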

From a mathematical point of view, training a neural network is a multi-parameter nonlinear optimization problem.
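A minimal illustration of that optimization view (Python/NumPy; the tiny one-neuron model, synthetic data, and learning rate are assumptions made for this sketch): training repeatedly nudges the parameters against the gradient of a loss function.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                       # training inputs
y = (X @ np.array([2.0, -3.0]) > 0).astype(float)   # target labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid neuron output
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # gradient descent step
    b -= lr * grad_b
```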

The signal propagates from the input layer to the output layer of the neural network, passing through parameterized transformations along the way. Deep learning algorithms are distinguished from shallow learning algorithms by the number of these transformations; deep learning is generally characterized by several (more than two) non-linear layers.

Thus, deep learning of neural networks refers to machine learning algorithms that model high-level abstractions using numerous non-linear transformations.

Deep learning addresses the central problem of representation learning. It introduces representations that are expressed in terms of simpler representations obtained at lower levels; thus, deep learning allows a computer to build complex concepts from simpler ones. A typical example is a deep feedforward network, or multilayer perceptron (MLP).
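As a concrete, hypothetical example of such a multilayer perceptron, here is a short PyTorch sketch; the layer sizes and the MNIST-like input shape are arbitrary assumptions. Each linear-plus-ReLU pair expresses a new representation in terms of the simpler one below it.

```python
import torch
from torch import nn

# A small multilayer perceptron (deep feedforward network).
mlp = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.ReLU(),   # low-level features from raw pixels
    nn.Linear(256, 64), nn.ReLU(),        # higher-level abstractions
    nn.Linear(64, 10),                    # output scores, e.g. for 10 classes
)

x = torch.randn(32, 28 * 28)              # a batch of 32 flattened images
print(mlp(x).shape)                       # torch.Size([32, 10])
```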

It wasn't smooth at first

For a long time, there were problems with training DNNs in practical applications, caused first of all by vanishing and exploding gradients. These problems were solved after the following innovations:

Increasing the size of data sets to several terabytes.
Increasing the size of the models.
The use of massively parallel computing with GPGPUs for training, together with growing hardware performance.
The use of piecewise linear functions as the nonlinearity, for example rectified linear units (ReLU).

Gradient descent algorithms with adaptive learning rates (AdaDelta, AdaGrad, RMSProp, Adam).
Development of regularization methods for neural networks:
Convolutional networks, where a priori knowledge about the input data (spatial relations in images) acts as a built-in form of regularization

Dropout

Normalization of mini-batches (batch normalization).
The Network-in-Network method of constructing architectures.
Dimensionality reduction via 1x1 convolutions in convolutional networks.
Development of the residual learning method (deep residual learning).
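Several of the innovations listed above can be illustrated in one short, hypothetical PyTorch sketch (the layer width, dropout rate, block count, and optimizer settings are assumptions for illustration): a small residual block that combines ReLU, batch normalization, dropout, and an adaptive-learning-rate optimizer.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """A small residual block: the input is added back to the transformed signal,
    which helps gradients flow through very deep stacks of layers."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.BatchNorm1d(dim),   # normalization of mini-batches
            nn.ReLU(),             # piecewise linear (rectified) activation
            nn.Dropout(p=0.1),     # dropout regularization
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection: residual learning

model = nn.Sequential(*[ResidualBlock(64) for _ in range(8)])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive learning rate
print(model(torch.randn(16, 64)).shape)
```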

All this eventually made it possible to build neural networks with more than 150 layers; today the depth is essentially unlimited and can reach thousands of layers.

Updates are coming

Based on the general definition of DNNs, they can be applied to any problem where artificial neural networks were previously used. However, DNNs show significantly better results and open up more opportunities.

Using DNN methods in computer vision, it is possible to create an application that analyzes different sports games and gives detailed statistics about a player's performance.

All the analytics are produced by DNN video-recognition algorithms. Such a system is capable of processing the footage and turning it into the required statistics: player and ball speed, expected goals, successful entries, failed passes, and so on. Recognizing both an individual player's movements with the ball and the performance of the whole team is not a problem for a DNN; it can even recognize players by the numbers on their jerseys.
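As a purely illustrative sketch of how such a system might be wired up (the model choice, video file name, and confidence threshold below are assumptions, not a description of an actual product), a generic pretrained detector can be run over video frames to locate players and the ball:

```python
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Hypothetical setup: a generic pretrained detector applied to match footage.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
video = cv2.VideoCapture("match.mp4")   # assumed input file

while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        detections = model([to_tensor(rgb)])[0]
    # Keep confident detections; their boxes give object positions per frame,
    # from which speeds and other statistics could later be derived.
    boxes = detections["boxes"][detections["scores"] > 0.8]
    print(len(boxes), "objects in this frame")
```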

Having a good coach on the team is already great, but when the coach is empowered with such an analytical system, isn't that a double threat?

Or it is possible to develop a system for manufacturing needs that finds fragments of a video stream containing the required objects and gathers the necessary information. For example, such a system can:

Monitor and control different moving parts of assembly lines in real time;
Collect the coordinates of selected device parts from a camera video stream;
Check the state of any mechanical system, as well as people's movements and gestures, in real time;
Measure the coordinates of the necessary fragments or parts of the required machines, or use a DNN to process information from the video streams.

These are just brief examples of DNN capabilities. If we dig deeper, there probably wouldn't be any boundaries for them.

We really mean to learn

As DNNs “grow stronger”, we suppose they will soon be present in almost every sphere of our lives. And since modern technologies are developing to understand humans and their needs, it is a good idea to learn and understand how these technologies work. As Immortal Technique once said, "you never know" (Hi, Skynet).

Keen to see some more examples of DNNs in use? We have them! Check out the system that remotely measures the mass of homogeneous products in real time, or a real example of DNNs in computer vision: a system for evaluating sports players' performance and helping them improve their training.

We are always eager to share our best practices and are wide open to learning something new, so if you have any questions or ideas, feel free to write to us. Let's develop the world together!

Previously published at http://softarex.com/blog/how-deep-is-your-network-figuring-out-how-deep-neural-networks-work/