Overview of Artificial Neural Networks and its Applications

by HackerNoon Archives, July 17th, 2017

What is a Neural Network?

The term ‘neural’ is derived from the basic functional unit of the human (animal) nervous system: the ‘neuron’, or nerve cell, found in the brain and other parts of the human (animal) body.

Structure of Neurons in the Brain

A typical nerve cell of the human brain comprises four parts -

Image Source — cs231n.github.io

Dendrite — It receives signals from other neurons.

Soma (cell body) — It sums all the incoming signals to generate input.

Axon — When the sum reaches a threshold value, the neuron fires and the signal travels down the axon to other neurons.

Synapses — The points of interconnection of one neuron with other neurons. The amount of signal transmitted depends upon the strength (synaptic weights) of the connections.

The connections can be inhibitory (decreasing strength) or excitatory (increasing strength) in nature.

So, a neural network, in general, is a highly interconnected network of billions of neurons with trillions of interconnections between them.

How is the Brain Different from Computers?

What is Artificial Neural Network?

Artificial Neural Networks are biologically inspired simulations performed on a computer to carry out specific tasks like clustering, classification, pattern recognition, etc.

An Artificial Neural Network, in general, is a biologically inspired network of artificial neurons configured to perform specific tasks.

Similarity of ANN with Biological Neural Network

Neural networks resemble the human brain in the following two ways -

  • A neural network acquires knowledge through learning.
  • A neural network’s knowledge is stored within inter-neuron connection strengths known as synaptic weights.

Analogy of Artificial Neural Network With Biological Neural Network

The dendrites in a biological neural network are analogous to the weighted inputs, based on their synaptic interconnections, in an artificial neural network.

The cell body is analogous to the artificial neuron unit in an artificial neural network, which comprises a summation unit and a threshold unit.

The axon carries the output, which is analogous to the output unit of an artificial neural network. So, ANNs are modelled on the working of basic biological neurons.

How Does an Artificial Neural Network Work?

Artificial neural networks can be viewed as weighted directed graphs in which artificial neurons are nodes and directed edges with weights are connections between neuron outputs and neuron inputs.

The artificial neural network receives input from the external world in the form of patterns and images in vector form. These inputs are mathematically designated by the notation x(n) for n inputs.

Each input is multiplied by its corresponding weight. Weights are the information used by the neural network to solve a problem. Typically, a weight represents the strength of the interconnection between neurons inside the neural network.

The weighted inputs are all summed up inside the computing unit (artificial neuron). If the weighted sum is zero, a bias is added to make the output non-zero, or to scale up the system response. The bias has its own weight, and its input is always equal to ‘1’.

The sum can take on any numerical value. In order to limit the response to a desired range, a threshold value is set up; for this, the sum is passed through an activation function.

The activation function is the transfer function used to get the desired output. There are linear as well as non-linear activation functions.

Some of the commonly used activation functions are the binary (threshold) function and the nonlinear sigmoidal and hyperbolic tangent functions.

Binary — The output has only two values, either 0 or 1. For this, a threshold value is set up: if the net weighted input is greater than the threshold, the output is 1; otherwise, it is 0.

Sigmoidal Hyperbolic — This function has an ‘S’-shaped curve. Here the hyperbolic tangent function is used to approximate the output from the net input. The sigmoidal function is defined as f(x) = 1 / (1 + exp(−𝝈x)), where 𝝈 is the steepness parameter.
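
To make the weighted sum, bias, and activation concrete, here is a minimal sketch of a single artificial neuron in Python/NumPy; the input values, weights, and function names are illustrative, not taken from the article.

```python
# A minimal sketch of a single artificial neuron (illustrative names and values).
import numpy as np

def binary_activation(net, threshold=0.0):
    """Binary (threshold) activation: 1 if the net input exceeds the threshold, else 0."""
    return 1.0 if net > threshold else 0.0

def sigmoid_activation(net, steepness=1.0):
    """Sigmoidal activation f(x) = 1 / (1 + exp(-sigma * x))."""
    return 1.0 / (1.0 + np.exp(-steepness * net))

def artificial_neuron(inputs, weights, bias_weight, activation=sigmoid_activation):
    """Weighted sum of inputs plus a bias (whose input is fixed at 1), passed through an activation."""
    net = np.dot(inputs, weights) + bias_weight * 1.0  # bias input is always 1
    return activation(net)

# Example: three inputs x(n), their weights, and a bias weight.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.7, -0.2])
print(artificial_neuron(x, w, bias_weight=0.1))
```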

Architecture of Artificial Neural Networks

A typical neural network contains a large number of artificial neurons called units arranged in a series of layers.

A typical artificial neural network comprises different layers -

  • Input layer — It contains those units (artificial neurons) which receive input from the outside world, which the network will learn about, recognize, or otherwise process.
  • Output layer — It contains units that give the network’s response based on what it has learned about the task.
  • Hidden layer — These units sit between the input and output layers. The job of the hidden layer is to transform the input into something that the output units can use in some way.

Most neural networks are fully connected, which means each hidden neuron is fully connected to every neuron in the previous (input) layer and in the next (output) layer.
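
As a rough illustration of these layers, the sketch below wires up a small fully connected network in Python/NumPy; the layer sizes, random initialization, and variable names are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of a fully connected network with one hidden layer.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 4, 3, 2

# Every hidden unit is connected to every input unit; every output unit to every hidden unit.
W_hidden = rng.normal(size=(n_inputs, n_hidden))   # input -> hidden weights
b_hidden = np.zeros(n_hidden)
W_output = rng.normal(size=(n_hidden, n_outputs))  # hidden -> output weights
b_output = np.zeros(n_outputs)

x = rng.normal(size=n_inputs)          # input layer: values from the outside world
h = sigmoid(x @ W_hidden + b_hidden)   # hidden layer: transforms the input
y = sigmoid(h @ W_output + b_output)   # output layer: the network's response
print(y)
```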

Popular Neural Network Architectures

Perceptron — A neural network having two input units and one output unit with no hidden layers. These are also known as ‘single-layer perceptrons’.

Radial Basis Function Network — These networks are similar to the feed-forward neural network, except that a radial basis function is used as the activation function of the neurons.

Multilayer Perceptron — These networks use one or more hidden layers of neurons, unlike the single-layer perceptron. They are also known as deep feedforward neural networks.

Recurrent Neural Network — A type of neural network in which the hidden layer neurons have self-connections. Recurrent neural networks possess memory. At any instant, a hidden layer neuron receives activation from the lower layer as well as its own previous activation value.
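
As a rough sketch of this idea, the snippet below (illustrative sizes, weights, and names, not from the article) updates a hidden state from the current input and the hidden layer’s own previous activation.

```python
# A minimal sketch of a recurrent hidden layer: each new activation depends on the
# current input and on the hidden layer's previous activation (its "memory").
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_hidden = 3, 4

W_in = rng.normal(size=(n_inputs, n_hidden))    # lower-layer (input) connections
W_rec = rng.normal(size=(n_hidden, n_hidden))   # self-connections within the hidden layer
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                          # previous activation, initially zero
sequence = rng.normal(size=(5, n_inputs))       # five time steps of input

for x_t in sequence:
    h = np.tanh(x_t @ W_in + h @ W_rec + b)     # new activation mixes input and memory
print(h)
```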

Long Short-Term Memory Network (LSTM) — A type of neural network in which a memory cell is incorporated inside the hidden layer neurons.

Hopfield Network — A fully interconnected network of neurons in which each neuron is connected to every other neuron. The network is trained with an input pattern by setting the values of the neurons to the desired pattern. Then its weights are computed and are not changed afterwards. Once trained for one or more patterns, the network will converge to the learned patterns. In this respect it differs from other neural networks.
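
A minimal sketch of this behaviour, with an illustrative five-neuron pattern, might look like the following; the Hebbian outer-product rule used to compute the fixed weights is a common choice assumed here, not something the article specifies.

```python
# A minimal sketch of a Hopfield network: weights are computed once from a stored pattern
# and then held fixed; recall iterates until the network converges to the learned pattern.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])         # an illustrative pattern of +1/-1 neuron states

W = np.outer(pattern, pattern).astype(float)  # connect each neuron to every other neuron
np.fill_diagonal(W, 0.0)                      # no self-connections

def recall(state, steps=10):
    state = state.copy()
    for _ in range(steps):
        new_state = np.where(W @ state >= 0, 1, -1)  # threshold update
        if np.array_equal(new_state, state):          # converged to a learned pattern
            break
        state = new_state
    return state

noisy = np.array([1, 1, 1, -1, 1])            # the stored pattern with one flipped neuron
print(recall(noisy))                          # converges back to the stored pattern
```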

Boltzmann Machine Network — These networks are similar to the Hopfield network, except that some neurons are input neurons while others are hidden. The weights are initialized randomly and adjusted during training.

Convolutional Neural Network — Get a complete overview of Convolutional Neural Networks through our blog Log Analytics with Machine Learning and Deep Learning.

Other Types of Neural Networks

Modular Neural Network — It is a combined structure of different types of neural networks, like the multilayer perceptron, Hopfield network, recurrent neural network, etc., each incorporated as a single module into the network to perform an independent subtask of the complete neural network.

Physical Neural Network — In this type of artificial neural network, an electrically adjustable resistance material is used to emulate the function of a synapse, instead of the software simulations performed in other neural networks.

Learning in Artificial Neural Networks

The neural network learns by adjusting its weights and bias (threshold) iteratively to yield the desired output. These are also called free parameters. For learning to take place, the neural network is trained first. The training is performed using a defined set of rules, also known as the learning algorithm.

Popular Learning Algorithms used in Neural Network

Gradient Descent — This is the simplest training algorithm, used in the case of a supervised training model. If the actual output differs from the target output, the difference, or error, is calculated. The gradient descent algorithm then changes the weights of the network in such a manner as to minimize this error.
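
The following sketch shows this idea for a single linear neuron trained with mean squared error; the data, learning rate, and variable names are illustrative assumptions.

```python
# A minimal sketch of gradient descent for a single linear neuron.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))                  # 100 training examples, 3 inputs each
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                                  # target outputs the network should reproduce

w = np.zeros(3)                                 # weights start at zero
learning_rate = 0.1

for epoch in range(200):
    y_pred = X @ w                              # actual output
    error = y_pred - y                          # difference between actual and target output
    gradient = X.T @ error / len(X)             # gradient of the mean squared error
    w -= learning_rate * gradient               # change weights so as to minimize the error

print(w)                                        # approaches [2.0, -1.0, 0.5]
```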

Back propagation — It is an extension of the gradient-based delta learning rule. Here, after finding the error (the difference between the desired and actual output), the error is propagated backward from the output layer to the input layer via the hidden layer. It is used in the case of multilayer neural networks.
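
A minimal sketch of backpropagation through one hidden layer, using the sigmoid activation introduced earlier, might look like this; the toy dataset, layer sizes, and learning rate are illustrative assumptions.

```python
# A minimal sketch of backpropagation: the output error is propagated back
# through the hidden layer to update both weight matrices.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # a simple nonlinear target

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 1.0

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: error flows from the output layer back via the hidden layer
    delta_out = (out - y) * out * (1 - out)          # output-layer error signal
    delta_h = (delta_out @ W2.T) * h * (1 - h)       # hidden-layer error signal

    W2 -= lr * h.T @ delta_out / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print(np.mean((out > 0.5) == y))          # fraction of training examples classified correctly
```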

Other Learning Algorithms

  • Hebb Rule
  • Self-Organizing Kohonen Rule
  • Hopfield law
  • LMS algorithm (Least Mean Square)
  • Competitive Learning

Types of Learning in Neural Network

Supervised Learning — In supervised learning, the training data is input to the network and the desired output is known; the weights are adjusted until the output yields the desired value.

Unsupervised Learning — The input data is used to train the network, but the desired output is not known. The network classifies the input data and adjusts the weights by extracting features from the input data.

Reinforcement Learning — Here the desired output is unknown, but the network is given feedback on whether its output is right or wrong. It is a form of semi-supervised learning.

Offline Learning — The adjustment of the weight vector and threshold is done only after the entire training set is presented to the network. It is also called batch learning.

Online Learning — The adjustment of the weight and threshold is done after presenting each training sample to the network.
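
To contrast the two schedules, the sketch below trains the same linear neuron twice, once with offline (batch) updates and once with online (per-sample) updates; the data and hyperparameters are illustrative assumptions.

```python
# A minimal sketch contrasting offline (batch) and online learning for a linear neuron.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0])
lr = 0.05

# Offline (batch) learning: weights are adjusted only after the whole training set is presented.
w_batch = np.zeros(3)
for epoch in range(100):
    gradient = X.T @ (X @ w_batch - y) / len(X)
    w_batch -= lr * gradient          # one update per pass over the full training set

# Online learning: weights are adjusted after each individual training sample.
w_online = np.zeros(3)
for epoch in range(100):
    for x_i, y_i in zip(X, y):
        error = x_i @ w_online - y_i
        w_online -= lr * error * x_i  # one update per training sample

print(w_batch, w_online)              # both approach [1.0, 2.0, -1.0]
```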

Learning Data Sets in ANN

Training set: A set of examples used for learning, that is, to fit the parameters [i.e. weights] of the network. One epoch comprises one full training cycle on the training set.

Validation set: A set of examples used to tune the parameters [i.e. architecture] of the network, for example, to choose the number of hidden units in a neural network.

Test set: A set of examples used only to assess the performance [generalization] of a fully specified network, or to apply it in predicting outputs for new inputs.
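
A minimal sketch of such a three-way split might look like this; the 70/15/15 proportions, array names, and random data are illustrative assumptions.

```python
# A minimal sketch of splitting a dataset into training, validation, and test sets.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)

indices = rng.permutation(len(X))              # shuffle before splitting
train_idx = indices[:700]                      # 70% used to fit the weights
val_idx = indices[700:850]                     # 15% used to tune the architecture (e.g. hidden units)
test_idx = indices[850:]                       # 15% used only to assess generalization

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(len(X_train), len(X_val), len(X_test))   # 700 150 150
```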

How does Learning happen in Neural Network?

Learning occurs when the weights inside the network get updated after many iterations.

For example, suppose we have inputs in the form of patterns for two different classes of patterns, ‘I’ and ‘O’, along with a bias b and a desired output y.

We want to classify each input pattern as either pattern ‘I’ or pattern ‘O’.

Following are the steps performed:

  • Nine inputs, x1 to x9, along with a bias b (an input whose value is always 1), are fed to the network for the first pattern.
  • Initially, the weights are initialized to zero.
  • Then the weights are updated for each neuron using the formula: Δwi = xi·y for i = 1 to 9 (Hebb’s Rule).
  • Finally, the new weights are found using the formula:
  • wi(new) = wi(old) + Δwi
  • wi(new) = [1 1 1 −1 1 −1 1 1 1 1]
  • The second pattern is input to the network. This time, the weights are not initialized to zero; the initial weights used here are the final weights obtained after presenting the first pattern. By doing so, the network retains what it learned from the first pattern.
  • The steps above are repeated for the second pattern.
  • The new weights are wi(new) = [0 0 0 −2 −2 −2 0 0 0].

So, these weights correspond to the network’s learned ability to classify the input patterns successfully.
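
The sketch below implements this Hebbian update in code. Since the article’s ‘I’ and ‘O’ pixel figures are not reproduced here, the two 3x3 patterns are illustrative guesses, so the resulting weights will not match the numbers above exactly.

```python
# A minimal sketch of the Hebbian update described above: delta_wi = xi * y, wi_new = wi_old + delta_wi.
# The 3x3 'I' and 'O' pixel patterns below are illustrative guesses, not the article's exact figure.
import numpy as np

pattern_I = np.array([-1, 1, -1,
                      -1, 1, -1,
                      -1, 1, -1])     # 9 inputs x1..x9 for class 'I', target y = +1
pattern_O = np.array([ 1, 1, 1,
                       1, -1, 1,
                       1, 1, 1])      # 9 inputs for class 'O', target y = -1

w = np.zeros(9)                        # weights initialized to zero
b = 0.0                                # bias weight (bias input is always 1)

for x, y in [(pattern_I, 1), (pattern_O, -1)]:
    w += x * y                         # Hebb's rule: delta_wi = xi * y
    b += 1 * y                         # bias update with input fixed at 1

print(w, b)
# After training, the sign of (x @ w + b) separates the two patterns:
print(np.sign(pattern_I @ w + b), np.sign(pattern_O @ w + b))
```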

Four Different Uses of Neural Networks

  • Classification — A neural network can be trained to classify a given pattern or data set into a predefined class. It uses feedforward networks.
  • Prediction — A neural network can be trained to produce outputs that are expected from a given input. E.g., stock market prediction.
  • Clustering — A neural network can be used to identify distinctive features of the data and classify them into different categories without any prior knowledge of the data.

The following networks are used for clustering -

  • Competitive networks
  • Adaptive Resonance Theory Networks
  • Kohonen Self-Organizing Maps.
  • Association — A neural network can be trained to remember a particular pattern, so that when a noisy pattern is presented to the network, the network associates it with the closest pattern in its memory or discards it. E.g., Hopfield networks, which perform recognition, classification, clustering, etc.

Neural Network for Pattern Recognition

Pattern recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about the categories of the patterns.

Some examples of patterns are a fingerprint image, a handwritten word, a human face, or a speech signal.

Given an input pattern, its recognition involves the following tasks -

  • Supervised classification — The given input pattern is identified as a member of a predefined class.
  • Unsupervised classification — The pattern is assigned to a hitherto unknown class.

So, the recognition problem here is essentially a classification or categorization task.

The design of pattern recognition systems usually involves the following three aspects -

  • Data acquisition and preprocessing
  • Data representation
  • Decision Making

Approaches Used For Pattern Recognition

  • Template Matching
  • Statistical
  • Syntactic Matching
  • Artificial Neural Networks

The following neural network architectures are used for pattern recognition -

  • Multilayer Perceptron
  • Kohonen SOM (Self Organizing Map)
  • Radial Basis Function Network (RBF)

Neural Network for Machine Learning

  • Multilayer Perceptron (supervised classification)
  • Back Propagation Network (supervised classification)
  • Hopfield Network (for pattern association)
  • Deep Neural Networks (unsupervised clustering)

Neural Network for Deep Learning

The following neural network architectures are used in deep learning -

  • Feed-forward neural networks
  • Recurrent neural network
  • Multi-layer perceptrons (MLP)
  • Convolutional neural networks

Continue Reading The Full Article, At — XenonStack.com/Blog