Deep learning is part of a broader family of machine learning methods based on learning data representations, "as opposed to task-specific algorithms."
TensorFlow is a framework for performing computation very efficiently, and it can tap into the GPU (Graphics Processing Unit) to speed things up even further. This makes a huge difference, as we shall see shortly. TensorFlow is controlled through a simple Python API, which we will be using in this article.
When a computation is written in most programming languages, it is executed directly. If you type a = 3*4 + 2 in a Python console, you immediately get the result. Running a series of mathematical computations like this in an IDE also lets you set breakpoints, stop execution and inspect intermediate results. This is not possible in TensorFlow; what you actually do is specify the computations that will be done later.
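To see the difference, compare plain Python with a TensorFlow graph. A minimal sketch, assuming the TensorFlow 1.x API where graphs and sessions are explicit:

```python
import tensorflow as tf

a = 3 * 4 + 2           # plain Python: evaluated immediately, a == 14
b = tf.constant(3) * tf.constant(4) + tf.constant(2)
print(b)                # prints a Tensor description, not 14; nothing has run yet

with tf.Session() as sess:
    print(sess.run(b))  # only now does the graph execute: prints 14
```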
This is accomplished by creating a computational graph, which takes multidimensional arrays called "tensors" and performs computations on them. Each node in the graph denotes an operation. When creating the graph, you can explicitly specify where each computation should be done: on the GPU or the CPU. By default, TensorFlow checks whether a GPU is available and uses it if so.
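A short sketch of explicit device placement (again assuming TensorFlow 1.x; the device strings "/cpu:0" and "/gpu:0" are the conventional names, adjust them for your machine):

```python
import tensorflow as tf

# Pin each part of the graph to a device explicitly.
with tf.device("/cpu:0"):
    a = tf.random_normal([1000, 1000])
with tf.device("/gpu:0"):
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)   # this multiplication is placed on the GPU

# log_device_placement=True prints which device ran each operation.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(c)
```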
The graph is run in a session, where you specify which operations to execute in the run function. Data from outside may also be supplied through placeholders in the graph, so you can run it multiple times with different inputs. Furthermore, intermediate results (such as model weights) can be incrementally updated in variables, which retain their values between runs.
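A minimal sketch of placeholders and variables (TensorFlow 1.x; the names x, w and update are illustrative):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])  # data supplied at run time
w = tf.Variable(tf.ones([3, 1]))                 # state retained between runs
y = tf.matmul(x, w)
update = w.assign(w * 2.0)                       # incrementally update the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))  # [[6.]]
    sess.run(update)                                   # w now holds all 2s
    print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))  # [[12.]]
```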
The code example below creates pairs of random matrices and clocks their multiplication as a function of matrix size and device placement.
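A minimal sketch of such a benchmark, assuming TensorFlow 1.x (the matrix sizes are illustrative):

```python
import time
import tensorflow as tf

def time_matmul(device, size):
    # Build and run one size x size matrix multiplication on the given device.
    tf.reset_default_graph()
    with tf.device(device):
        a = tf.random_normal([size, size])
        b = tf.random_normal([size, size])
        c = tf.matmul(a, b)
    with tf.Session() as sess:
        start = time.time()
        sess.run(c)
        return time.time() - start

for size in [500, 1000, 2000, 4000]:
    cpu_time = time_matmul("/cpu:0", size)
    gpu_time = time_matmul("/gpu:0", size)
    print("size %d: CPU %.3f s, GPU %.3f s" % (size, cpu_time, gpu_time))
```

Note that the GPU timing includes one-time initialization on the first call, which matches the initial delay noted in the figure caption below.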
You can see that the GPU (a GTX 1080 in my case) is much faster than the CPU (an Intel i7). Back-propagation is almost exclusively used today when training neural networks, and it can be stated as a series of matrix multiplications (the forward and backward passes). That's why GPUs are so important for quickly training deep-learning models.
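To make that concrete, here is a minimal NumPy sketch of the forward and backward pass of a single dense layer; both directions reduce to matrix multiplications (all names and sizes are illustrative):

```python
import numpy as np

batch, n_in, n_out = 64, 256, 128
x = np.random.randn(batch, n_in)    # input activations
W = np.random.randn(n_in, n_out)    # layer weights

# Forward pass: one matrix multiplication.
y = x @ W

# Backward pass: given the upstream gradient of the loss w.r.t. y,
# both gradients are again matrix multiplications.
dy = np.random.randn(batch, n_out)  # stand-in for the upstream gradient
dW = x.T @ dy                       # gradient w.r.t. the weights
dx = dy @ W.T                       # gradient w.r.t. the input
```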
CPU time in green and GPU time in blue. The initial GPU delay at the first iteration is perhaps due to TensorFlow's one-time startup costs.
TensorFlow is an awesome framework for deep learning, and it is very useful for deep-learning projects. I have used it for some deep-learning projects on Kaggle; if you want to see them, you can go to this Link.
Tell me what you think about this, and if you enjoyed reading, click on the clap 👏 button.
If you want to learn more about Machine Learning, Data Science or Deep Learning, follow our publication Hacker Noon.
Thanks to everyone.