Machine and Deep Learning are among the hottest fields of recent years. We are witnessing tremendous achievements in almost every industry in the world, thanks to talented researchers who know how to harness Machine Learning to create amazing products.
Therefore, it isn’t a surprise that so many people want to enter this world and wish to start learning the basic elements of these subjects. And what is the best way to learn about a new subject? Books, that's right!
I asked the members of the “Machine & Deep Learning Israel” community to vote for the best Machine & Deep Learning books, and here are the results:
This new book, The Hundred-Page Machine Learning Book, was written by Andriy Burkov and became the #1 best seller in the Machine Learning category almost instantly. Andriy took a complex topic and managed to write about it in a very clear and understandable way. This book will bring order to many topics you may already be familiar with and even teach you new things about the field. Give it a try.
The abstract of the book:
Become a machine learning expert. Step up your career.
Today’s top companies undergo the most significant transformation since industrialization. Artificial Intelligence disrupts industries, the way we work, think, interact. Gartner predicts that by 2020 AI will create 2.3 million jobs, while eliminating 1.8 million. Machine Learning is what drives AI. Experts in this domain are rare, employers fight for the ML-skilled talent. With this book, you will learn how Machine Learning works. A hundred pages from now, you will be ready to build complex AI systems, pass an interview or start your own business.
All you need to know about Machine Learning in a hundred pages
Supervised and unsupervised learning, support vector machines, neural networks, ensemble methods, gradient descent, cluster analysis and dimensionality reduction, autoencoders and transfer learning, feature engineering and hyperparameter tuning! Math, intuition, illustrations, all in just a hundred pages!
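As a small illustration of one item on that list (my own toy sketch, not an example from the book), here is gradient descent in a few lines of Python, fitting a one-parameter linear model by repeatedly stepping against the gradient of the squared error:

```python
# Toy gradient descent: fit y = w * x by minimizing mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with the true weight w = 2

w = 0.0    # initial guess
lr = 0.01  # learning rate

for _ in range(1000):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges toward 2.0
```

The same loop, with the gradient computed automatically over millions of parameters, is essentially what trains the neural networks mentioned above.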
The book that got the most votes is “Understanding Machine Learning: From Theory to Algorithms” by Shai Shalev-Shwartz and Shai Ben-David. It was first published in 2014 by Cambridge University Press, aimed at students who want to learn the basics of Machine Learning and become familiar with all the important algorithms in the field.
The abstract of the book:
Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.
The second book is “Deep Learning” by Ian Goodfellow, Yoshua Bengio, and Aaron Courville, with Francis Bach as editor. The book came out in 2016 and is considered one of the best books about Deep Learning. It took more than two and a half years to write, and it explains all the mathematics you need before tackling the Machine and Deep Learning algorithms covered later in the book.
The abstract of the book:
“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.”
―Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.
The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.
Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
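To make the “hierarchy of concepts” idea above concrete, here is a toy two-layer feedforward network in plain Python (my own illustration, not code from the book): each layer computes new features from the previous layer's outputs, so stacking layers builds increasingly abstract representations:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer with a tanh non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]  # raw input features

# hidden layer: 2 units, each combining both inputs (weights chosen arbitrarily)
h = dense(x, [[0.8, -0.2], [0.3, 0.9]], [0.1, -0.1])

# output layer: 1 unit built on top of the hidden features
y = dense(h, [[1.0, -1.0]], [0.0])

print(len(h), len(y))  # 2 hidden units feeding 1 output
```

A real deep network is the same construction with many more layers and with the weights learned from data rather than fixed by hand.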
This book, “The Elements of Statistical Learning”, was one of our community members’ favourites, with many people recommending it. It was written by Trevor Hastie, Robert Tibshirani, and Jerome Friedman and first published in 2001. Since then it has been updated several times, and the latest version was released in 2013.
The Abstract of the book:
This book describes the important ideas in a variety of fields such as medicine, biology, finance, and marketing in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of colour graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book’s coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting, the first comprehensive treatment of this topic in any book.
This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorisation, and spectral clustering. There is also a chapter on methods for “wide” data (p bigger than n), including multiple testing and false discovery rates.
The “Pattern Recognition and Machine Learning” book was written by Christopher M. Bishop in 2006 and has helped many students learn the art of Machine Learning. An important advantage this book has over the others is the vast set of exercises and questions at the end of the book, which can help you practice and improve your Machine Learning skills.
The Abstract of the book:
This is the first textbook on pattern recognition to present the Bayesian viewpoint. The book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible. It uses graphical models to describe probability distributions, at a time when no other book applied graphical models to machine learning. No previous knowledge of pattern recognition or machine learning concepts is assumed. Familiarity with multivariate calculus and basic linear algebra is required, and some experience in the use of probabilities would be helpful though not essential, as the book includes a self-contained introduction to basic probability theory.
Last but not least is “Pattern Classification”, written by Richard O. Duda, Peter E. Hart, and David G. Stork in 1973 (that’s not a mistake). Later on, a 2nd edition with more up-to-date information was released.