There is arguably nothing more exciting happening in software development right now than the new research and advances in Artificial Intelligence. Terms like Machine Learning, Deep Learning, and, of course, Artificial Neural Networks have become common in developers' vernacular.
These are exciting times for AI, and not without reason: the technology continues to advance at an unprecedented pace and is becoming more embedded in our daily lives, with novel applications making the news almost weekly.
In the last year, I got hooked on machine learning and AI, and I'm a firm believer that AI will drastically change our relationship with technology in almost every market and niche you can imagine.
One particular aspect of AI that has caught my attention is the development and application of Artificial Neural Networks, or ANNs. Personally, I find the concept of emulating organic brain processes not only interesting but also quite promising; on top of that, ANNs are one of the technologies that has seen a lot of progress in the last few years.
However, we are nowhere near the point of democratization of this technology; that is to say, the barrier to entry is still fairly high, albeit consistently dropping. For many developers and enthusiasts, this means that many aspects of AI remain out of reach.
If you are like me, you are a software developer, perhaps with a Computer Science background or perhaps self-taught with years of professional experience; and yet, when looking at AI programming topics like Neural Networks, you feel out of your depth, facing what seems an insurmountable knowledge gap.
The objective of this series is twofold. First, to serve as a learning notebook for my own study of the basic, essential knowledge required to work with artificial neural networks; and second, to make my notes available, structured in a way that is useful to other developers facing the same challenges I currently do.
The topics I have selected to focus on are the following:
- Part 1: Introduction
- Part 2: Components of a Neural Network
- Part 3: Topologies
- Part 4: Supervised Learning
- Part 5: Unsupervised Learning
- Part 6: Gradient Descent
- Part 7: Back-propagation
- Part 8: Building a Simple Neural Net
- Part 9: Adding Some Bias
Keep in mind that this series is by no means a comprehensive breakdown of each of the topics listed above. Rather, it breaks each one down just enough to let you and me apply and work with these concepts when building Neural Networks. So don't expect a deep dive into math or theory; I'm keeping both as light as possible, within reason.
Finally, since I approach this series from the perspective of a student, without a formal academic background in the field, there is plenty of room for inaccuracies and errors; if you find one, don't hesitate to leave a comment or contact me directly.
This article was originally posted on my own site.