Index to “Interviews with ML Heroes”
Today I’m honoured to be talking to one of the great contributors to the Kaggle Noobs community: Mamy André-Ratsimbazafy.
Mamy is currently working as a freelance Deep Learning consultant and Blockchain dev.
In a “previous life”, he has worn many other hats.
About the Series:
I have very recently started making some progress on my self-taught Machine Learning journey. But to be honest, it wouldn’t have been possible at all without the amazing online community and the great people that have helped me.
In this series of blog posts, I talk with people who have really inspired me and whom I look up to as role models.
The motivation behind doing this is that you might see some patterns and, hopefully, be able to learn from the amazing people that I have had the chance of learning from.
**Sanyam Bhutani:** Hello Mamy, thank you for taking the time to do this.
**Mamy André-Ratsimbazafy:** Hello Sanyam, the pleasure is mine.
**Sanyam Bhutani:** You’ve had a very interesting path into ML. You picked up programming very recently, and today you are working as an established freelancer.
How did you get interested in Machine Learning and even programming at first?
**Mamy André-Ratsimbazafy:** To be honest, I was bored at the end of 2016: I like to spend my free time on time-consuming hobbies, had run out of those, and so set out to find a new one. Here is what I think is the full story:
As a kid, those hobbies were reading and playing games. I suppose even then, in Baldur’s Gate 2 (a role-playing game from 2000), I was building some kind of “AI” to cast healing and contingency spells when appropriate.
In 2004, I became fascinated with the game of Go for about 8 years, a period culminating in my participation in the French national championship in 2012. I closely followed the first breakthroughs in Go artificial intelligence with Monte-Carlo Tree Search (MoGo from Sylvain Gelly et al., 2006, and Crazy Stone from Rémi Coulom, 2006). I remember watching official 9x9 matches against invited pro players at the Paris tournament, with a lot of pressure on both sides. Note that Sylvain Gelly often co-authored papers with a certain David Silver, of DeepMind fame.
In 2009, I started my dive into the technical side of computing: I tried to learn Linux by installing not the most popular distribution but Gentoo, one where you need to build everything yourself from source.
In 2010, I wanted to learn programming beyond bash, Excel and .bat scripts, so I looked for a language and chose not the most popular one but the strangest one (see a theme?): Haskell, a language with neither for-loops nor if-then-else branches. I started learning it with Project Euler, a website that offers math-themed exercises to solve in any programming language.
I think that was also the year I heard of Kaggle, though I dismissed it at the time; it sounded like I needed a super powerful machine. That’s also when I heard about Bitcoin, which sounded super interesting, though I dismissed it as well due to lack of time (alas!).
In 2015, I wanted to write my own Go-playing bot and, why not, learn a new programming language at the same time. So, as usual, I chose the unknown new kid on the block with the best speed prospects: Rust. I got a baseline bot working after 3 months and did a lot of research, but I spent too much time learning and fighting Rust instead of doing reinforcement learning, and afterwards I was too tired to improve upon it.
At the end of 2016, I wanted to learn something new again and remembered Kaggle; like many Kagglers, my machine learning journey started with the Titanic. I bought an Nvidia GPU in January 2017, and the Data Science Bowl 2017 (DSB), with 160GB of 3D lung CT scans and $1M in prizes, was announced the day after. I spent the weekend on digit recognition and quickly jumped into the DSB (and stopped because preprocessing 160GB of 3D images was so slow and I couldn’t beat the baseline).
Following that, I wanted to understand the low-level details and solve the bottlenecks, so I implemented Karpathy’s Hacker’s Guide to Neural Networks in another language that had caught my eye: Nim, a language as fast as C with the syntax of Python, which I thought had a lot of potential for data science. Afterwards, I continued, and am still continuing, my Nim and data science journey by implementing Numpy + Scikit-Learn + PyTorch/Tensorflow/Keras functionality in my own library, written from the ground up. I still enter Kaggle competitions from time to time to make sure I don’t lose touch with actual data science needs, though I have trouble dedicating more than 2 weeks to those, as it competes with my library development time.
**Sanyam Bhutani:** You’re currently working as a Deep Learning consultant and Blockchain dev. Can you tell us more about the projects that you work on?
**Mamy André-Ratsimbazafy:** Currently I’m working at Status, a blockchain startup that provides encrypted chat, a browser for decentralized Ethereum applications, and a secure mobile wallet. One way to describe blockchain is as a revolution of trust: it allows two parties with differing interests to trust each other without a third-party enforcer. While in Western countries third parties are usually reliable (if costly), this is not the case in Africa, for example. Furthermore, a mobile phone is often the only portal to the internet in these locations, hence Status’s mobile focus for a global reach.
At Status, I’m doing research and development on the future Ethereum 2.0, in close collaboration with the Ethereum Foundation and several other teams. This involves cryptography, robust testing, virtual machine implementation, networking, game theory, psychology, and being certain that people will try to exploit flaws in your code.
I’m also available for short data science engagements: from strategic advice to practical projects, courses, conferences, hardware setup, and how to hire data scientists and what to look for.
And I’m still building my machine learning ecosystem, with the aim of bridging the gap between research and production.
**Sanyam Bhutani:** You’re also an active contributor to open source.
Could you maybe tell us a bit about your open source projects?
**Mamy André-Ratsimbazafy:** I’m actually one of the lucky few who are paid to work on open-source projects, so everything I do on Ethereum and Status is open source. Besides that, all my Kaggle projects are open-sourced after the competitions, and as I said before, I’m working on Arraymancer, a data science library written from scratch in Nim. In December 2018, after 18 months of work, I released v0.5, which brought very exciting features, notably: Numpy .npy file support, HDF5 support, and recurrent neural networks, with a text generation example on the works of Shakespeare and Jane Austen.
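For readers curious what Arraymancer code looks like, here is a minimal sketch based on its documented tensor basics (a quick taste, not an endorsement of exact procedure names, which may vary between versions):

```nim
import arraymancer

# Build a small 2x2 tensor from a nested array literal.
let a = [[1.0, 2.0],
         [3.0, 4.0]].toTensor

echo a * a          # `*` is matrix multiplication for 2-D tensors
echo a.sum          # reduction over all elements
echo a.transpose    # swap the two axes
```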
**Sanyam Bhutani:** I’d love to know more about your library.
What is your vision for it, and why pick Nim?
**Mamy André-Ratsimbazafy:** I have several goals with it. First, I’d like to do a Kaggle competition with it; it would probably be a deep learning competition, because data frames are harder to write from scratch than neural networks. Second, I would like to reproduce the AlphaGo paper with it. Third, I’d like to write a StarCraft II AI in it and participate in the Blizzard and DeepMind AI challenges.
Regarding Nim, it’s a very flexible language. It has a syntax similar to Python’s, and you can implement your own operators (instead of lobbying to get the ugly @ matrix-multiplication operator, as in Python). As it compiles to C or C++, it also benefits from decades of compiler optimizations, GPU interoperability is easy, and you can write libraries that can be linked to from any language. Also, it compiles quite fast, in a couple of seconds at most (unlike C++ projects like PyTorch or Tensorflow), which gives it a scripting feel. Furthermore, as it’s a compiled language, users don’t need to deal with Python 2.6, 2.7, 3.5, 3.6, pip/conda, and all the dependencies: you can just ship a single file for deployment.
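To make the custom-operator point concrete, here is a minimal sketch; the `Vec` type and the `*.` element-wise operator are illustrative choices for this example, not any library’s API:

```nim
type Vec = seq[float]

# Any combination of operator symbols can be defined as a new infix operator.
proc `*.`(a, b: Vec): Vec =
  assert a.len == b.len
  result = newSeq[float](a.len)
  for i in 0 ..< a.len:
    result[i] = a[i] * b[i]   # element-wise product

echo @[1.0, 2.0, 3.0] *. @[4.0, 5.0, 6.0]   # @[4.0, 10.0, 18.0]
```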
**Sanyam Bhutani:** What kind of challenges are you looking for today?
**Mamy André-Ratsimbazafy:** I have several short-term challenges for 2019. The first is providing the fastest primitives for my library. When implementing Arraymancer, I noticed several bottlenecks on CPU, which I researched; I’m confident I can provide substantial improvements over state-of-the-art libraries like PyTorch and Tensorflow, even taking Intel MKL-DNN into account. For example, the default exp and log functions provided in C can be improved 10x, and those are bottlenecks in activation and loss functions (sigmoid, softmax), especially for large activations, as in language models or reinforcement learning with a huge action space.
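For a sense of where that exp cost shows up, here is a plain softmax sketch in Nim (my own illustrative code, not Arraymancer’s actual kernel): exp runs once per logit, so a language model with a 50,000-word vocabulary pays for it 50,000 times per prediction.

```nim
import math, sequtils

# Numerically stable softmax: the per-element `exp` is the hot spot.
proc softmax(logits: seq[float]): seq[float] =
  let maxLogit = logits.foldl(max(a, b))       # subtract the max for stability
  let exps = logits.mapIt(exp(it - maxLogit))  # one exp call per logit
  let total = exps.foldl(a + b)
  exps.mapIt(it / total)

echo softmax(@[1.0, 2.0, 3.0])
```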
The second is creating data visualization for my library. I have been impressed by the Vega project, especially the exploratory data analysis potential of Voyager.
The third is developing reinforcement learning building blocks. While plenty of designs had already been explored for both ndarrays and deep learning when I started Arraymancer, reinforcement learning is pretty much a blank slate. I’ve already started by wrapping the Arcade Learning Environment for training agents on Atari games.
**Sanyam Bhutani:** I want to go back a little bit and ask about your background.
With almost no coding background, you quickly picked up Machine Learning and became well-established in the field.
Could you tell us more about your journey?
The common belief is that one needs many years of coding experience to even get started with ML.
**Mamy André-Ratsimbazafy:** I basically learned to code beyond “exercises” while doing ML. Like many, I often had to look into project documentation (especially Pandas) and Stack Overflow, but in any case, what helped me was just knowing how variables work, what a for loop is, what an if-then branch is, and what a function is. What we use in ML is in any case quite specific to numerical computing, so what you learn on a web project, for example, won’t really help you beyond the basic programming constructs and, very importantly, versioning your experiments. Now, being experienced will help you a lot when doing a Kaggle competition and your project starts getting huge, to avoid you and your teammates getting lost in the code.
**Sanyam Bhutani:** What are your thoughts about most job postings requiring a Master’s or Ph.D. level of expertise for ML?
Having a “non-traditional” background, how can one find work in this field?
**Mamy André-Ratsimbazafy:** I think most recruiters and companies are not mature enough to evaluate candidates. Many are still building up their teams and don’t have the in-house expertise to look for external signs of competence when recruiting. I’m much more concerned about the experience requirements. The rise of deep learning came in 2012 with AlexNet. Outside the US, I would guess that many data-science-focused master’s programs were created in universities around 2014, so most new hires with an actual data science degree would have at most about 3 years of experience. Most people with more experience would probably be self-taught.
As I said at the start, companies are looking for signs of competence, and the best way to show them is a portfolio. Kaggle competitions and personal machine learning projects are an excellent way to build one. Pick your favorite sport/movie genre/games/food, find a dataset and analyze it. In the interview, show how it relates to the problems of the company you are interviewing for. You will be on familiar ground and will have already worked on it for hours.
Another important thing is networking. Before applying, try to talk to people working in the same role at several companies. For example, data scientist is a bit of a kitchen-sink role, with every company having a different scope for it. How do you talk to them? Just say that you’re quite interested in their day-to-day work, their challenges and responsibilities, and offer to meet over a coffee. LinkedIn is quite useful for that.
**Sanyam Bhutani:** For the readers and noobs like me who want to become better practitioners, what would be your best advice?
**Mamy André-Ratsimbazafy:** Don’t spend too much time on theory and courses without applying your newfound skills practically, on real datasets. As one might say, “no plan survives first contact with the enemy”; we can also say, “no theory survives first contact with reality”. You need to own and internalize those skills, and the only way to do that is by using them.
**Sanyam Bhutani:** Given the explosive growth rate of ML, how do you stay updated with the recent developments?
**Mamy André-Ratsimbazafy:** I’m not trying anymore :P.
**Sanyam Bhutani:** What developments in the field do you find to be the most exciting?
**Mamy André-Ratsimbazafy:** BERT in natural language processing is game-changing; in general, the domain adaptation field is quite exciting. I’m also closely following the advances in reinforcement learning and just bought the new edition of the Sutton and Barto book. As a lesser-known avenue of research, I’m quite interested in Bayesian deep learning as a way to represent uncertainty.
**Sanyam Bhutani:** What are your thoughts about Machine Learning as a field? Do you think it’s overhyped?
**Mamy André-Ratsimbazafy:** Blockchain is more overhyped ;).
Machine Learning is a very interesting field from both a practical and a research point of view. I’ve always been a “field butterfly”: I like math, physics, biology, economics, marketing, psychology, computer science … On the practical side, ML allows you to apply your skills to plenty of fields: wildlife and nature identification, cancer prevention, music recommendation, traffic prediction (website or urban traffic), energy consumption prediction, sports … so you hopefully won’t get bored.
From the research point of view, there are many things you can investigate, drawing inspiration from computer science and optimisation, operations research, statistics and probabilistic programming, psychology, economics, decision science and game theory for reinforcement learning, human and animal perception for computer vision and sound, linguistics for natural language processing, and plenty of other fields I’m missing.
**Sanyam Bhutani:** Before we conclude, any tips for beginners who aspire to work in this domain but feel too overwhelmed to even start competing?
**Mamy André-Ratsimbazafy:** Competition anxiety is a thing: having the passion and drive to do well but fearing you’ll fall short of your own expectations might prevent you from starting in the first place. Some might say that it’s a way to keep an excuse at hand: “I would have done well if I had participated, but I couldn’t for reason X or Y.”
I didn’t have this issue with ML, but I had to deal with it a lot while playing Go at a competitive level, and I’ve seen it discussed in other competitive games.
The first step is to realize that you are indeed anxious about doing something that someone (probably you) can judge you on.
The second step is to ask: does it motivate you (good) or stifle you (bad)? If it stifles you, you need to remember that 2 years later, once you’re a master or a grandmaster, no one will look into your history to judge you on your first early steps. This is like learning how to walk, learning a new language, or being a beginner in a sport: you will make plenty of mistakes. If someone ever mocks you for your early beginner mistakes, well, you just got a low-cost detector of toxic personalities and environments you don’t want to be around.
One thing that helped me a lot when I began playing Go was having a rival: it’s much easier to keep yourself motivated when you are not alone and can banter and discuss what you just tried or learned in the past week with a friend at the same level.
**Sanyam Bhutani:** Thank you so much for doing this interview.
If you found this interesting and would like to be a part of My Learning Path, you can find me on Twitter here.
Kaggle Noobs is the best community for Kaggle, where you can find Mamy, Kaggle Grandmasters, Masters, and Experts, and it’s a community where even noobs like me are welcome.
Come join if you’re interested in ML/DL/Kaggle.
If you’re interested in reading about Deep Learning and Computer Vision news, you can check out my newsletter here.