Index to “Interviews with ML Heroes”
Today, I’m super excited to be talking to Dr. Leslie Smith.
I’m sure Leslie needs no introduction to our friends from the fast.ai community. For our readers from outside of fast.ai:
Leslie is currently working as a Senior Research Scientist at the Naval Center for Applied Research in AI, United States Naval Research Laboratory.
His past research includes deep neural networks and reinforcement learning applied to robotics. Prior to that, he worked in the Maritime Surveillance Section.
He has a background in Chemistry and earned his Ph.D. in Quantum Chemistry.
His research objectives are to perform innovative scientific research and algorithm development in the areas of computer vision, machine learning, robotics, and sparse representations.
About the Series:
I have very recently started making some progress with my Self-Taught Machine Learning Journey. But to be honest, it wouldn’t have been possible at all without the amazing online community and the great people who have helped me.
In this Series of Blog Posts, I talk with people who have really inspired me and whom I look up to as my role models.
The motivation behind doing this is that you might see some patterns and, hopefully, be able to learn from the amazing people that I have had the chance to learn from.
**Sanyam Bhutani:** Hello Leslie, thank you for taking the time to do this.
**Dr. Leslie Smith:** I am honored that you thought to invite me for this interview.
**Sanyam Bhutani:** You’re currently performing scientific research and algorithm development in the areas of computer vision, machine learning, robotics, and sparse representations.
You’ve been working at the Naval Research Laboratory for more than 15 years now.
Could you tell the readers about how you got started with Machine Learning? What got you interested in Machine/Deep Learning at first?
**Dr. Leslie Smith:** For the first several years that I worked at the Naval Research Laboratory, I was working in computer vision and related areas. I took notice of neural networks winning the 2012 ImageNet Challenge by a wide margin and of Google’s “Cats” paper, “Building high-level features using large scale unsupervised learning”. In 2013 I started trying a few things and found I was fascinated by the field. Over the next year, I shifted my focus to deep learning, and I think most of the other researchers in computer vision did the same in the following couple of years.
**Sanyam Bhutani:** You had a background as a business founder before switching to research.
What made you pick research as a career path?
**Dr. Leslie Smith:** Actually, I picked research as a career when I was 10 years old. I was interested in science, and a family trip to the New York World’s Fair in 1964 was fascinating to me. I chose to focus on Chemistry, but in university I found physics more interesting, so my Ph.D. is in Chemical Physics. Unfortunately, my postdoc at Princeton University didn’t meet my expectations, so I left to work in industry. After 8 years in industry, I thought I could create my own business. It took me a decade to realize that running a business was not right for me. I joke that I couldn’t sell $20 bills for $5. After some soul-searching, I realized that doing research is what I find most satisfying. That set me on the road to where I am. Now I really love what I do.
**Sanyam Bhutani:** Can you tell us more about what you’re currently working on?
Are you secretly building our Robot AI Overlords? :D
**Dr. Leslie Smith:** Of course. Isn’t everyone! :-)
I am working on lots of ideas, which is part of the fun. Fortunately, I am creative and I have many more ideas than I could possibly find time to work on. I joke that it is hard to get everything done in a 40-hour day! But I try. Some of the ideas that I am working on include training networks without weight decay (one less hyper-parameter), online batch selection via importance labeling of data, combining novelty detection with few-shot learning, dynamic data augmentation, automatic data labeling and label cleaning, and defending against adversarial examples. I can’t work on all of these in a single day, so each day I pick one topic to focus on.
**Sanyam Bhutani:** You’ve been working as a researcher for quite a few years now. What has been your favorite project during these years?
**Dr. Leslie Smith:** It is always the next one. I get super excited by my ideas, which gives me the motivation to work hard at them.
**Sanyam Bhutani:** For the readers who are curious about what a day in the life of a researcher looks like, can you give us an insight?
How much time do you spend on experimenting vs. exploring new ideas?
**Dr. Leslie Smith:** I don’t know about other researchers, but a majority of my time is spent on reading, experimenting, writing, email, and talking with people. Reading and experimenting are the catalysts for most of my ideas.
**Sanyam Bhutani:** Could you tell us a bit about how you decide to start a new experiment? What kinds of problems or questions pique your interest?
**Dr. Leslie Smith:** Well, if I am reading a paper and it leads to an idea, a big factor is whether the authors made their code available. If so, I’ll download it and run it to replicate their experiments. Then I can quickly try out my own idea to see if it makes sense. Quick prototyping is important to me here. Also, their code provides a baseline.
Another important factor is my confidence in how likely the idea is to work. If I am confident, I am more motivated to try it than if not.
**Sanyam Bhutani:** Once you’ve finally found an idea that you want to explore, what are your go-to techniques? How do you approach a question or idea when getting started?
**Dr. Leslie Smith:** Start simple. As I said, code on GitHub is a good start and provides a baseline. For example, when I started with few-shot learning, the code for prototypical networks and MAML was available as a starting point. Then I try tweaking everything, just to be sure my intuition about how it should work matches reality.
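For readers who haven’t seen prototypical networks before, here is a minimal sketch (not from the interview) of their core step: average each class’s support embeddings into a prototype, then score query embeddings by their negative squared distance to the prototypes. The function name, tensor shapes, and the separate embedding network are illustrative assumptions.

```python
# Minimal sketch of the prototype-and-distance step in prototypical networks.
# The embedding network, episode sampling, and training loop are assumed/omitted.
import torch

def prototypical_logits(support_emb, support_labels, query_emb, n_classes):
    # support_emb: (n_support, d), query_emb: (n_query, d)
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])  # (n_classes, d): one prototype per class
    # Logits = negative squared Euclidean distance to each prototype.
    return -(torch.cdist(query_emb, prototypes) ** 2)  # (n_query, n_classes)

# Training would minimize cross-entropy over these logits, e.g.
# loss = torch.nn.functional.cross_entropy(prototypical_logits(s, ys, q, n), yq)
```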
**Sanyam Bhutani:** A part of research is knowing when to put an end to an experiment and when to continue experimenting.
For our readers who might get disheartened if their idea isn’t working at first, or who might have been obsessing over an idea for a long while even when things aren’t working, can you tell us how you decide between the two?
**Dr. Leslie Smith:** When an idea is not working out, I must understand why. Is it a mistake I made in the code? Or was I wrong in my thinking? If I was wrong, I learn from it and update my intuition. In this way, failed experiments are learning experiences, which are valuable.
**Sanyam Bhutani:** For the readers and beginners who are dreaming of doing research in this domain, what would be your best advice?
**Dr. Leslie Smith:** Learn from the best and learn from everything. I admire Yoshua Bengio and read most of his papers (and there are lots of them). Also, don’t be afraid to fail or to ask stupid questions; everything is a learning experience. On the other hand, don’t make the same mistake twice. I write everything down in my lab notebook and review it regularly, and I will often take time to quietly think about items to see where my thoughts lead me.
**Sanyam Bhutani:** Many people are of the opinion that making significant contributions to the field requires one to be a post-grad or to have research experience.
For the readers who want to take up Machine Learning as a career path, do you feel having research experience is a necessity?
**Dr. Leslie Smith:** I believe deep learning is transitioning from mostly research to mostly engineering. There is a world of new potential applications yet to be created. There are new factors that will become important, such as verification, reproducibility, explainability, and integrity. Real-world data is substantially different from research benchmarks like MNIST and ImageNet. Hence, research experience is not necessary to do many of these things.
**Sanyam Bhutani:** Another opinion that is a “mental barrier” to many is that doing Machine Learning or Machine Learning research requires a cluster of GPU servers and expensive hardware in order to make significant contributions.
What are your thoughts on this opinion?
**Dr. Leslie Smith:** As I was just saying, data is more important than hardware. Google researchers have access to thousands of GPUs and TPUs, but I don’t. It doesn’t prevent me from doing what I can. I’d say my message is “find your own niche”. Do something no one else thought to do.
**Sanyam Bhutani:** Given the explosive growth rates in research, how do you stay up to date with the cutting edge?
**Dr. Leslie Smith:** I spend many, many hours every week keeping up to date, but that is just me. On Mondays and Wednesdays, I look through all the new papers on arXiv.org to find ones that might be relevant. This has become a ritual for me. For the ones that look relevant, I read the abstract and skim the paper. If a paper catches my interest, I print it. I spend most of my weekend reading these papers. This takes hours, but I stay up to date, and reading is a great source of intuition and new ideas.
**Sanyam Bhutani:** Do you feel Machine Learning has been overhyped?
**Dr. Leslie Smith:** Of course it is. I like to paraphrase the quote: “All machine learning is wrong, but sometimes it is useful”. It has proven useful in computer vision, machine translation, and speech recognition, to name a few. I recommend finding new ways to make it useful.
**Sanyam Bhutani:** I think the entire fast.ai community is grateful to you for your research on the “1cycle” learning policy.
What are your thoughts about the fast.ai course and the community?
**Dr. Leslie Smith:** I hold Jeremy Howard in high regard. He saw the need for practical deep learning and generously provided a wonderful solution. The community that he created is vibrant and strong.
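For readers curious about the 1cycle policy mentioned above, here is a minimal sketch using PyTorch’s built-in OneCycleLR scheduler (the same idea that fast.ai exposes as fit_one_cycle). The toy model, data, and hyper-parameter values are placeholders for illustration, not something Dr. Smith prescribes.

```python
# Minimal sketch of the 1cycle policy via PyTorch's OneCycleLR scheduler.
# The toy model, data, and hyper-parameter values are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.1, total_steps=1000)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))  # dummy batch
for step in range(1000):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # LR warms up to max_lr, then anneals; momentum cycles inversely
```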
**Sanyam Bhutani:** Before we conclude, any tips for beginners who are afraid to get started because of the idea that Deep Learning is an advanced field?
**Dr. Leslie Smith:** Beginnings are always the hardest part. As a scientist, I view life as a series of experiments. Make a deal with yourself to try it for a time that is long enough to know if it is right for you. If it is not, stop and go on to something else. Knowing it is an experiment and not a commitment makes it a lot easier to try things.
**Sanyam Bhutani:** Thank you so much for doing this interview.
If you found this interesting and would like to be a part of My Learning Path, you can find me on Twitter here.
If you’re interested in reading about Deep Learning and Computer Vision news, you can check out my newsletter here.