

My journey with machine learning started in high school. I was lucky and motivated enough to get my hands on old textbooks about artificial neural networks. It was 2000. Friends was hugely popular on TV, Eminem was dropping hit after hit, and there I was, an utter geek, perplexed and blown away by object recognition systems, which at that time were scientific curiosities. My first paper, "Seeing and Recognizing Objects as Physical Process: Practical and Theoretical Use of Artificial Neural Networks", written at the age of 18, was my juvenile attempt at becoming a scientist. It won me scholarships and entry to the best universities in the UK (Cambridge) and the Netherlands (Groningen), and eventually unlocked an academic career in the computational biophysics of proteins. Finally, I was lucky enough to combine scientific expertise and a love affair with machine learning into an AI venture, Peptone.
However, my academic development path was neither rosy nor romantic. It was a mix of excitement and getting my "golden behind" kicked by my mentors and academic supervisors, with the dominant contribution coming from the latter. The biggest pain of being an "academic wunderkind" was scientific writing. I was seriously repelled by it. I was absolutely convinced I was wasting my time, rather than doing more productive things in the lab.
Boy, was I wrong!
It is quite amusing to see how ideas change with time and experience, especially when you reach a turning point in your career and start contributing to the field you admired as a geeky teenager. However, let me cut my hubristic autobiographical note short and jump straight to the problem.
Only a few days ago, I stumbled upon an article in MIT Technology Review, which motivated me to write this short post.
Is AI Riding a One-Trick Pony? Are we making progress, or simply circling around in an endless pool of nets, optimizers, architectures, normalization methods and image recognition approaches?
I am afraid we are.
Let me share my take on this. Please bear in mind this is my private opinion, born out of numerous hours spent reading machine learning / AI papers and trying to adapt their findings to the problems we are working on at Peptone, namely automated protein engineering.
Why is all of the above important? All of us need to be able to separate innovation from cyclically recurring patterns of work, which not only slow down progress in the field of machine learning but, more importantly, trigger bizarre levels of anxiety among the public, the press, and science and tech investors, eventually leading to headlines like this:
Please don't take this the wrong way. I am absolutely not intending to mock or spar with Elon Musk or Prof. Hawking, for whom I have profound respect. However, the fact is that the bizarre and pseudo-apocalyptic press AI and machine learning are getting can only be compared to the sensational and completely off-the-mark articles about cancer, which is portrayed as a mythical and vicious creature (almost a spawn of Beelzebub himself) looking to annihilate humankind.
What can we do to improve the ongoing AI / ML research?
Read, read and once again read. If you think you have read enough, write, and after that read again.
One of my academic teachers, Prof. Ben Feringa of the University of Groningen (who eventually received the Nobel Prize in 2016 for his work on molecular machines), told me and my fellow Groningen geeks that you have to be (and I quote) "cautiously optimistic in your research". Cautious optimism and stringent scientific reporting in the machine learning and AI fields will make AI-driven automation easier to assess, implement and regulate. Eventually, society and the press will see that AI / ML won't replace jobs completely, but augment them, boosting productivity and extending well-deserved lunch breaks. Moreover, stringent and scientifically objective reporting on how machine learning methods work, and are trained to work, should eventually pave the way for more effective legislative routes.
My rant stops here. I strongly recommend the articles below this paragraph, which touch on the issues with AI research and the regulatory aspects of applied machine learning.
Please have your say in the comments section. If applied AI/ML is to advance, everybody interested needs to join the conversation. I absolutely don't believe I am the only person who has issues with the way AI/ML papers are written and deposited in pre-print repositories.