the developer philosopher
© http://feaforall.com/wp-content/uploads/2013/04/1.jpg

You can read a lot of things about the evolution of computer science, even though it is a relatively young science. The most controversial topics tend to be about general paradigms of thinking: object-oriented vs. functional programming, declarative vs. imperative programming, RISC vs. CISC, SQL vs. NoSQL, etc. Of the classic debates, most are settled or irrelevant. However, in this brief article I would like to show that the main trend in computer science is a transition from static to dynamic things.

WTF are you talking about?

In general, dynamic means capable of change, while static means stationary or fixed. Here are a few examples from computer science showing that many things used to be far less dynamic than they are today.

A punched card, i.e. a “static” program

On ENIAC, one of the first large-scale electronic computers, operations were performed by reading instructions encoded as a pattern of holes punched into a paper card. The technique was inspired by the wooden punched cards used to automate fabric weaving. You could not update your program on the fly if you found a bug or wanted to optimize it.

Static Web 1.0 vs. Dynamic Web 2.0
© https://www.pinterest.fr/pin/437834394993131730/

According to Tim Berners-Lee himself, Web 1.0 could be considered the “read-only web”. In other words, the early web only allowed users to search for information and read it. The lack of active interaction led to the birth of Web 2.0, the “read-write” web. Now even a non-technical user can contribute content and interact with other web users.

© https://www.talend.com/blog/2017/06/26/what-everyone-should-know-about-machine-learning/

Ordinary algorithms take input and produce output based on hard-coded rules and parameters. Machine Learning (ML) algorithms take data to dynamically generate rules and dynamically adjust their parameters.
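To make the contrast concrete, here is a minimal sketch in Python. All names and numbers are purely illustrative: it compares a "static" classifier whose decision rule is hard-coded against a "dynamic" one whose rule is derived from training data.

```python
# Hypothetical illustration: a hard-coded rule vs. a rule generated from data.

def static_classifier(x):
    # The decision rule is fixed at write time; changing it means editing code.
    return "spam" if x > 10.0 else "ham"

def train_dynamic_classifier(samples):
    # The decision rule is generated from data: the threshold is the midpoint
    # between the average score of each class in the training set.
    spam_scores = [x for x, label in samples if label == "spam"]
    ham_scores = [x for x, label in samples if label == "ham"]
    threshold = (sum(spam_scores) / len(spam_scores) +
                 sum(ham_scores) / len(ham_scores)) / 2
    return lambda x: "spam" if x > threshold else "ham"

training_data = [(2.0, "ham"), (3.0, "ham"), (8.0, "spam"), (9.0, "spam")]
dynamic_classifier = train_dynamic_classifier(training_data)
print(static_classifier(6.0))   # the hard-coded rule says "ham"
print(dynamic_classifier(6.0))  # the learned threshold (5.5) says "spam"
```

Feed the trainer different data and the rule changes with it, no code edit required, which is exactly the static-to-dynamic shift described above.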
While ML (and more specifically Deep Learning) produces black boxes that are virtually impossible for humans to interpret, a rules-based system is easier to understand and will work correctly if you know in advance all the situations in which decisions can be made; its scope of application, however, is far less general.

For a long time, applications were tied to a single target machine. Plugin-based architectures are now common and allow features to be added on demand at run time. Cloud computing and SaaS gave users the ability to dynamically load, use and synchronize their applications on different devices. In addition, high availability brought users dynamic load balancing, data replication and resource scaling.

These are just a few examples, but if you think about it you will easily find more things that have moved from static to dynamic behavior.

Why it’s a trend that’s here to stay

Given the dynamic world we live in, especially where human behavior is involved, you might expect computer science products to embrace this paradigm more often than not. In a nutshell, the world around us is dynamic, so computer science is too, unless forced otherwise by external constraints like processing power. But there are other reasons why the dynamic trend is here to stay.

On the one hand, static things often require expertise to be changed, because their implementation details are not accessible to end users. Their information is intrinsically integrated, not external data that can be manipulated in a friendly way; they are like reflexes as opposed to mental models. Dynamic configuration provides end users with the autonomy to manually accommodate changing conditions in the environment.

On the other hand, dynamic models are designed to morph to automatically accommodate changing conditions in the environment. Indeed, most of the time, patterns change over time, and ideally models should self-adjust periodically.
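As a toy illustration of such periodic self-adjustment (a sketch under assumed data, not a production pipeline), here is a one-parameter Python "model" that refits itself on a sliding window of recent observations, so its estimate tracks a drifting signal:

```python
from collections import deque

# Hypothetical sketch: a running-mean "model" that periodically refits
# its single parameter on a sliding window of recent observations.

class SelfAdjustingMean:
    def __init__(self, window_size=50, refit_every=10):
        self.window = deque(maxlen=window_size)  # keeps only recent data
        self.refit_every = refit_every
        self.seen = 0
        self.estimate = 0.0

    def observe(self, x):
        self.window.append(x)
        self.seen += 1
        # Periodic self-adjustment: refit the parameter from recent data only.
        if self.seen % self.refit_every == 0:
            self.estimate = sum(self.window) / len(self.window)

model = SelfAdjustingMean(window_size=50, refit_every=10)
for x in [1.0] * 100:   # the signal is centered around 1.0 ...
    model.observe(x)
old_estimate = model.estimate
for x in [5.0] * 100:   # ... then drifts to 5.0
    model.observe(x)
print(old_estimate, model.estimate)  # prints 1.0 5.0: the estimate follows the drift
```

A static model fit once on the first hundred points would keep predicting 1.0 forever; forgetting old data and refitting is the simplest form of the self-adjustment discussed above.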
For instance, a weakness of most machine learning models today is their inability to adapt to change. When the target properties the model is trying to predict change over time in unforeseen ways, predictions become less accurate as time passes. As a consequence, major users of machine learning models like Google are currently trying to set up pipelines that reliably ingest training datasets and continuously generate models as output.

Last but not least, Moore’s law describes the exponential growth of processing power, so that you can do more at the same cost as time goes by. The static paradigm led the race for decades because computing resources were scarce, and tooling for software development was therefore limited, but that is no longer true. And software is eating the world; even hardware has become programmable. More and more major businesses and industries are being run on software and delivered as online services, from movies to agriculture to national defense.

The general static vs. dynamic trend also puts more specific trends in perspective. That’s why the web is eating everything: web sites have become dynamic in so many respects. That’s why JavaScript is eating web development: it allows all programming styles to be used in a dynamic fashion. That’s why AI is eating automation: processing pipelines are now dynamically adjusted. That’s why serverless will eat infrastructure: processing resources will be dynamically provisioned.

If you liked this article, hit the applause button below, share it with your audience, follow me on Medium, or read more for insights about the rise and next step of Artificial Intelligence, the goodness of unlearning, and why software development should help you in life.