A picture showing a nurse handing a scalpel to the doctor, from the point of view of the patient lying on the operating table. Both are looking at the scalpel, worried.
There's a serious problem in this industry.
When we try to do something new, it's ok to struggle, even if it's inside an industry we're familiar with.
That's expected.
There's always a ramp-up for every new complex ability, be it a technology or a new concept.
Instead of embracing new concepts and programming languages, though, we tend to blame the tools or the syntax. We say it was the language or the framework that created the messy code, or that the concept didn't work and failed to meet expectations.
When struggling with something, we tend to blame the tools or the syntax.
Sorry. The problem is not the tool.
Most of the time the language didn't change and the developer is the one who evolved, or the concept was applied incorrectly and never got the chance to show its benefits in practice.
Let's imagine a doctor who performed a heart surgery that went wrong. The doctor can't complain that it went wrong because of an issue with the scalpel. Even if there was a problem with the scalpel and the patient died because of it, the humans are the ones held accountable, because they are responsible for choosing what they use, not the tool. Why should programming be different?
The graph below is an excerpt from the book The 4-Hour Chef:
The bipolar learning graph for a language learning process, from the book The 4-Hour Chef by Timothy Ferriss. The author plots the level of "confidence in conversational speed" (vertical axis) against the time in months a person might take to achieve it (horizontal axis). The line goes up immediately in the first month, representing the first 40 sentences, where there's a high level of confidence. From month 1 to 2 there's a huge decline in confidence, which is when the real improvement begins: the person starts using native materials instead of textbooks and hits the lowest level of confidence. Between months 2 and 3, confidence increases again but still doesn't reach the initial bump; this is when the person starts incorporating more complex grammar. From months 3 to 6 there's a plateau where nothing seems to change, and after that confidence starts to go up consistently and fast, at what the author calls the "Inflection point". Around month 8, the level of confidence is higher than the initial bump of the first month and the person reaches what the author calls "Fluency".
In the original book, the author drew the bipolar learning curve over a time span of months, but the same curve can be applied at different scales.
The journey of learning as a developer might be different for each person. It's very hard to test empirically how it works, given the huge number of variables involved. In my case, I started programming with absolutely no background, and that experience is worth telling.
Let's assume a developer with no background whatsoever who starts programming out of curiosity. The speed can vary according to the place the developer lives and other aspects of their personal life, but for the purpose of this post, let's use a time scale of years and call each stage a "milestone".
In year 1, everything is new. The developer doesn’t understand exactly why things work and they don’t really care that much. They only know that if they follow a step-by-step tutorial they can get something working. The idea is to Google everything and to try to "fit" the code into the existing project.
The year 1 milestone relies heavily on copy/paste and Cargo Cult Programming. The occurrence of bugs is high, the code quickly becomes unmaintainable, and performance goes down as soon as a custom edge case shows up for which there's no online recipe to follow.
In year 1, the developer copy/pastes code and doesn't really care about what they're doing. They just want something done.
In year 2, the developer understands the syntax and what most of the copy/pasted code does. There's still some magic happening and some Cargo Cult, albeit at a very different level than year 1.
The occurrence of bugs is still high; the developer starts to spot bad code from one year ago, only to realize they were the one who wrote it (which is a positive signal, since it means they learned something); and performance still goes down as complexity keeps increasing, albeit at a lower rate than before.
In year 2, the developer understands most of the syntax and starts caring more about maintainability
From years 3 to 5, the developer understands the point of the underlying primitive data structures. If they use Java, for example, they start understanding the purpose of Collection, List, and Array, but they start using those structures for everything. When they learn a new concept or technique, they also try to apply it everywhere.
This is the phase of "when you have a hammer, everything looks like a nail".
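To make that phase concrete, here's a small hypothetical Java sketch (the class and method names are made up): a name-to-email lookup gets crammed into a List because List is the familiar hammer, when a Map expresses the intent directly:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class UserLookup {

    // "Hammer" version: everything goes into a List, so every lookup
    // becomes a linear scan over name/email pairs.
    private final List<String[]> usersAsPairs = new ArrayList<>();

    void addWithList(String name, String email) {
        usersAsPairs.add(new String[] { name, email });
    }

    String findEmailWithList(String name) {
        for (String[] pair : usersAsPairs) {
            if (pair[0].equals(name)) {
                return pair[1];
            }
        }
        return null;
    }

    // The same intent expressed with the structure built for it:
    // a key/value lookup.
    private final Map<String, String> usersByName = new HashMap<>();

    void addWithMap(String name, String email) {
        usersByName.put(name, email);
    }

    String findEmailWithMap(String name) {
        return usersByName.get(name);
    }
}
```

Both versions "work", which is exactly why the developer at this stage doesn't notice the problem until the data grows or the intent needs to change.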
There's not much copy/pasted code anymore, but Cargo Cult still happens when applying a concept (such as Clean Code or SOLID). Most of the time the concept is applied in the wrong way, like applying Test-First instead of TDD.
They might understand the need for testing the code but don't necessarily apply it or understand the "why". They try to do it but find it extremely counterproductive. They eventually decide to write small tests that cover only a small portion of the application, because if they try to cover more than that, they find themselves trapped in a pile of unmaintainable test code full of false positives, false negatives and side effects.
From years 3 to 5, the developer starts to test the code and they learn there are concepts which allow complexity to be partly tamed
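Here's a hypothetical sketch of the false signals mentioned above (assuming JUnit 5; the Report class is invented for illustration): one test is coupled to the exact wording of the output and breaks on harmless refactors, while the other passes no matter what the code does:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;

class ReportTest {

    // A tiny class under test, defined here only for the sake of the example.
    static class Report {
        String render() {
            return "=== Sales Report v1 ===\nTotal: 0";
        }
    }

    // Brittle: coupled to the exact wording of the header, so any harmless
    // rewording of the output breaks it (a false negative).
    @Test
    void printsHeaderExactly() {
        assertEquals("=== Sales Report v1 ===", new Report().render().split("\n")[0]);
    }

    // Vacuous: passes no matter what render() returns (a false positive).
    @Test
    void rendersSomething() {
        assertNotNull(new Report().render());
    }
}
```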
From years 5 to 7, the developer starts to better understand the point of the fundamentals. They also start applying some concepts correctly, like TDD instead of Test-First. The syntax is not an issue anymore; they've reached enough fluency in their favorite languages and can learn a new one quickly. Cargo Cult and copy/paste rarely happen.
They can create big test suites without letting them become too complex, although they might still have trouble making them extensible and easy to change. Mocks and stubs are still used excessively and in the wrong scenarios. Coverage is not 100%.
From years 5 to 7, the developer starts applying things with some judgment, syntax is not an issue anymore and they can build test coverage without going down the rabbit hole
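A hypothetical example of the mocks-and-stubs problem, assuming Mockito and JUnit 5 are on the classpath (all names are made up): a cheap, deterministic value object gets mocked, so the test mostly verifies the mock rather than the code under test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Tiny domain types, defined here only for the example.
    interface Item {
        int cents();
    }

    static class FixedItem implements Item {
        private final int cents;
        FixedItem(int cents) { this.cents = cents; }
        public int cents() { return cents; }
    }

    static class PriceCalculator {
        int total(Item... items) {
            int sum = 0;
            for (Item item : items) sum += item.cents();
            return sum;
        }
    }

    // Over-mocked: the assertion mostly verifies that the mock returns
    // what we just told it to return.
    @Test
    void totalWithMockedItems() {
        Item mocked = mock(Item.class);
        when(mocked.cents()).thenReturn(250);
        assertEquals(250, new PriceCalculator().total(mocked));
    }

    // The value object is cheap and deterministic: just use the real one.
    @Test
    void totalWithRealItems() {
        assertEquals(400, new PriceCalculator().total(new FixedItem(250), new FixedItem(150)));
    }
}
```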
From year 7 onwards, the developer starts to better understand the fundamentals of language-agnostic concepts. In my case, and for other developers I know, it was Event-Sourcing and the architectural ideas of Event-Driven Design, Domain-Driven Design, CQRS and (Roy Fielding's) REST. They might have discovered the agile principles by themselves or learned them over time, and they start studying other areas like Math, Physics, Philosophy and Psychology in order to innovate on top of software development.
They also start learning other programming paradigms and measuring the tradeoffs of each technology or concept for the task at hand. Nothing is a silver bullet anymore and they understand discussions are necessary to reach the right answer.
The "year 7 and onwards" milestone contains the aspects most big companies look for in an experienced software developer
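To make one of those concepts a bit more tangible, here's a minimal, illustrative sketch of Event-Sourcing (assuming a recent Java version; all names are invented): the current state is never stored directly but is rebuilt by replaying the events that happened to the account:

```java
import java.util.ArrayList;
import java.util.List;

class EventSourcedAccount {

    // Events are plain facts about what happened, never mutated after the fact.
    sealed interface Event permits Deposited, Withdrawn {}
    record Deposited(int cents) implements Event {}
    record Withdrawn(int cents) implements Event {}

    private final List<Event> history = new ArrayList<>();

    // Commands validate against the current state and append new events.
    void deposit(int cents) {
        history.add(new Deposited(cents));
    }

    void withdraw(int cents) {
        if (cents > balance()) throw new IllegalStateException("insufficient funds");
        history.add(new Withdrawn(cents));
    }

    // State is derived by replaying the event history from the beginning.
    int balance() {
        int balance = 0;
        for (Event event : history) {
            if (event instanceof Deposited d) balance += d.cents();
            else if (event instanceof Withdrawn w) balance -= w.cents();
        }
        return balance;
    }

    public static void main(String[] args) {
        EventSourcedAccount account = new EventSourcedAccount();
        account.deposit(1000);
        account.withdraw(300);
        System.out.println(account.balance()); // prints 700
    }
}
```

The point is not the code itself but the shift in thinking: the history of facts becomes the source of truth, and the current state is just a projection of it.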
The journey sounds really slow. And it should!
There are still many people who believe any boot camp or online course can give them instant programming skills. But a programming career doesn't work like that.
In the case of millennials, Simon Sinek points out they are used to a mindset of instant gratification because of what the digital era offers: instead of waiting for a TV show, it's just a matter of downloading it; instead of waiting to buy something, just order it on Amazon and it arrives the next day; and instead of learning how to ask someone out on a date, swipe right, and that's it!
This might encourage the mindset that everything is easy and increase the rate of depression once people find out that reality is not.
A video of Simon Sinek talking about millennials in the workplace. The video is linked at the part where he talks about the instant gratification provided by technology. I recommend watching the whole video, though; it's very enlightening.
Now you might think 7–10 "real years" of experience is enough to reach the last stage, the one required to work at most big tech companies.
Unfortunately, "years of experience" doesn't matter. It's not a reliable, measurable unit of skill, and I used it here just as an abstract measurement.
If we observe the bipolar learning curve described above, it's totally possible for someone to get stuck in the first milestone, with high confidence and low ability, for a long time. It's just a matter of doing the same thing every day and never evolving or learning new things (if I'm confident I can do it, why should I change?).
There's no value in having 20 years of experience that are really the same year repeated 20 times!
Many of those who cross the first milestone will see their confidence going down as quickly as it was built up. Some will develop Impostor Syndrome and some will just give up.
I believe that's also why we see an immense dropout rate in some universities. The progress is too slow compared to the instant gratification of the digital era, which creates the illusion that programming is unattainable.
One can argue most developers in the industry today have less than 6 years of experience or have not reached the equivalent of the "year 7 and onwards" milestone. What this means is that there are millions of developers who still don't understand the fundamentals of software engineering necessary to create robust, stable and maintainable applications.
Most developers in the industry haven't reached the equivalent of the "7 years of experience" milestone
These are the developers writing the firmware for your car; the navigation system of the plane you're traveling in; the software your bank uses to give you access to your account; the software of the robot that is performing your surgery; and the system your company uses to manage its customers.
Instead of encouraging more developers to join the industry, we need to start educating the existing ones about the importance of writing quality software so that they can act as useful mentors.
There are 5 things we can do as a community:
Fundamental knowledge is not about reinventing everything (like using only pure JavaScript instead of Angular, for example). Fundamental knowledge is understanding why the language has the features it has, why the framework exists, and when it's a good or a bad practice to use either one of them!
It's the skill necessary to be able to implement them yourself if you must.
Greg Young, in his talk entitled "8 Lines of Code", tells us what simplicity really means. He uses 8 lines of code to conclude that "if you want to use a tool or framework to solve a problem, maybe what you need is to change the problem". Learning fundamentals allows us not to rely on "magic" and to think about the problem in a fundamental way instead of outsourcing it.
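As a hypothetical illustration of that mindset (all names invented): instead of reaching for a messaging framework to notify a handful of listeners, a developer who understands the fundamentals can see the problem is just "call these functions when something happens" and write it in a few lines:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal publish/subscribe mechanism: often all that is really needed
// when the first instinct is to reach for a messaging framework.
class TinyBus<T> {
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    void subscribe(Consumer<T> subscriber) {
        subscribers.add(subscriber);
    }

    void publish(T message) {
        for (Consumer<T> subscriber : subscribers) {
            subscriber.accept(message);
        }
    }

    public static void main(String[] args) {
        TinyBus<String> bus = new TinyBus<>();
        bus.subscribe(msg -> System.out.println("audit: " + msg));
        bus.subscribe(msg -> System.out.println("email: " + msg));
        bus.publish("order-created");
    }
}
```

Knowing the fundamentals doesn't mean always writing this yourself; it means knowing what the framework would be doing for you, so choosing it becomes a conscious tradeoff rather than a reflex.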
Fundamental knowledge is not about reinventing, but about better understanding what's already there and how it works
I am trying to do something to change this by writing about fundamentals as much as possible, for free. That's one of the reasons I started blogging instead of letting myself become an Angry Programmer because of the state of this industry.
I hope the industry starts doing the same: building on top of things that are valuable for a professional, not things that will make you obsolete in less than one year.
This way we can build society's trust and be seen the way the doctor is: a professional who is accountable and respected for their decisions, instead of one who is forced to kill the patient and blame the scalpel.
Don't blame the scalpel.
Thanks for reading. If you have some feedback, reach out to me on Twitter, Facebook or GitHub.