Code tests filter for a specific type of candidate: those with encyclopedic knowledge of syntax, algorithms and computer science theory, or simply those who are good at completing logic puzzles under time pressure.
Examined critically, preliminary code tests and whiteboard coding rarely assess real-world problem-solving, collaboration or system integration. Yet they can dictate who gets hired.
The use case for the kind of programmer these tests identify has been shrinking for some time. The skills that define 'great' programmers today extend far beyond rote memorisation. Applying context, bringing unique perspectives, understanding the bigger picture and integrating components effectively are now far more valuable than simply grinding out code.
The emergence of AI interview assistants underscores this issue nicely. See LeetCode Wizard, ParakeetAI and Ultracode.
These tools can help candidates solve complex coding problems in real time, sometimes with near-perfect results, effectively automating the very skill set that coding interviews have historically prized.
If AI can pass these tests, then what exactly are we evaluating in human candidates?
The ability to solve isolated code problems is becoming a cheap trick
In an era where AI can handle syntax and debugging, the value of a developer increasingly lies in the ability to think critically, adapt and apply technology in meaningful ways.
In recent history, tech has been exclusionary, in both perception and practice. From the perception of computers, to education, to surviving the culture, the field has long been built around a narrow definition of who belongs.
Code tests, taken alone, are likely to be exclusionary. While there is limited research into this specific question, my experience as a hiring manager in tech is that code tests tend to favour those with a traditional computer science background. Therein lies the bias: in America, the majority of university faculty are white and male, which almost certainly contributes to the fact that only 18% of CS graduates are women.
We also know that in the UK, 1 in 7 people are considered neurodivergent. Neurodiversity is a wide spectrum including dyslexia, ADHD and autism, and at its core is a difference in how the brain thinks. Assessing many neurodivergent people via a code test alone would do them a great injustice. Many struggle with time-constrained, high-pressure coding tests even when they possess strong real-world programming skills.
This is all backed up by the finding that anonymised code tests do not remove bias. While AI marking of code tests could, in theory, be tweaked to eliminate bias, this would do nothing to address the systemic barriers that dictate who excels at this type of test in the first place.
Interestingly, these new AI assistants could level the playing field.
Candidates who might previously have been excluded for failing an isolated code test may now have a better shot.
While we are all taught that cheating is wrong, do you really expect people who are disproportionately discriminated against in the hiring process not to use these tools to their advantage? Anyone who has spent any length of time in industry will have witnessed the kind of behaviour often required to rise to the top of the tree, and will know that cheating in a code test doesn't come close to that in terms of morality.
None of this means that understanding code is becoming obsolete. Understanding algorithms, data structures, complex code and system architecture will remain crucial for making informed decisions. But people can have these abilities and still fail a code test. A classic example is Max Howell, creator of the beloved dev tool Homebrew, who in 2015 tweeted:
Google: 90% of our engineers use the software you wrote (Homebrew), but you can’t invert a binary tree on a whiteboard so f* off
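For what it's worth, the puzzle in question is only a few lines. A minimal sketch in Python (an illustration, not Howell's or Google's code) shows how little the exercise tells you about building something like Homebrew:

```python
# Inverting (mirroring) a binary tree: the task from the anecdote above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def invert(node: Optional[Node]) -> Optional[Node]:
    """Recursively swap each node's left and right children."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

# Mirror a three-node tree: children 1 and 3 swap places.
root = Node(2, Node(1), Node(3))
invert(root)
print(root.left.value, root.value, root.right.value)  # 3 2 1
```

It is a neat little recursion, and it measures almost nothing about the contextual, integrative skills discussed above.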
An experienced hiring manager with relevant technical experience should be able to gauge somebody's ability through less formal interviews, a walkthrough of similar code the candidate has written, and discussion of products they have worked on.
Further steps to assess a candidate could include:
In all of these cases, though, you would still need to be mindful of how each approach could discriminate against different subsets of the population.
While we should avoid over-reliance on AI, we are already seeing that encyclopedic knowledge is no longer the primary currency. Instead, the ability to leverage AI, connect ideas and solve problems in a nuanced, meaningful way is taking centre stage. Code tests alone will not help you assess this.
As large language models continue to advance, we will witness a fundamental shift in the profile of the “typical coder”. AI will handle the grunt work, while human developers focus on the strategy, creativity and execution; the elements that drive true innovation.
If the core skill set of a software developer moves toward applied knowledge rather than recall and a traditional CS background, we could see a broader, more diverse group of technologists entering the industry and shaping the future.
By breaking away from rigid, exclusionary hiring practices, we stand a greater chance of making software that works for everyone, not just the few.