In an era teeming with technological advancements, the fears surrounding artificial intelligence (AI) have become dramatized, to say the least. The widespread concern that AI will displace tech workers, among many others, seems to be the prevailing narrative. Do not let the media's obsession with doom and gloom get the best of you. Before I dive into why I stand firmly against this narrative, let's look at what tech workers are up against.
LLMs like ChatGPT are capable of some incredible things, including generating and summarizing text, suggesting recipes based on a set of ingredients, generating or troubleshooting code, creating comic strips, and more. Talk to programmers, and the majority will make bold claims that GPT-like models have boosted their productivity 10x. I am part of that majority, in complete awe of these models' capabilities.
On a recent project, I walked ChatGPT through a role-based prompt to get the model to output a nearly perfect sequence of Python code I needed to build a network chart for work. This was MUCH quicker than reading the Python package's documentation and scrolling through Stack Overflow. If most agree that LLMs increase coding efficiency, then surely we won't need as many tech workers, right?
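The article doesn't show the generated code, but a network chart of the kind described is typically a few lines with the networkx and matplotlib packages. This is a minimal sketch with made-up example data, not the actual code ChatGPT produced:

```python
# Illustrative sketch only: edge list and labels are invented for this example.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import networkx as nx

# Hypothetical relationships between departments
edges = [("Sales", "Finance"), ("Sales", "Marketing"),
         ("Finance", "Ops"), ("Marketing", "Ops")]

G = nx.Graph()
G.add_edges_from(edges)

pos = nx.spring_layout(G, seed=42)  # fixed seed for a reproducible layout
nx.draw(G, pos, with_labels=True, node_color="lightblue", node_size=1500)
plt.savefig("network_chart.png")
```

The point of the anecdote stands either way: a prompt that describes the desired chart can get you to roughly this code far faster than piecing it together from documentation.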
Jevons paradox is the efficiency dilemma in which improving the efficiency of a resource's use increases overall demand for it. It is often applied to energy, where more efficient usage leads to cheaper energy prices, which in turn increases demand. If programming becomes easier, salaries for some technical roles will likely decrease. With lower salaries, employers can afford to hire more technical workers to build and maintain digital systems. After all, the demand for building and maintaining digital systems is not going away.
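The paradox can be made concrete with a toy calculation. The numbers and elasticity value below are invented purely for illustration; the key assumption is that demand for programming work is elastic (elasticity greater than 1), so a drop in its effective cost raises consumption more than proportionally:

```python
# Toy illustration of Jevons paradox. All numbers are made up.
def quantity_demanded(base_demand, cost, elasticity):
    """Constant-elasticity demand curve: quantity scales as cost**(-elasticity)."""
    return base_demand * cost ** (-elasticity)

before = quantity_demanded(base_demand=100, cost=1.0, elasticity=1.5)
after = quantity_demanded(base_demand=100, cost=0.5, elasticity=1.5)  # cost halved

# Quantity demanded more than doubles when the cost halves...
assert after > 2 * before
# ...so total spending (quantity x cost) actually rises despite the cheaper price.
assert after * 0.5 > before * 1.0
```

Under this (assumed) elastic-demand model, cheaper programming means more total programming gets bought, which is exactly the article's argument applied to tech labor.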
For every digital system a company builds, they are increasing their reliance on technical skills and knowledge. While generating and troubleshooting code is getting easier, the complexity of organizations’ systems and processes is increasing exponentially. You have legacy systems that need to be maintained, online and offline dependencies, cloud vs. on-premises environments, different types of APIs depending on the use case, data lakes and data warehouses, real-time and batch data pipelines, security concerns everywhere, and more. It is no coincidence that technical job titles are commandeering “engineering” and “architecture” in their names. They are building and maintaining the digital version of a company’s architecture. These responsibilities span much wider than simply writing and maintaining code.
Technical roles are constantly changing as new technologies are developed. With new technologies and automation, coding is often abstracted away behind no/low-code interfaces. The breadth of technical subjects has created specializations within each domain and opened the door to "technical translators" between business and tech folks. Technical job titles and requirements keep evolving, and coding has become a smaller slice of the average tech worker's responsibilities; much of the required skill and knowledge now lies outside generating and maintaining code.
Almost every day, it seems like there is a new generative AI model, an interesting use case that someone thought of, or a new application developed on top of these models. The pace of AI advancement in this current environment is staggering. For those in tech, it is impossible to keep up with all of the interesting technologies being developed in this space. While the rapid advancement seems limitless, the historical perspective would say otherwise.
Advancements in artificial intelligence are known to go through cycles of progress and stagnation. Since 1950, there have been three periods of progress and two periods of stagnation.
The latest wave of progress was facilitated by the cloud and by improvements in chips (e.g., GPUs). While AI development seems unstoppable today, there is likely another AI winter in our future.
Large organizations are famously slow when it comes to tech development. A perfect example of how long it can take for large organizations to adopt new technologies is the cloud. Amazon first launched Amazon Web Services (AWS) in 2006, Google launched Google Cloud Platform (GCP) in 2008, and Microsoft launched Azure in 2010. Over a decade has passed since these cloud platforms were established, but “moving to the cloud” is still a high priority for many organizations. Many organizations still have some systems utilizing outdated technology, such as Microsoft Access. One thing we know for sure is that integrating LLMs into web applications within large organizations will not happen overnight.
LLMs are capable of some incredible applications; however, I'd wager that accuracy will always be a HUGE limitation because these models are a black box. Experts have some understanding of how LLMs work, but there is no straightforward, clear answer as to how and why they produce any given output. For example, if you upload a spreadsheet of data into an LLM, it could produce some impressive exploratory data analysis (e.g., summaries, conclusions, visualizations). However, those outputs might contain errors, faulty conclusions, and wrong assumptions. If the accuracy is questionable, then the next request will be for someone to validate the LLM's output. Who do you think will do that quality check? A tech professional.
Another limitation of LLMs is the person using them. The outputs from an LLM are only as good as the questions it is asked. If you are not a data professional, will you know which questions to ask and how to interpret the answers? I highly doubt it.
The usefulness of the LLM output is highly dependent on the user’s specialized knowledge in the topic.
ChatGPT has ushered in the next AI hype cycle with some impressive abilities that appear indistinguishable from magic. Venture capital and the media have picked up on this wave and will only accelerate its adoption. While the new AI developments are impressive, I would advise aspiring or current technical workers to embrace LLMs rather than fear them. They are more likely another tool in the toolbox for technical workers rather than a threat of obsolescence.
~ The Data Generalist
Image source: Stability AI