“It would be pretty damn maddening if it turns out programmers are easier to automate than lawyers.” -Professor Alejandro Piad Morffis.
While the advantages of large language and generative AI models such as ChatGPT, Microsoft Bing, Google Bard, and Stable Diffusion cannot be refuted, their growing adoption has led to an exaggerated and harrowing, but not baseless, fear among members of the public that these models could jeopardize job security for millions of workers worldwide.
As described above, the threat AI poses to human jobs, while exaggerated and harrowing, isn't baseless.
The ability of AI to perform repetitive tasks, process large amounts of information, and mimic human-like decision-making makes it a very good tool to enhance creativity, productivity, and efficiency.
To answer the question, “Will AI take our jobs?” I have enlisted the help of an expert: Professor Alejandro Piad Morffis.
The questions will be prefixed with the letter “Q” and the answers with the letter “A”. I hope to cover both technical and philosophical questions, as Professor Morffis also has an affinity for the philosophical.
It is also important to note that, to aid understanding, I will provide links to some of the more complex concepts.
Let us begin!
A: My name is Alejandro Piad. I majored in Computer Science at the School of Math and Computer Science at the University of Havana, Cuba. I earned a Master's in Computer Science at the same school in 2016, and in 2021 I completed a double doctorate: a Ph.D. in Computer Science at the University of Alicante and a Ph.D. in Math at the University of Havana.
My Ph.D. was in knowledge discovery from natural language, specifically focused on entity and relation extraction from medical text.
Since grad school, I've been teaching at the University of Havana. I've been the main lecturer in Programming, Compilers, and Algorithm Design, and an occasional lecturer in Machine Learning and other subjects.
Since 2022, I've been a full-time professor there. I was also one of the founders of the new Data Science degree program, the first of its kind in Cuba, and I wrote its entire Programming and Computing Systems curriculum. I keep doing research in NLP, right now focusing on neuro-symbolic approaches to knowledge discovery, mixing LLMs with symbolic systems.
A: I played with AI for games as an undergrad student, and did a couple of student projects with computer vision and metaheuristics. After graduating, I started my master’s in Computer Graphics, but as a side project, I did some research in NLP, specifically on sentiment analysis on Twitter.
After finishing the master's, I started thinking about doing a Ph.D. and went all in on machine learning.
So you could say around 10 years since I started taking AI seriously. My oldest paper related to this stuff is around 2012.
A: Well, it was always cool, just not outside academia. I'd say, the intersection of two
A: I don't know if any industry will be replaced entirely, but I'm sure there will be massive changes. In the long term, of course, no one can say anything. But in the short and mid-term (5-10 years), with what we're seeing with language models, my bet is that anyone whose job is predicated on the shallow filtering and processing of natural language will have some reckoning to do.
This includes all sorts of managerial roles, including anyone whose job is to read emails, summarize, and build reports. Any kind of secretary who doesn't go beyond note-taking and task scheduling. Copywriters who work with templated content.
Basically, any content creation task below the level of actual human creativity will be cheaper to automate than paying a human.
Not because the model will give them the final quality they aim for, but because the model will give them 90% of the quality, and then the real human creativity comes as a cherry on top and adds the final 10%. Education has to change considerably, too. We can talk more about that if you want.
A: Yeah, academia will adapt. It is the longest-living institution in Western civilization. It predates all our mainstream religions, and it has survived all major civilization changes. It will change substantially, as it has changed across the ages.
A: All technology has potential issues, and the more advanced the tech the more pressing it is to consider them. AI is a very powerful technology with the potential to disrupt all our economic relationships.
It is something at the level of an industrial revolution, so it will have massive implications, and so the concern must be at the same level.
One thing that is different from previous disruptive tech is that, for the most part, new tech automates the jobs that require the least cognitive skill; it happened with agriculture, manufacturing, mining, etc.
However, this time, we are on the brink of replacing a large number of white-collar jobs while leaving many blue-collar jobs undisrupted. So we will have lots of people who are used to working in offices finding that an AI can do their job as well (or maybe slightly better) and much cheaper, so they will either have to upgrade their skills significantly or turn to less skilled jobs.
There are other ethical considerations, there is a lot of potential for misuse of AI technologies for misinformation, fake news, social disruption, etc. I don't think we are prepared for a massive number of human-like chatbots taking over Twitter; it is already starting to happen.
There are also bias issues. As these systems become more and more pervasive, the harms can fall disproportionately on minorities, so not everyone will reap the benefits of AI to the same degree; some minorities will bear the downsides more strongly than others.
A: Yeah, especially those jobs. It will automate more white-collar jobs than blue-collar jobs, at least in the near term. That's something new, and society isn't used to having to deal with that kind of job disruption.
These are folks who went to college and were more or less convinced their jobs were safe, or at least safer than those of taxi drivers, pizza delivery boys, gardeners, you name it.
A: In the very long term, all jobs will evolve in unpredictable ways, including software engineering and development. AI and technological advancements will transform these professions to the point where they may seem to have disappeared.
However, in the short to midterm, a decrease in software engineers is unlikely due to the increasing demand for software across various industries. This growing need for skilled professionals far surpasses the current number of trained individuals capable of building software.
The AI revolution will follow a similar pattern as previous technological breakthroughs in computer science such as compilers, integrated development environments, cloud computing, containers, code completion, and IntelliSense.
These innovations made programming more accessible for those without highly formal backgrounds and expanded opportunities for developers.
Over the next 20 years, we can expect an explosion of people entering the field of software development. Although job roles may change somewhat with evolving technology trends, there will likely be continued growth for those interested in learning how to program and write code.
A: Look at the numbers. All I'm seeing are more job ads for software developers. The trend is still climbing.
A: First, I have no idea how you would wrap your head around what a 30% chance of automating 50% of jobs even looks like. Is it a 15% expectation of losing your job?
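For readers who want to see that arithmetic spelled out, here is a toy sketch. It simply multiplies the two figures, which assumes (purely for illustration) that the scenario probability and the share of jobs affected combine independently:

```python
# Toy back-of-the-envelope reading of "30% chance of automating 50% of jobs":
# if the scenario occurs with probability 0.30, and in that scenario a given
# job is displaced with probability 0.50, the naive expected risk is the product.
p_scenario_occurs = 0.30   # assumed probability the automation scenario happens
p_job_displaced = 0.50     # assumed share of jobs displaced if it does

expected_risk = p_scenario_occurs * p_job_displaced
print(f"Naive expected job-loss probability: {expected_risk:.0%}")  # → 15%
```

As the interview suggests, this number is hard to interpret in practice: real displacement is neither uniform across jobs nor independent of which scenario unfolds.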
A: Yeah but the thing is, many of the tasks developers spend most of their time on are pretty low value and we would be much better off if they were automated: debugging, writing tests, doing pesky code optimizations. As you automate all of that, we'll have more time for the really important parts of software development, which was never about writing code.
A: High level and architecture design, user experience, human-computer interaction, and that's just about the software itself. Software engineering is really about the relationship between software and people, both people that make software, and people that use software. So software skills are only half of the story. Understanding your users and colleagues is the other half.
A: Very hard to say, of course. We're in the middle of an industrial revolution at least as big as the microprocessor revolution or the internet revolution; no one in 1960 could imagine what 1980 would look like.
Society is never ready for change, by definition. That's what a system is, something that strives to maintain its status quo. But humans are the most adaptable social species out there, so I think we'll manage. Lots of people will suffer, and that's something we have to work on, definitely, but nothing apocalyptic in my opinion will happen.
A: I still haven't seen any really compelling arguments for the doomsday scenario. Lots of the arguments seem to be predicated on reasoning like "we don't know how this is going to evolve so it will probably kill us all" and that's a classic logical fallacy: you're basically making an inference from lack of knowledge.
A: I think we will solve it, at least well enough to avoid apocalyptic scenarios. The most severe alignment issues require you to believe in a powerful version of the
A: I think that's only natural, as we automate more and more of the menial cognitive work (e.g., summarizing documents or finding relevant references) we humans will get to work on the most creative parts of our jobs. Some jobs have very little of that to begin with, and there I see a challenge because maybe those will be completely or almost completely automated away.
But most knowledge work has a creative side, the part where you actually do something novel.
As to which fields are ripe for this, I can't talk about much else but in education at least I think we're bound for a long-needed revolution.
We professors no longer need to be gatekeepers of information. Instead of spending most of our time grading the same essays over and over, we can now focus on giving the best possible personal feedback to each student.
A: There are a few easy ways and then some not so easy. First is just a matter of increasing access to knowledge. Now, for almost anything you want to learn, you can find relevant information on the internet, at least to begin with, but it is often scattered across many sources with disparate levels of detail, contradictory claims, different linguistic styles, etc.
The first relatively easy application is just: here, take this bunch of sources on some topic and give me a high-level overview of the main takeaways, with links to dive deeper, etc. We are pretty close to that (barring the hallucinations, which are a significant problem).
Another way is by simply freeing educators from menial tasks to give them more time to focus on creating learning experiences. But by far the most important thing I believe is the potential for personalized learning.
You could have an AI assistant and tell it "I want to learn how to make a rocket" and it could create a very detailed plan, especially for you, based on what it already knows that you know, it would tell you, here, first watch this video, now take this short course, now read this chapter of this book, ... And guide you for 3 months to learn something very specific.
A: Yeah definitely, machine learning is by definition trained on the majority, so it will always hurt the most those whose use case doesn't fit the majority for any reason.
In particular, whenever you train models to predict human behavior or interact with humans, it tends to work better for the subpopulations that are best represented in the data.
What can you do? Start by raising awareness of these issues and make sure to thoroughly test your models for bias. Be very careful about how you collect data: don't go the easy way and crawl the web; make an effort to find high-quality, high-diversity sources of data.
But more than anything include diverse people with diverse points of view in your team. You can't solve a problem you can't see.
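The bias testing mentioned above often starts with something very simple: disaggregating a model's evaluation metric by subgroup instead of reporting one overall number. Here is a minimal sketch of that idea, using entirely hypothetical group labels and predictions:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# In practice these would come from a held-out test set annotated with
# the subpopulation each example belongs to.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

# Per-group accuracy: a large gap between groups signals potential bias.
accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)

# One simple disparity metric: the accuracy gap between the best- and
# worst-served groups.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"accuracy gap: {gap:.2f}")
```

On this toy data the model serves "group_a" noticeably better than "group_b", which is exactly the pattern the interview describes: models tend to work best for the subpopulations best represented in the training data.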
A: I'm hoping the open-source community will make the tools available to all. We already have seen what having access to a free operating system, a free office suite, a free game engine, a free code editor, etc., does for the creative kids of the poorer parts of the world.
I trust we will have open-source AI tools as good as commercial ones, the same way we have open-source dev tools as good as commercial ones.
A: If you are already studying computer science, the basic advice is to focus on fundamentals, not just tools. Tools will change but the fundamentals will remain relevant for a long time. If studying something else, learn how AI can improve your productivity, and learn a lot about its limitations. Use it to make your own work better.
A: The AI revolution is here. We can all be a part of it by learning to use this technology for good and for improving everyone's lives.