
Real-Life Stories About AI Taking Human Jobs

by M. Abimbola MosobalajeMarch 1st, 2023

Too Long; Didn't Read

Real-life testimonies and experiments prove that AI tools have been robbing humans of jobs. First, it was the reCAPTCHA images, letters, signs, and ticks to prove to machines (web hosts) that we are human. Then it was resume scanners like Resume Worded.

A Short Story

Me: "Hi, sugar baby."


Friend: "Hello."


Me: "How is it going with the targets we set for getting new clients?"


Friend: "I have been trying. Only for this AI thing... I have lost two opportunities to work already."


Me: "What do you mean, AI thing?"


Friend: "I mean, the AI tools that are used to test to see if the contents are genuine or not."


Me: "Oh, of course. So, how does that concern you? You are one of the finest and most badass writers I have had the privilege to work with over the years."


Friend: "So I thought, until some AI originality-checking tools said my work was too perfect to have been written by a human."

Intro

We were lied to when we were told that AI would not take away our jobs. Real-life testimonies and experiments prove that AI tools have been robbing humans of jobs, and they will continue to do so.


The chat above was a conversation between me and a friend who is also a colleague. She writes smartly and uses different AI tools to support her content creation process: checking for plagiarism, fixing complex grammatical issues, and making content readable.


However, she has never spun an article to rewrite it or cheat the writing process. In fact, she did not bother to sign up for or use ChatGPT because she considers it a professional cheat.


She lost the chance to land two potential clients because each client uploaded her content to an AI tool that allegedly judges whether content is AI-generated or human-written. Neither client read the entire document.


They simply let the AI do the job, and her work came back flagged as AI-generated.


Disappointed and confounded, she asked which tool they used and tried it herself. To her surprise, the result came back as about 95% AI-written. She tried another tool; it gave a lower score but still said the text was mostly AI-written.


While making adjustments and rechecking, she left a typographical error in the text, and the score came back as 75% human-written. When she spotted and fixed the error, the score went back to 95% AI.


The logic behind this particular tool turned out to be that human-written content contains flaws and errors. Wow!


So, what is the point of grammar and other writing aid tools? It looks like the developers never thought about that.
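To see how absurd that logic is, here is a toy Python sketch of a "detector" built on the same premise: the fewer typos, the more "AI" the text. This is purely my illustration of the flawed heuristic, not any vendor's actual code.

```python
# A toy caricature (my own illustration, not any real tool's code) of the
# backwards logic described above: treating the absence of typos as
# evidence that a machine wrote the text.
COMMON_TYPOS = {"teh", "recieve", "definately", "seperate", "occured"}

def toy_ai_score(text: str) -> float:
    """Return a fake 'AI-written' percentage: fewer typos => higher score."""
    words = text.lower().split()
    if not words:
        return 100.0
    typo_count = sum(1 for w in words if w.strip(".,!?") in COMMON_TYPOS)
    typo_rate = typo_count / len(words)
    # Flawless text scores 95% "AI"; each typo pushes the score toward "human".
    return max(5.0, 95.0 - typo_rate * 400)

clean = "I will receive the report and review it separately tomorrow."
flawed = "I will definately recieve the report and review it tomorrow."
print(toy_ai_score(clean))   # high "AI" score: the text is flawless
print(toy_ai_score(flawed))  # lower score: typos read as "human"
```

Under this premise, running your draft through a grammar checker literally makes you look less human, which is exactly the trap my friend fell into.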


First, it was the reCAPTCHA


I thought it was funny and ironic that machines now ask humans to confirm that they are humans. First, it was the reCAPTCHA images, letters, signs, and ticks to prove to machines (web hosts) that we are humans.

Resume Scanners/ATS


I decided to experiment with different resume-scanning tools (Applicant Tracking Systems, or ATS) to see how well my resume matched a particular job description.


First was Resume Worded. To be fair, the tool identified some genuine weaknesses in the uploaded resume. My problem, however, was that the AI could not pick up important sections of my resume, such as skills.


I labeled the section "Skills" and varied it with "Professional Skills" and "Skills (Professional)," but the tool couldn't recognize any of these labels as headers. The section's apparent absence made the resume score low.


Furthermore, it identified headers (job titles and roles) as bullet points explaining job descriptions. Hence, its recommendation to start each of these "bullet points" with an action verb was illogical. In case you are wondering whether I put a bullet before the companies where I worked and my roles there: no, I didn't.


There was also the issue of the brevity of points. I was to cut bullet points short, which is a relative judgment anyway. If you've ever written a resume, you know how difficult (almost impossible) it is to describe the actions, responsibilities, and measurable impact of a role in 15-20 words.


Ironically, the example the tool gave of how to start a bullet point with an action verb was longer than most of what I wrote.


Even after following all the recommendations, the resume remained stuck at around an 80% score with this particular tool, which considers only resumes scoring 90% or above ready for job applications.


I tried another ATS tool, whose only approach was to pick up keywords in the document, not sections. It failed to analyze words, sentences, and phrases in context, and it failed to recognize synonyms or alternative wording for the many dictionary words an applicant might use!


For instance, if the job description required a degree in a related field and the tool was programmed to scan for a "university degree," a candidate with a college degree in a related field could be disqualified.


However, if you mention that you worked as a teaching assistant at a university, even if the field is irrelevant, your resume will pass to the next review stage.
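A naive keyword scan of this kind can be caricatured in a few lines of Python. This is a hypothetical sketch of the behavior described above, not any real ATS's code; the keyword list and resume snippets are my own invented examples.

```python
# A minimal sketch (hypothetical, not any real ATS's code) of a naive
# keyword scan: exact substring matching, no synonyms, no context.
REQUIRED_KEYWORDS = ["university", "degree"]

def naive_ats_pass(resume_text: str) -> bool:
    """Pass the resume only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

college_grad = "College degree in software engineering, with five years of experience."
irrelevant = "Teaching assistant at a university; degree in an unrelated field."

print(naive_ats_pass(college_grad))  # False: says 'college', never 'university'
print(naive_ats_pass(irrelevant))    # True: passes on irrelevant mentions
```

The qualified candidate is rejected for using a synonym, while the irrelevant resume sails through on a coincidental word match.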


Taleo is one of the ATS tools used by top companies such as Toyota, Starbucks, and Amazon. It works on the principle of keyword scanning and keyword density (the frequency of keywords of interest).
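Keyword density itself is simple to compute: occurrences of a keyword relative to the total word count. The sketch below is my own illustration of that principle; Taleo's actual algorithm is not public.

```python
# A hedged sketch of keyword-density scoring: the frequency of a keyword
# of interest per 100 words of text. Illustrative only; real ATS
# algorithms such as Taleo's are proprietary.
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of `keyword` per 100 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return 100 * hits / len(words) if words else 0.0

resume = "Managed agile sprints. Coached agile teams. Agile delivery lead."
print(keyword_density(resume, "agile"))  # ~33.3 hits per 100 words
```

Notice what this metric rewards: repeating the keyword, not demonstrating the skill. That is exactly how applicants learn to beat the system rather than become better candidates.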

Grammar and Writing Aids

In the early days of Grammarly, if you submitted your work to a client, they would upload it to Grammarly before any personal review and tell you it had a low score (often less than 90). This is the same Grammarly that has trouble identifying which verb form should follow an uncountable noun.


The same Grammarly that flags popular words as inaccuracies and suggests that you delete or replace commonly used words (which are not necessarily wrong in context). Sometimes it has no alternative suggestion for the very word it flags as a "word choice" error.


The same AI tool that does not fully understand your writing tone or audience but flags passive voice as an error. On this point, you will agree that the parameters in the "Goals" section are very limited.


Grammarly marks every passive verb form as an “issue”, disregarding the tone or setting of your document.


Some clients even expect technical, industry-specific content to have the readability score of a primary school pupil, with the same tool as the judge. Why do they think so?


Because they assume that a higher readability score ALWAYS means content can be better understood, neglecting the target audience demographics.
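For context, many readability scores are variants of the Flesch Reading Ease formula, which rewards short sentences and short words regardless of who the audience is. Here is a rough Python sketch of that formula, using a crude vowel-group syllable heuristic of my own, so the scores are approximate.

```python
# A rough sketch of the Flesch Reading Ease formula, on which many
# readability tools are based. The syllable counter is a crude
# vowel-group heuristic, so scores are only approximate.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

simple = "The cat sat. The dog ran. We all smiled."
dense = ("Organizational interoperability necessitates comprehensive "
         "infrastructural harmonization across heterogeneous enterprises.")
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))  # True
```

The formula knows nothing about the reader. A precise technical sentence written for engineers tanks the score simply because its words are long, which is why "raise the readability score" is bad advice for niche content.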

Plagiarism Checkers:

Copyscape Pass!!?? 🙃


"Copyscape pass" was a popular requirement in many job postings on freelance platforms in those days. It didn't matter if the idea was popular, if you wrote it your way or in your style, or even if you had first-hand knowledge of the subject; you lost the job or the client if Copyscape flagged it!


1text.com may also be one of the most notorious plagiarism checkers on the market. This 'guy' flags strings of words and phrases as plagiarism, making it difficult to get an accurate score if you write technical articles about a specific industry or tool.


For instance, when writing about JavaScript as a programming language, you must use certain terms. 1text.com does not care whether your style, grammar, or context differs from other sources on the internet, nor does it care about your presentation.


As long as there are similar word strings in one or two sentences, you are a culprit, a plagiarist!
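The string-matching approach described above boils down to comparing sliding word n-grams between two texts. A minimal, purely illustrative Python sketch (the example sentences are my own, and no real checker's code is implied):

```python
# A small sketch (my illustration) of string-matching "plagiarism"
# detection: sliding word n-grams, with no attention to context,
# meaning, or common technical phrasing.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

source = "JavaScript is a programming language used to build interactive web pages."
article = "Like many tools, JavaScript is a programming language used to build modern apps."
print(overlap_score(article, source))  # nonzero: a shared stock phrase gets flagged
```

"JavaScript is a programming language" is an unavoidable phrase in any article on the subject, yet under this scheme it counts as copied text.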


RIP to the jobs AI made humans lose!

The Sun Never Rose on Some Jobs

So, our jobs are being snatched from us not because AI can replace our many abilities as individuals, but because it keeps standing in the way of opportunities our skill sets could have earned us. The jobs are stolen right from our hands before they even land.


Yet, the AI tools themselves cannot do those tasks as well as we can.


No two AI tools are exactly the same, yet we are left fighting for slots against tools with different algorithms. It comes down to luck whether an applicant's resume happens to play right into the particular tool a recruiter is using.


Sadly, the executives and clients who hire these professionals do not know this. Typically, these are business executives who barely have enough time to perform their essential functions yet believe AI is perfect and makes the best decisions.


So, as a content writer, I submit an article to a SaaS company executive, and he does not read it but uploads it to a plagiarism checker to see whether it is original. Unfortunately, he does not even know the criteria that make an AI tool reliable.


For many digital content creators, AI can be a raw deal in this regard.


These tools are built with algorithms that capitalize on human weaknesses as though there are no other tools to help humans overcome their oversights and weaknesses.


It becomes apparent that the level of creative thinking of an application development team determines how the application will function. The limit of the developer's wisdom is the application’s limit.


Some deserving candidates will not advance in the industry unless humans stop jeopardizing human efforts and use meaningful algorithms and datasets to instruct their AI models. It is not AI vs. humans; it is humans vs. humans with AI being the weapon.


A person developing an applicant tracking system for employers has not done well if all they do is sit with a team of top HR executives to list desired qualities. They must also speak with employees and job seekers.


Otherwise, those who get hired will be those who can beat the system, not necessarily the best hands. Most of the software that undermines human effort only wants to sell you more "value" in the name of a "premium" package.


Don't tell me, "those who beat the system are the best hands." Oh, please; don't get me started!

I have seen terrible product descriptions on Amazon and seen wrong information or inconsistent styles on Google's first-page results.


I have seen spelling errors and grammatical blunders on some of our most respected information outlets on the internet.


However, in a capitalist-tech world, managers and senior-level executives do not explore all the possibilities of a tool before adopting it.


As long as a tool has raised huge funding and is backed by prominent tech names, it is assumed to be an authoritative application.

The Future

I believe artificial intelligence has no future as an independent entity. The greatest potential of AI tools will be realized in full collaboration with humans: not to marginalize or oppose us, but to support us.


AI startups and tech giants have to stop developing AI tools with the ambition to "catch" culprits and instead build them as tools that foster human collaboration and business development.

