There have been a number of articles published on Hackernoon which have either supported or discouraged the mass adoption of facial recognition technology.
In most cases, the topic was adoption of facial recognition tech at the state level, which promises better surveillance but erodes personal freedom in the long run. And although I lean towards the “ban state facial recognition” side of the argument, I can’t deny that there are real benefits to using it in the private sector rather than the state sector.
The first argument dawned on me when I was working as an HR specialist and most of my salary depended on the retention level I could maintain in the company. This meant I could only hire people who were a 100% match to the job description: several years of experience and results to show for it.
Needless to say, I failed miserably, as I was very easily lied to, only to find out that the person with “3 years of experience” had merely watched a Udemy course in the past.
The second argument came to me while working with a local game development studio. The issue first came up when our game testers found it extremely hard to phrase their experiences with the game, depriving the development team of the essential feedback it needed to make small tweaks here and there.
Therefore, I’m going to showcase my two arguments as to why facial recognition should be adopted in the private sector rather than the state sector.
Here goes nothing.
The main issue, as already outlined in the intro, was that getting actionable feedback from game testers was very hard. Even though our brains experience different emotions during different stages of a game, it’s hard to describe those emotions objectively after the fact.
Most people struggled to say which part frustrated them or which one moved them emotionally. Overall, it comes down to people being poor instruments for supplying accurate data about their own reactions.
However, deploying facial recognition technology that has been trained to identify emotions beforehand would significantly reduce the time and effort spent on identifying flaws in the game’s design or correcting its pacing.
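To make the idea concrete, here’s a rough sketch of what that could look like once you have per-frame emotion labels from some off-the-shelf emotion classifier. Everything below is hypothetical: the labels are made up, and `summarize_session` is my own illustrative helper, not any real playtesting tool.

```python
from collections import Counter

# Hypothetical per-frame emotion labels, e.g. the output of an emotion
# classifier run over a recorded playtest session (stubbed for illustration).
FRAME_EMOTIONS = [
    ("menu", "neutral"), ("level_1", "joy"), ("level_1", "joy"),
    ("level_2", "frustration"), ("level_2", "frustration"),
    ("level_2", "anger"), ("boss", "surprise"), ("boss", "joy"),
]

def summarize_session(frames, flag_emotions=("frustration", "anger")):
    """Aggregate per-frame emotion labels into a per-segment summary,
    flagging segments where negative emotions dominate."""
    by_segment = {}
    for segment, emotion in frames:
        by_segment.setdefault(segment, Counter())[emotion] += 1
    report = {}
    for segment, counts in by_segment.items():
        total = sum(counts.values())
        negative = sum(counts[e] for e in flag_emotions)
        report[segment] = {
            "dominant": counts.most_common(1)[0][0],
            "negative_ratio": negative / total,
            "needs_review": negative / total > 0.5,
        }
    return report

report = summarize_session(FRAME_EMOTIONS)
print(report["level_2"]["needs_review"])  # → True: level_2 skews frustrated/angry
```

Instead of a tester vaguely recalling that “something felt off,” the developers get a per-segment report pointing straight at the level that frustrated people.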
One such example was voiced by Simon Welsh, a software contractor at Playamo Australia, who said:
“When we as a platform start looking for new games to add, it is always helpful if the research about player emotional feedback had been done in the past.
There is absolutely no case when we accept a game that had not been tested beforehand. Not because of the bugs or technical difficulties, but because of the gameplay. If we aren’t at least 99% sure that people like playing this game, there’s no chance we’re going to list it.
We do this because, even though we have nothing to do with the development of these games, it’s our platform that the game is on, therefore the player will correlate the flaws of the game with our brand, thus damaging it.
I’ve heard of facial recognition technology in this sector but have never seen its results. As long as it can speed up the process and give accurate results, we’re more than happy to accept it in the industry.”
The issue with this argument is that facial recognition technology is very expensive, while game testers are extremely cheap. It all comes down to how developers prefer to invest their resources: chasing maximum value or staying within a budget.
It may seem like I’m salty about losing a job as an HR specialist, but if that hadn’t happened I wouldn’t have discovered the amazing world of tech, so I’m somewhat grateful for those unfilled hiring quotas.
However, people who want to be HR specialists for the rest of their lives may feel more strongly about these issues. It’s not that there are no jobs out there; it’s that HRs face insurmountable demands from their supervisors to find the best of the best, which is why a college graduate with a single year of experience often gets passed over.
However, much as I stumbled by being lied to, so do other HR specialists in various fields. Interviewees have learned what the interviewer wants to hear, so they structure their answers accordingly to maximize their chances of getting the job.
As a human, it takes an immense amount of experience and knowledge to identify when somebody is lying to you, especially when they’re good at it. But for a computer, that would just be a matter of importing software we already have.
Installing facial recognition cameras, as well as speech recognition microphones, near the interviewee would let the software analyze whether or not the candidate is lying. We’ve already reached a point in tech advancement where software can differentiate nervousness from a lie, thus maximizing accuracy.
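Here’s a toy sketch of how distinguishing nervousness from deception could work, assuming the facial and speech analysis each produce a stress score between 0 and 1. Every name, weight, and number here is a hypothetical assumption of mine, and this is an illustration of the baseline idea, not an actual lie detector.

```python
def assess_interview(face_stress, voice_stress, baseline, threshold=0.3):
    """Combine hypothetical facial and vocal stress scores (0..1) and flag
    only answers that rise well above the candidate's own baseline."""
    combined = 0.5 * face_stress + 0.5 * voice_stress  # equal weighting: an assumption
    return {
        "score": round(combined, 3),
        "above_baseline": round(combined - baseline, 3),
        "flag_for_review": (combined - baseline) > threshold,
    }

# Baseline measured on warm-up small talk, so ordinary interview nerves
# don't trigger the flag by themselves.
calm = assess_interview(0.2, 0.25, baseline=0.2)
spike = assess_interview(0.8, 0.7, baseline=0.2)
print(calm["flag_for_review"], spike["flag_for_review"])  # → False True
```

The key design choice is comparing against the candidate’s own baseline rather than an absolute number: a nervous candidate runs high the whole interview, while a suspect answer shows up as a spike above that baseline.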
Some lawmakers think it’s immoral to record somebody without their consent, but here’s the counter-argument: don’t do it without their consent. The fact that companies keep a person’s CV in their databases for years and years is already a “data leak” in itself. Why would a recorded video that will be used maybe once or twice be any different?
It’s about maximizing recruitment capabilities and perhaps introducing something new to how interviews are conducted.
If it’s still too much for people to bear, simply make a law where the company has to delete the footage within a month of the interview or be fined significantly.
Despite the fact that facial recognition on surveillance cameras could help prevent crime or speed up investigations, the amount of privacy we will be sacrificing isn’t necessarily worth it.
Government databases have been hacked in the past and they are still susceptible today. Having such large amounts of data stored in a single place opens it up to a disastrous hack.
Furthermore, nobody wants to be blacklisted just because they participated in a protest or advertised to just because they pass a library on their way to work.
We need to draw the line where enough is enough and no, facial recognition is not the new fingerprint, it’s completely different.