Adapting to AI-Powered Workplaces and an Automated Future

by Mike Szczesny, July 12th, 2023

Too Long; Didn't Read

Generative artificial intelligence (AI) can take hours of manpower out of tedious, repetitive tasks. The quality of the data going into generative AI engines is one major limitation of the technology. Employers and employees alike need to adhere to strict digital ethics guidelines. Workers just need to reframe their thinking to work in tandem with AI.



Since artificial intelligence engines gained widespread popularity in 2022, the whispered fear of “robots taking our jobs” has heightened to full-on protest. The greatest concern revolves around generative artificial intelligence, such as the widely used ChatGPT and Midjourney tools. Naturally, content producers are among the most vocal protestors. After all, generative AI has been shown to take hours of manpower out of tedious, repetitive tasks. Among its other benefits, it speeds up coding and development and lowers the barrier to entry for beginner developers.


Ethical and Practical Concerns

But it is not flawless.


In her 2021 exploration of how AI will impact the workplace, Ashley Stahl comments that the quality of the data going into generative AI engines is one major limitation of the technology. Intrinsic biases in the data are reflected in the content produced, and the technology cannot distinguish false information from true. And in 2023, a lawyer used ChatGPT to draft a legal filing that cited six cases that didn’t exist.


In code development, even minor errors can translate into serious security vulnerabilities. Security and flawed data are only two of the ethical considerations of an AI-powered workplace. Others include:


  • Data privacy risks, including insufficient anonymization techniques, vulnerabilities in the tools themselves that expose user data, unauthorized sharing of data with third parties, and failure to obtain proper consent from users.


  • Maintaining trust with clients and product users by being transparent about how AI is used with their personal data, in the services they receive, and so on.


Employers and employees alike need to adhere to strict digital ethics guidelines. Guidance should emphasize the careful handling of confidential information and adequate quality control to recognize and address vulnerabilities. It should also include methods for mitigating the use of data rooted in bias and discrimination.


What Does This Mean for Employment?

These flaws demonstrate that, while AI can reduce redundancy, it still requires a human touch, and that need can open the door to entirely new opportunities for our current and incoming workforce.


Amid the threat of job automation, opportunities for upskilling and new career paths are plentiful. New roles are already popping up. Workers just need to reframe their thinking to work in tandem with AI.


Stahl also illustrates the differences between human and artificial intelligence. AI’s “specialized” intelligence, she states, is rigid and cannot function outside its programmed “thinking.” By contrast, humans can use judgment and abstract thinking to weigh outside circumstances.


Training in distinctly human skill sets, such as analytics, should be pursued. Between certificate programs and on-the-job training, there are increasing opportunities to reskill and work in tandem with AI.


Employee Performance and Management

Employers are also on the hook to nurture a fresh perspective. Consider performance metrics. Productivity is easily measured when there are quantifiable outputs, such as the number of goods produced or services delivered, and employee performance can be tied to a baseline minimum output.


How should employee performance be defined when AI and robotic process automation can take over the mundane, repetitive tasks tied to those outputs? And how can employees remain confident in their value?


Traditional metrics, such as hours worked and units produced, may no longer apply. Look again at the emphasis on “human” capabilities like soft skills and emotional intelligence. Metrics should be based on outcomes rather than products and processes, measuring the “human touch” that is so important in an AI-powered workplace. Employers should also be considerate of and sensitive to the tumultuous landscape that today’s workforce navigates.


Opportunities for professional development, particularly around working in an AI-powered workplace, and corporate awards and recognition that boost morale are both methods for adapting smoothly to workplace automation. After all, that is perhaps the most important ethical consideration of the automated workplace of the future: maintaining the personal, considerate approach that only humans can bring to the table.