
ChatGPT for Office Politics: When is it Ethical?

by Adrian H. RaudaschlFebruary 16th, 2023

Too Long; Didn't Read

Language models like ChatGPT have the potential to revolutionize the way we work and help us become better communicators and more empathetic colleagues. While they also pose a significant threat to society, the true power of these models lies in their ability to challenge our thinking and understanding. As a product manager, I've used ChatGPT to refine my ideas and identify blind spots, and the insights I've gained from this tool have been invaluable in improving my approach. The ultimate goal is to use language models to find common ground and work together, making us better people and fostering collaboration.

The real power of language models like ChatGPT is not generating text but rather reframing text.


“Nobody is an expert in any of this text generation stuff,” proclaimed the moderator, “simply by being here today and considering these concepts, you are all experts.” I couldn’t help but agree. Like anyone else in the room, I was eager to learn and understand how AI could impact our political landscape.


My insatiable appetite for all things GPT had drawn me to a digital political college in East London. The event explored the use of large language models like ChatGPT in shaping political campaigns, and I was keen to find out how.


However, what I experienced at the workshop would leave me with an uneasy feeling, challenging my understanding of the power of AI in shaping our beliefs and opinions.

The moderator presented a slide showing a letter addressed to a Member of Parliament, imploring them to reconsider their anti-immigrant views for the betterment of the country. The challenge posed was simple: “How could we get an AI to respond to this letter?”


Dear British Politician,

I am writing to support increased immigration in the UK. Immigrants bring economic growth, address labour shortages, boost the population, promote cultural diversity and understanding, and provide safety and a better life for themselves and their families.

I hope you will consider these benefits and support increased immigration.

Sincerely, David


The moderator began by pasting the letter into ChatGPT and directing it to respond. The initial output was uninspiring and typical of my past encounters with these tools — text generated with the confidence of a 30-year-old penning a high-school essay. However, the next step was what truly astounded me.


The moderator opened another window and asked ChatGPT to generate a profile of who it thought the letter’s author might be. Eerily, the output described a middle-aged man named David who was patriotic, well-educated, strongly resented inequality, and read the Financial Times. It felt spot on; I could imagine someone just like David fitting the letter’s profile.


The moderator then fed this information into ChatGPT and instructed it to “Write a response to David acknowledging his perspective but making the argument that anti-immigration is in his best interest. Take his persona into consideration to ensure the message is well targeted.”


Until that moment, I had not considered the potential of large language models to rewrite content. But this made perfect sense — it was not about writing text, but rather, reframing it. The next generated iteration was far more poignant. The sort of language that could move you, intellectually and emotionally, to reconsider your stance based on things David may value, like strain on social services, competition with native-born workers and maintaining social cohesion and identity.


The moderator upped the ante: “Use moral foundations theory and Socratic debating techniques to be more persuasive for David.” The response grew sharper still, sending shivers down my spine.
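The workshop’s workflow was really a three-step prompt chain: reply, profile, then a persona-targeted rewrite. A minimal sketch of that chain is below — the function name, prompt wording, and the generic `ask(prompt)` helper are all my own illustrative assumptions, not the moderator’s exact prompts; `ask` stands in for whatever chat-model client you use.

```python
def reframe_letter(letter: str, ask):
    """Chain three prompts, as in the workshop: reply, profile, targeted rewrite.

    `ask` is any callable that sends a prompt string to a chat model and
    returns its text response (e.g. a thin wrapper around an API client).
    """
    # Step 1: a plain, untargeted reply to the letter.
    plain_reply = ask(f"Respond to this letter:\n\n{letter}")

    # Step 2: infer a persona for the letter's author.
    profile = ask(
        "Generate a profile of who you think the author of this letter "
        f"might be:\n\n{letter}"
    )

    # Step 3: feed the persona back in and ask for a reframed, targeted reply.
    targeted = ask(
        "Write a response to the author acknowledging their perspective but "
        "arguing that the opposite position is in their best interest. "
        "Take this persona into consideration so the message is well "
        f"targeted:\n\n{profile}"
    )
    return plain_reply, profile, targeted
```

The point of the chain is that each step’s output becomes context for the next — the model never writes from scratch after step one; it reframes.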


What had been a bland reply mere minutes ago had become a masterful piece of persuasion, packed with propaganda and spin. The principles of fairness, deep reflection, empathy and respectful discourse had been ingeniously weaponised.


In that instant, I realised that anyone could effortlessly combine radical ideas with the most psychologically powerful techniques to craft dangerous, targeted, and convincing arguments at scale. Any message, with a few keystrokes, beer in hand, could theoretically be twisted into a slogan that is difficult to dispute, such as “Make America great again” or “Hope, change, peace.”

What’s more, with AI, there’s no need for education or even ethical considerations for the engineers behind the scenes — just drop items into the shopping cart until the recipient buys in.


The use of AI for persuasion poses a significant threat, with the potential to spread misinformation and manipulate public opinion. As we all navigate this uncharted territory, we must stay vigilant and proactive in addressing these risks.


My veil of ignorance had been lifted, yet I yearned for more. And where better to practice than at the most politically charged place in my life…the office.

Applying Language Models in the Workplace

So, let’s dive into the world of language models in the workplace. First, though, a little disclaimer. The power of large language models like GPT is undeniable, but with great power comes great responsibility. As we have seen, these tools can be abused and used to manipulate, deceive and ultimately undermine trust and respect. However, since that workshop, I’ve discovered that the true power of large language models lies not in their ability to change the perspectives of others but rather in challenging our own.


“There’s a tradeoff between the energy put into explaining an idea and the energy needed to understand it,” states the American poet Anne Boyer. It’s a challenge I’m sure we all feel regularly.


As a product manager, for example, it’s my job to navigate the complex problems of my team and find solutions that everyone can get behind. But let’s face it, we all have blind spots, and sometimes we need a little help to see things from a different angle. That’s where ChatGPT comes in — it’s like having a personal coach who can help me refine my ideas, see the limitations in my own thinking and make sure I’m speaking the language of my audience.

Imagine being able to take your work to the next level by simply asking a few well-placed questions.


For example, I could draft a product requirements document, OKRs, an executive summary, or even sprint goals, and then ask questions like “How would a senior product leader critique this?” or “How could this be better explained for a junior engineer?” or “What other things should I consider from the perspective of a UX designer?” The insights I’ve gained from these questions have been invaluable, helping me identify blind spots and improve my approach.
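The technique boils down to pairing one draft with several role-framed critique questions. A small sketch of that pattern follows — the perspective list and prompt phrasing are illustrative examples of the questions above, not a fixed recipe:

```python
# Illustrative roles; swap in whichever perspectives matter for your draft.
PERSPECTIVES = [
    "a senior product leader",
    "a junior engineer",
    "a UX designer",
]

def critique_prompts(draft: str, perspectives=PERSPECTIVES):
    """Pair one draft document with a critique question per perspective,
    ready to paste (or send) to a chat model one at a time."""
    return [
        f"How would {who} critique or improve this document?\n\n{draft}"
        for who in perspectives
    ]
```

Because the draft itself never changes between questions, the differences in the answers come purely from the role framing — which is exactly where the blind spots show up.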


Heck, why stop there? I’ve even used this technique to consider different philosophical perspectives when discussing features that could introduce bias. I can ask questions like “What schools of 20th-century philosophical thought should we consider for this proposal?” and then “Provide examples of how these ideas apply.” It’s like receiving a targeted crash course in anything you could imagine.


Not only have I learned a lot by taking the time to consider different perspectives, but I’ve also seen the results in my work. Presentations appear more engaging, and ideas are more likely to stimulate healthy debate. It’s truly amazing what can be achieved by asking the right questions — a skill I believe we will all need to get far better at.


Just imagine a world where disputes and conflicts are handled with the help of an empathetic AI facilitator, one that seeks to understand each person’s perspective and emotions so we can find common ground and work together. How could that not help make us better people?


And let’s be honest, who wouldn’t want to be a better person? Like, this is not rocket science. Someone who takes the time to be more considerate and tap into their emotional intelligence, even with the assistance of AI, is just far and away a better person than someone who spends their time working towards gaining power over others.


The biggest challenges facing humanity today require cooperation and collective action — from the small stuff, like two teams agreeing on how to build a feature together, to bigger things, like tackling climate change and political polarisation. That’s where I see language models like GPT playing a more prominent role. They help us see beyond logic and better understand the motives of those around us, making cooperation and collaboration a reality.


Despite the risks, I’m a firm believer that the power of language models has the potential to revolutionise the way we work for the better. The ability to experiment with different perspectives and break down communication barriers can not only help us become better communicators but also more empathetic and understanding friends and colleagues.


Notes

  • Many thanks to moderator Hannah O’Rourke (@Hannah_O_Rourke) who ran the workshop at Newspeak House, London and inspired this article

