
How AI Will Impact Political Campaigns in the Near Future—It All Started With HAL-9000

by Rishi, January 9th, 2024

Too Long; Didn't Read

From the ominous HAL-9000 in "2001: A Space Odyssey" to the present-day ChatGPT, this article explores the evolution of AI in political campaigns. Dive into the world of demographic targeting, the ethical implications of AI in politics, and the potential shift toward Artificial General Intelligence. Understand the progress, challenges, and necessary regulations in the landscape of political AI.



Content Overview

  • A space odyssey
  • Demographic targeting isn’t new
  • Targeting the old way
  • The future of AI in political campaigns and its potential impact
  • How is my local politician using AI to target me?
  • AI’s Theory of Mind
  • HAL’s big brother (Artificial General Intelligence)
  • Should I be scared?

A space odyssey

The sentient Artificial Intelligence (AI) known as HAL-9000 (Heuristically programmed ALgorithmic computer) in the popular 1968 science fiction novel 2001: A Space Odyssey is a chilling portrait of how a computer, programmed with a simple directive, can provide companionship, guidance, and assistance, not unlike your present-day ‘Hey Google’.


In the book, the groundbreaking technology known as HAL is capable of natural language processing, facial recognition, and speech synthesis. These are the same purposes AI serves today, be it improving healthcare through personalized treatments, supporting natural disaster response with satellite imagery, or simply acting as a natural language chatbot that answers candidate questions for a political campaign. All good things.


Fans of the book will remember that the AI takes a turn for the worse, turning against its human crew to protect its own existence. The story shows how the relationship between AI and its creators can quickly spiral into despair and desperation once conflict arises between humans and technology. The book exposes the ethical implications of creating AI systems that appear self-aware and autonomous, ultimately leading to humanity’s undoing, a popular narrative among science fiction writers.


Fast forward 50 years to the present day, and HAL makes Siri look like a chimpanzee and Google Assistant like a glorified Dr. Sbaitso (a throwback for anyone from the early ’90s). So for now we’re seemingly safe: you have nothing to worry about, and the world continues to spin without an ominous red light flashing back at us.




Demographic targeting isn’t new

This isn’t a book review of Arthur C. Clarke’s remarkably accurate prediction of the future of AI, but a post about how AI may be targeting you and your family right now, influencing your decisions based on your political stance, age, ethnicity, and sexual orientation. Demographic targeting isn’t new: before the internet, exit polls, cold calls, and paper-based surveys were used to profile communities, which were then targeted with direct mailings and follow-up calls persuading them to vote for a candidate. Some of these techniques are still used today, although sparingly, given their high variance and inaccuracy.



This is where AI-driven demographic targeting can significantly enhance accuracy. Using machine learning algorithms to analyze large volumes of user data, AI can find patterns that humans have missed. Generally speaking, the more data a model is trained on, the more accurate its predictions tend to be.
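To make that concrete, here is a minimal, hypothetical sketch of how a campaign data team might score voters on their likelihood of supporting a candidate. The feature names and data are invented for illustration; a real pipeline would be far larger and subject to data-protection rules.

```python
# Hypothetical sketch: scoring voters by predicted support.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy voter records: [age, household_income_k, urban (0/1), past_turnout_rate]
X = rng.uniform([18, 20, 0, 0], [90, 250, 1, 1], size=(5_000, 4))
# Toy label: 1 = supported the candidate in past canvassing data
y = (0.01 * X[:, 0] - 0.002 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 0.3, 5_000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probability that each held-out voter supports the candidate,
# which a campaign could use to prioritize outreach.
support_scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not this particular model but that, given enough labeled history, a classifier can rank an entire voter file by likely support in seconds.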




What can we do now that we couldn’t do before?

All your Facebook posts, LinkedIn articles, Reddit AMAs, and even that Twitter rant you sent last week can be harvested by these AI models. Whether that is terrifying depends on how the data is regulated. On the one hand, AI can do things that were not possible before: it can enhance voter engagement by providing personalized information and help with resource allocation by showing campaigners which voter groups are more likely to be swayed, allowing them to prioritize accordingly. On the other hand, AI can also be used nefariously to indoctrinate young minds, exploit short attention spans, and deepfake opponents’ images to sway decisions. How these ethical and social issues will be handled is still undecided, as most countries do not have regulations in place.




Technological revolution

Artificial intelligence is transformative; it will undoubtedly change how we work and interact with technology on a monumental scale.


This is where we are so far:


Source: OurWorldInData.org. Graphics: Created by author.


Perhaps the iPhone launch wasn’t as revolutionary as the discovery of fire, but you get the picture.




Targeting the old way

Most political campaigns today take advantage of retargeting using cookies, or what is known as a pixel, on your machine. They follow you around the web like a leech, targeting you with ads for products you already own. This is not artificial intelligence; it is just a dumb cookie that tracks your browsing history and retargets advertisements based on it.
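For the curious, the mechanism is mundane: a “pixel” is just a tiny request to the advertiser’s server that carries a cookie, letting the server log which pages you visited. Here is a bare-bones sketch using Flask purely as an illustration; the endpoint and cookie name are made up.

```python
# Minimal sketch of a "dumb cookie" tracking pixel, for illustration only.
# The endpoint and cookie name are hypothetical.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
visit_log = []  # a real tracker would write to a database

@app.route("/pixel.gif")
def pixel():
    # Reuse the visitor's cookie if present, otherwise mint a new ID.
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    # The embedding page passes its URL so the tracker knows where you were.
    visit_log.append((visitor_id, request.args.get("page", "unknown")))

    resp = make_response("", 204)  # no image needed; an empty response works
    resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

# Embedded on a page as something like:
# <img src="https://tracker.example.com/pixel.gif?page=/candidate-donate" width="1" height="1">
```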


This was the typical approach in the 2016 and 2020 United States primaries, as ad spending shifted from traditional channels such as TV and radio to more cost-effective digital media. Outside the US, Hong Kong increased online ad spending to four times that of previous years just to promote a single policy speech.


Was all this money well spent? Does targeting individuals this way reach the intended group, or does a healthy share of the ad spend fall into an abyss of bots, fake accounts, and unintended audiences? Research from the University of Baltimore suggests that, at a minimum, 20% of ad spending goes to bots, ineligible voters, and attackers using techniques such as IP masquerading. All of this could potentially be addressed with AI trained on the right data, although, as I will discuss further on, AI is not a panacea.
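As one illustration of what “AI trained on the right data” might look like here, an unsupervised anomaly detector can flag ad traffic whose behavior does not look human. This is a toy sketch with invented features, not a production fraud filter.

```python
# Toy sketch: flagging bot-like ad traffic with an anomaly detector.
# Feature values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Features per session: [clicks_per_minute, seconds_on_page, distinct_IPs_per_session]
human = rng.normal([0.2, 45.0, 1.0], [0.1, 20.0, 0.2], size=(2_000, 3))
bots = rng.normal([5.0, 2.0, 8.0], [1.0, 1.0, 2.0], size=(100, 3))
traffic = np.vstack([human, bots])

detector = IsolationForest(contamination=0.05, random_state=1).fit(traffic)
flags = detector.predict(traffic)  # -1 = anomalous, 1 = normal

print(f"Sessions flagged as likely non-human: {(flags == -1).sum()}")
```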


Dumb cookies & AI’s new consent bar

It is interesting to note that there are policies against placing trackers and cookies on users’ devices unless consent is given. You’re certainly aware of that pestering bar that keeps popping up all over the web, the one you often fail to read before hastily whacking “Accept.” You can thank the Europeans for that. Perhaps regulation is precisely what AI requires, and the European Union may be exactly what we need. Its tight regulatory framework could push the rest of the world to mandate AI policies governing ethical use and consent: how AI models ingest our data and how they disseminate information, made verifiable through Explainable AI, thus increasing trust. More on this later.


Consider this consent bar:


If you hadn’t already noticed, that was an attempt to poke fun at the potential regulations that may be required for AI safety, as almost none currently exist. Expect a facsimile of the above to appear on websites near you; brownie points if you spotted the Borg reference.




How did we get here?

Political parties used to win races with community outreach, door-to-door canvassing, and engagement with local businesses. It took grit to get your message out. The people these parties represented saw right through the party leaders’ B.S. and, most importantly, made up their own minds with the information they gathered. The campaign message was human-led, with text generated from the candidate’s own mouth.


The future of AI in political campaigns and its potential impact

In its current unregulated state, the future of AI in political campaigns is one we should be wary of. Instead of winning with sound economic policy and honest campaigning, the party with the most computing power and the biggest ad budget will win the race.


In a recent case, the Republican National Committee (RNC, the governing body of the US Republican Party) released an AI-generated video criticizing Joe Biden, depicting a doomsday-like dystopia built from fake AI-generated images. It was reported that members of the public believed the footage was real.


Three years ago, AI was not really being utilised in election campaigns…You don’t have to be a software designer or a video editing expert to create very realistic-looking videos…We’ve gone beyond Photoshopping small parts of an image to basically generating a completely new image out of thin air.


— Darrell West, Center for Technology Innovation at the Brookings Institution


Going back to the question: was AI successful in reaching its demographics? In the RNC case, yes; the doomsday video drew a great deal of attention, and many voters, including Democrats, believed it was real. Was it ethical? Probably not; exacerbating existing biases and reinforcing stereotypes has negative consequences for fairness and equality in the political process. Was it legal? That is up to interpretation. According to the FCC, you’re pretty much allowed to lie; it would be ‘unconstitutional’ otherwise.





How is my local politician using AI to target me?

Politicians have been utilizing technology to market their platforms for years. From Obama’s 2008 “email election” victory to Trump’s hourly tweets to the Tories’ (the UK Conservative Party) 2015 Facebook campaign, there has been a massive move toward online demographic targeting, although misappropriating millions to line Mark Zuckerberg’s pocket may not seem like a scalable plan.


The future of AI in politics lies in predicting which groups of people are likely to sway to your party and what type of content will coax them. This is already being done and will be a significant time-saver in future democratic elections. Another way AI will help is in finding donors quickly, at a fraction of the typical cost. This can be achieved with regression techniques, a type of supervised learning trained on existing donor data. Your local politician will now know whether you’re worth inviting to that fancy fundraiser, how likely you are to donate, and the approximate amount.
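A hedged sketch of that donor-prediction idea: train a regression model on past donor records to estimate how much a new contact might give. The feature names and data are invented, and a real campaign would need consent and far richer data.

```python
# Hypothetical sketch: predicting donation amounts with supervised regression.
# Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Toy donor features: [age, income_k, prior_donations, events_attended]
X = rng.uniform([18, 20, 0, 0], [90, 300, 10, 5], size=(3_000, 4))
# Toy target: past donation amount in dollars
y = 2.0 * X[:, 1] + 40.0 * X[:, 2] + 25.0 * X[:, 3] + rng.normal(0, 50, 3_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
model = GradientBoostingRegressor(random_state=2).fit(X_train, y_train)

pred = model.predict(X_test)
print(f"Mean absolute error: ${mean_absolute_error(y_test, pred):.0f}")
# A campaign could rank contacts by predicted amount before sending invitations.
```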


This doesn’t sound too good. What is the future of AI?

AI progress is inevitable, and it will continue to be used as a force for good as long as research keeps emerging from the field of AI safety. Limiting the amount of identifiable data amassed from users’ machines is one regulatory hurdle. As you read this, there is a high chance telemetry data is being sent from your machine to the cloud. Your data and search queries are crowdsourced, contributing to advances in natural language processing, deep learning, and large-scale training. Pat yourself on the back: you’re helping humanity, and your data is now part of the collective!


Knowledge is power

As the saying goes, “knowledge is power,” and thanks to your data, political campaigns are harnessing that power to gain insights into your behavior and preferences. As we have seen, campaigners can now target potential voters with surgical precision, personalize messages with increasing sophistication, and pinpoint where campaign benefactors live. And this is just the tip of the AI iceberg.




AI’s Theory of Mind

The future of AI is evolving rapidly. In the last few years, AI appears to have developed something like Theory of Mind, the ability to infer what others believe and intend. Scientists use Theory of Mind tests to gauge whether humans or animals understand the mental states of others in social interactions. When these same tests were applied to GPT, the results were astounding: a study of the November 2022 GPT-3.5 model concluded that it solved the tasks at roughly the level of a nine-year-old child. What does this mean for demographic targeting? Well, if the president calls you to say hello, perhaps it’s AI-Joe Biden.


Source: M. Kosinski, 2023, https://arxiv.org/abs/2302.02083. Graphic: created by author.




HAL’s big brother (Artificial General Intelligence)

As AI improves through Reinforcement Learning from Human Feedback (RLHF), a fancy way of saying that humans assign a thumbs up or down to a model’s responses, the next iteration of AI could be Artificial General Intelligence (AGI): a hypothetical HAL-like agent that could, in theory, accomplish any task autonomously without intervention. How the progression to AGI will affect the world is currently unknown, although many, including OpenAI, the company behind the infamous GPT, are urging clear regulation.
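That “thumbs up or down” signal is typically turned into a reward model trained on pairs of responses, which learns to score the preferred response higher. Below is a stripped-down sketch of that pairwise objective, using random tensors in place of real response embeddings; it is a simplified illustration, not how any particular lab implements it.

```python
# Simplified sketch of an RLHF-style reward model objective.
# Random tensors stand in for real response embeddings; illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
embed_dim = 64

# Toy reward model: maps a response embedding to a scalar score.
reward_model = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embeddings of responses a human preferred vs. rejected.
chosen = torch.randn(256, embed_dim)
rejected = torch.randn(256, embed_dim)

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Pairwise preference loss: push the preferred response's score above the other's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model then scores new responses to guide further fine-tuning.
```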


How can this technology be improved?

Whether AI is used for demographic targeting, natural language processing for virtual assistants, weather forecasting, or healthcare, there is much room for improvement. AI is still embryonic, and regulation will need to move in lockstep with the fast pace of iteration. As Satya Nadella puts it, we’re still in the early stages of AI development:


We’ve gone from the bicycle to the steam engine with the launch of ChatGPT3

— Satya Nadella, CEO, Microsoft, May 2023 Build Conference


Improvements are needed in long-term safety, a key concern Microsoft raised at its recent Build conference. This includes safe handling of user data and monitoring of NLP outputs to detect violence, self-harm, and hate speech.
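One piece of that monitoring can be as simple as a text classifier that flags responses for human review. This is a toy sketch with made-up training examples; a real system would use far larger, carefully curated datasets and purpose-built moderation models.

```python
# Toy sketch: flagging text for human review with a simple classifier.
# The training examples are invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Join us at the community fundraiser this weekend",
    "Here is the candidate's plan for local schools",
    "I will hurt you if you vote for them",
    "Those people don't deserve to live here",
]
labels = [0, 0, 1, 1]  # 1 = flag for review

moderator = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderator.fit(texts, labels)

new_reply = "We should get rid of people like you"
flag_probability = moderator.predict_proba([new_reply])[0][1]
print(f"Probability of flagging for human review: {flag_probability:.2f}")
```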



Explainable AI

Furthermore, trust needs to be reinforced by way of Explainable Artificial Intelligence (XAI), a set of methods that let users understand and verify how a machine learning model arrived at its output. What ChatGPT believes to be true may not actually be true, and you, as a consumer, should be able to verify how a result came to be. Deep-fake political videos could include fine print explaining how the video was generated and which parts are fact or fiction. A lot still needs to be done to create transparent, reliable AI systems that are safe for the general public to use.
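As a small taste of what explainability looks like in practice, standard tooling can report which inputs drove a model’s predictions. Here, permutation importance is applied to the kind of voter-scoring model sketched earlier, again with invented features, purely for illustration.

```python
# Illustrative sketch: explaining a model's predictions with permutation importance.
# Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
feature_names = ["age", "income_k", "urban", "past_turnout"]

X = rng.uniform([18, 20, 0, 0], [90, 250, 1, 1], size=(2_000, 4))
y = (0.5 * X[:, 2] + 0.01 * X[:, 0] + rng.normal(0, 0.2, 2_000) > 0.6).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=3).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=3)

# Report how much shuffling each feature hurts accuracy: a rough "why" for the model.
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:>12}: {importance:.3f}")
```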



Should I be scared?

You need not be scared, but you should be educated and vigilant: look out for red flags, read the fine print, and understand how AI results are generated. Protect minors and teach them about the pitfalls and inaccuracies of these technologies.


Know that propaganda can come in many forms, including something that may look real and feel real but be entirely AI-generated. The last of the human-led elections has begun, and you will be targeted.


First, they ignore you, then they laugh at you, then they fight you, then you win.


– Anonymous, but it’s highly possible it was written by HAL-9000.


Originally posted here.