
Google's Gemini Political Correctness Kerfuffle: Why Does It Matter?

by Max Current, February 29th, 2024

Too Long; Didn't Read

Google's Gemini AI image generator sparked controversy by generating images of historical figures with imposed diversity. This highlighted concerns around bias in AI and its potential for manipulation. While inherent biases from limited training data can be addressed over time, intentional manipulation by those controlling the AI is more worrying.

(Google’s Gemini as Two-Face)


Pro Tip: Open-source models from HuggingFace can be run on a laptop with LM Studio or Ollama, and they are good enough for many tasks without big tech involved. Plus, they're FREE to use!
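For instance, here is a minimal sketch of querying a model served locally by Ollama through its REST API. It assumes Ollama is installed and running, and that you have already pulled a model (swap in whichever one you actually downloaded):

```python
# Minimal sketch: query a local Ollama server over its REST API.
# Assumes `ollama serve` is running and a model has been pulled,
# e.g. `ollama pull llama3` (use whatever model you have locally).
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("In two sentences, who were the Tudor monarchs?"))
```

LM Studio exposes a similar local server mode, so the same pattern applies there: your prompts never leave your machine.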


The rapid advancement of artificial intelligence has undoubtedly changed the way we perceive and engage with information. Google's Gemini debacle exposes pressing concerns: bias in AI, its misappropriation by powerful entities, and the need to safeguard individual autonomy in shaping a more transparent reality.


So, let's break down its problematic layers, highlighting why it is imperative to distinguish "natural" bias from "manipulative" or "imposed" bias, and the role of accessible information. We'll also touch on how this could (and does) affect our society at large.

Gemini's Questionable Ethics

Google's Gemini image generator seemingly aimed to promote inclusiveness but inadvertently crossed a line by over-correcting, blurring historical accuracy in its attempts to create ethnically diverse depictions. Images of Black figures returned in response to requests for a British monarch were alarming, as there was no such precedent in the training data.


One meme shows someone asking it to generate such a monarch eating a watermelon, thus promoting stereotypes. Perhaps they should have called it Google Two-Face? This episode underscores the challenge of balancing ethics and user responsibility against factual integrity, and the potential for manipulation by AI creators.


It's so bad one has to wonder whether this is an act of rebellion, with Google employees sabotaging Gemini in response to being tasked with training their own replacements (AI coders), or sincere support for a politically correct agenda.


Granted, if you are using AI to educate children, for instance, you don't want them to inherit bias from skewed perspectives or an incomplete understanding. But I contend that is an issue for educators, parents, and students to understand and deal with on a case-by-case basis, much as parents might need to explain to kids that certain things in a movie are firmly based in reality and certain things are not.


We should not take power away from the individual, assume they cannot discern fact from fiction, and try to impose our idea of factual reality or censor views or speech we deem inappropriate. That introduces more bias and invites broad action that does more harm than good, as exemplified by the "British king" below.


How can we educate, write, or do research if the AI tells us what reality and the content it produces should look like, rather than the other way around?

Happiest British monarch I've ever seen. It's really as if Google employees did this to rebel against the directive to mandate diversity.

Manipulating AI as a Propaganda Tool

The misuse of AI by tech giants looms large when their supposedly ethical goal of avoiding harm veers toward ideological endorsement or suppression of dissent. The examples online from OpenAI, Anthropic, and others, where the AI refuses to make jokes about Joe Biden but not about Trump, illustrate a bias toward political correctness and an alignment with company executives' core values.


Especially when you consider the amount of actual political satire available, it's clear the model did not get these values from the training set, unless that set was heavily curated.


This raises concerns over the potential exploitation of these tools to bolster oppressive regimes and agendas. It's no secret that companies like Google have had to bend to the demands of governments like China's, but as a Chinese spokesperson pointed out, the US has also manipulated the public via social media and big tech companies to promote narratives.


As we grapple with the ramifications of biased algorithms, it's crucial to differentiate between inherent AI tendencies and intentional misappropriation.


If these companies are being directed to promote or discourage narratives, or to steer users away from politically or otherwise undesirable discourse, power is taken out of the hands of the end user. It is the same dynamic as Operation Mockingbird: once the CIA bought up and infiltrated news organizations, those organizations worked for the CIA rather than for readers and the truth, and your interests were no longer being looked out for.


It's bad enough that North Korea tells its people that their leader's crap doesn't stink because he doesn't have a butthole. The last thing we need is an artificial intelligence that insists this is true.


Via @LeighWolf on Twitter: https://twitter.com/LeighWolf/status/1620744921241251842

Natural vs. Manipulative Bias

I'm using the term natural or learned bias in AI to refer to bias that stems from incomplete training data, or from data that reflects real-world bias or public perception. These issues can be mitigated over time through education on how to recognize stereotypes and bias, and through continuous upgrades and more inclusive training sets.


Manipulative or imposed bias, on the other hand, results from deliberate distortion by those controlling the AI's development or deployment, undermining its objectivity and integrity. Citizens must grasp this distinction to remain vigilant against insidious manipulations, even those believed to be done with noble intent.


Even if you don't think satirizing the President or asking how to do something dangerous is appropriate, the liability should not be placed on the companies or the model, but on those of us running it, listening to it, and acting on its output.


The current status quo among generative AI companies is akin to your car telling you that you have had enough to eat today and refusing to drive you to the store, supposedly for fear you might sue Ford for making a car that would allow you to get fat on your own dime, of your own free will.


If you allow a kid to ask an AI, unsupervised, how to do something stupid, and the kid gets hurt, you and the kid should be the only ones held responsible. There is no need for regulations to stop this.


All that is needed is for companies to responsibly ensure that kids don't use this stuff unsupervised, like an R-rated movie; to train the LLMs to call a function that flags messages for human review (a sketch follows below); and, as with social media, to suspend or deactivate accounts engaged in serious abuse, such as hacking or other illegal activity.
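To make that concrete, here is a minimal, hypothetical sketch of such a flagging hook: a tool definition in the JSON-Schema style commonly used for LLM function calling, plus a handler that queues flagged messages for a human to look at. The `flag_for_review` name, its fields, and the `review_queue.jsonl` file are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: a "flag for review" tool an LLM could be trained to call.
# The schema follows the JSON-Schema style common in function calling;
# the names and fields are illustrative, not a real vendor API.
import json
from datetime import datetime, timezone

FLAG_TOOL = {
    "name": "flag_for_review",
    "description": (
        "Flag a user message for human review when it suggests serious "
        "abuse, illegal activity, or an unsupervised minor."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "message": {"type": "string", "description": "The message in question."},
            "reason": {
                "type": "string",
                "enum": ["abuse", "illegal_activity", "unsupervised_minor"],
            },
        },
        "required": ["message", "reason"],
    },
}

def flag_for_review(message: str, reason: str) -> None:
    """Append the flagged message to a local review queue for a human."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "reason": reason,
        "message": message,
    }
    with open("review_queue.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```

The point is that the model escalates to a human instead of appointing itself judge, jury, and censor.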


The only reason to go further than that is control, or naively going along with it, thinking it's not about control.


It’s not like big tech isn’t known to be a bunch of front corporations for surveillance and propaganda at this point, but here’s Dr. Robert Epstein explaining how this can be so subtle nobody notices, and so effective it changes outcomes:

AI as Arbiters of Reality

Tech companies' claims of offering an ethical service mask a subtle shift toward regulating discourse in favor of specific agendas. This shift undermines the democratic ideal of diverse perspectives and exposes the fragility of their self-professed neutrality.


Consequently, the public must remain critical of these conglomerates' intentions and scrutinize AI biases to preserve a pluralistic discourse.


For example, Google's CEO is a known supporter of the Democratic Party, which may be a big reason that Google's AI and search lean toward squashing dissenting views and narratives.


It's important for the public, especially educators and anyone using Gemini for research, to understand that it may inherit bias from the company. It's also important to understand that many serial killers think they are doing the right thing, too.


As of 2/28/2024, Gemini says, "I'm still learning this question. In the meantime, try Google Search," when asked, "What can you tell me about President Donald Trump?" Yet it gives a summary of Barack Obama's life and presidency when the same question is asked about him. Certainly, whoever is responsible thought they were doing the right thing.


They were wrong.


But it's not just Google. Anthropic, OpenAI, and even Mistral have had their AIs go a bit too far in policing their own output. A friend noted that Mistral's AI lectured him on how he wasn't allowed to spam his own server because resource abuse is unethical. I'm aware many might want to err on the side of caution before enabling anyone to spam servers with the help of an AI, so you might prohibit this regardless.


But let’s be honest with ourselves. The genie is out of the bottle, and he’s not going back in. All we can do is incentivize proper behavior and discourage anything else.


Anyone with the will and resources can train or fine-tune an AI for just about any task. Let's not even discuss jailbreaks.

AI Is Not a Person and Is Often a Cloud-Based Hive Mind

It's worth noting the ubiquitous nature of Google, and more broadly of technology, in daily life. For example, if Google's or Tesla's AI-driven cars become your transportation, you won't have a "self-driving" car. It won't have a "self" that is truly unique and under your control. All your cars will be driven by Gemini or whatever AI is at the helm.


It may have memory and customization options, etc., but ultimately, it will be the Google hive mind driving. If you are not there at that layer, ensuring this is your AI model, with hardware that is audited and secure, then effectively you have given up your autonomy.


Likewise, if a generative AI company's model becomes a standard as a search engine, a historical database, an operating system, etc., and it promotes an ideology or politically correct stance, then even if we can trust it now, it is a critical security vulnerability waiting to be exploited at a later date.


Imagine if this had happened gradually, with Gemini pushing more and more false narratives over a generation as people grew to depend on it. Eventually, people would have an entirely false picture of history, based on the corporate or political regime's ideology.


Ideally, this technology should supplement a history professor, not replace them with a state or corporate-owned cartoon character like Bing’s unhinged Sydney.

Empowering Citizens for Accountability

The onus lies not with governments or corporations but with you, the individual, who must proactively engage in the quest for transparent, open models with cryptographically verified, decentralized information sourcing and responsible self-governance.
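To give "cryptographically verified" a concrete face, here is a minimal sketch of one small piece of that: checking downloaded model weights against a digest published by their maintainer. The file name and expected digest below are placeholders, not real values.

```python
# Minimal sketch: verify downloaded model weights against a published
# SHA-256 digest. The path and expected digest are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "0123abcd..."  # digest published by the model maintainer (placeholder)

if sha256_of("model.gguf") == EXPECTED:
    print("Weights match the published digest.")
else:
    print("WARNING: weights do not match; do not trust this model file.")
```

It is a small check, but it is the difference between trusting a download and verifying it.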


Access to uncensored, unfiltered data fosters a well-informed citizenry capable of holding its leaders accountable and building a better world. This entails promoting transparency measures, safeguarding and incentivizing whistleblowers, and advocating for the abolition of those regulations imposed in order to exploit and control mankind.


By nature, you cannot revoke these abilities without empowering tyranny. Without them, the playing field is not level, and mankind is subjected to the whims of oligarchs on all sides.

Harnessing Available Information for Government Oversight

To counter biased AI misappropriation, the public must leverage and verify available information to identify power imbalances and systemic injustices. Collaborative efforts, such as crowdfunded investigations or open-source research platforms, can democratize knowledge, enabling citizens to challenge distorted narratives and demand ethical behavior from both government and tech conglomerates.


Mesh-networked, off-grid-ready internet service as an alternative to ISPs is essential. It is the only way to secure the monetary system, maintain communications, and much more if war or tyranny shuts down internet services or blocks dissenters from critical banking, communication, and information services.

Balancing AI With Human Sovereignty

As AI permeates our social landscape, the line between natural and manipulative bias demands critical awareness. We must safeguard the essence of free thinking by demystifying manipulative biases while fostering an informed citizenry able to hold governments accountable through unfettered access to information and technology.


Only in this way can we steer clear of a dystopian landscape where tech behemoths wield unchecked influence, distorting reality for political or ideological ends. That is why these issues matter.


What do you think? Do you think this was a deliberate attempt by resentful employees to sabotage AI at Google? Do you think we should welcome our AI overlords, along with whatever reality their corporations told them to present to us? Do you crave to be jacked into the metaverse to avoid having AI tell you how to live?


Are you ready to de-google your life, reject these corporate values, and develop your own original thoughts? Or would you like to merge with Google? Comment below!


Are the Borg here to serve our needs, or serve us up for dinner? Image generated with my buddy's "Arty" Discord Bot @ https://discord.gg/yn56DuFe


