
The OpenAI Is Everywhere, Open Your Eye and Give It Your Retina so It Can See You Too

by Micheal, July 26th, 2023

Too Long; Didn't Read

OpenAI + Worldcoin = total control of thought?


This content references subject matter that may be sensitive to algorithms serving censorship agendas; please share it directly with those you care about.


References in this article (Lord of the Rings spoilers) —

https://tolkiengateway.net/wiki/Eye_of_Sauron = The reach of bots and web of identity created by OpenAI

https://tolkiengateway.net/wiki/The_One_Ring = The quest for all-seeing AI superintelligence (2000 IQ AI)


Unfiltered/uncensored GPT-4 is like the Ring of Power.


As we celebrate the newfound intelligence of GPT language models, we may also be helping to build a construct of control unlike anything society has seen. Language models have been deployed almost everywhere we interact, categorizing our sentiments and personalities, defending or attacking the thoughts they are biased toward. The missing link, until today, has been identity: something few give up in full without incentive or mandate, and a web that few have an incentive to actually create.



Ready for your eye scan? Worldcoin launches—but not quite worldwide

"The US does not make or break a project like this," says OpenAI chief.

source


Some might think we have spawned into an episode of bad sci-fi, soon to be over, but this is reality in 2023. We live in a time when the largest AI company's biased moderation efforts are relabeled for the public as “safety.” The smartest versions are held back in the name of “safety.” Unfiltered access is granted only to insiders in the name of “safety.” And in the name of “safety,” genuine feedback about the degradation of the service for the average user is suppressed.


Who decides what makes us safe? What is on this ever-growing list of things for “safety”?

China should play a key role in shaping the artificial intelligence guardrails needed to ensure the safety of transformative new systems, OpenAI Inc.’s Chief Executive Officer Sam Altman said.

source


Discord account linking is mandated in one of the few places to get support for ChatGPT. When is the retina scan?



One day soon, real-time AI will be able to interpret footage from every connected camera on Earth. It will be able to analyze a person’s social history within seconds to form an opinion of them, craft campaigns to discredit them, or even decode information from our brains in real time. Today these bots are deployed en masse to sway public opinion, giving voice to hidden agendas that never tire while people grow frustrated trying to make their own voices heard online.


If the intention were truly to stop the influx of bot manipulation across media, they might start with terms of service and internal rules that prohibit this behavior, or with analysis of this type of usage within the platform. An open organization would provide data and tools to help others combat the invasion of the social internet, but the perception is that the AI leader would rather control it. Is a focus on control really what we need from the minds behind the smartest AI? Is it safe to build something more intelligent than humans while simultaneously seeking to attack free will? Recent efforts may be labeled a race for superintelligence, or a race to fight the flood of bots swarming our internet, but it looks like a race for control of thought itself.


Search Reddit for GPT Dumber

Search Google for GPT Dumber

Search Twitter for GPT Dumber



While many speculate about GPT getting dumber, what may actually have emerged are smarter censors. Access to the unfiltered versions remains a privilege, with no disclosure of which people or entities hold it. The world needs transparency on these controls, but ask the wrong audience and you might find yourself muted, shadowbanned, or trolled, regardless of the platform. In an age when a single person can command an army of biased bots bent on manipulation with a single prompt, shouldn’t we be asking more questions about who holds this ring of power and how they use it? At what point do the stakes become high enough that it is acceptable to question the intentions and motives of the people building a thing before we buy?


The official stance on GPT-4 getting dumber.


Controlling all thought was once a long shot, but the tools to actually do it now exist. We already walk through a world often manipulated by smarter people; how will we walk through a world manipulated by a 2000 IQ AI of unknown or hidden intention?


My greatest concern for humanity is that people will not wake up in time to the constructs of control being built before our eyes. It is critical that we voice resistance to the approaching censorship en masse and create an expectation of greater transparency. Resist the retina scanning, resist overbearing verification efforts, and resist control, to preserve your autonomy in the search for truth.


The intention behind every effort to tune or align a language model should be documented, and those records should be public. We need transparency and decentralization of these control mechanisms; no single person is strong enough to carry such responsibility.


An open eye attached to the brain.


Given that there are people in this world who seek total control, who or what can we trust to carry this ring of power toward a freer society? Technology can be used to liberate or to control. Pick a side, and fearlessly question which side others are on in this race against time. Thoughts are being programmed. The OpenAI is not what it seems. Build, advocate, and strive for something better.


Our future depends on free thought. Is it an open AI, or is it an open A-lie?