
What is OpenAI Hiding?

by Sheharyar Khan, July 18th, 2024

Too Long; Didn't Read

OpenAI is rushing to build an advanced AI with sophisticated reasoning capabilities under a closely guarded project code-named Strawberry.

OpenAI may be one of the most talked about companies in the world, but the generative AI startup itself is quite mum on how it does things. A new report this past week sheds some light on why that may be: restrictive non-disclosure agreements.


Apparently, OpenAI makes its employees sign agreements that could penalize them if they raise concerns about the company with federal authorities. And concerns there are to be had.


While going down this rabbit hole, I discovered that just last month, a group of predominantly former OpenAI employees issued an open letter highlighting how difficult it was to raise the alarm on the development of safe artificial intelligence.


The group, which also included a handful of former Google DeepMind and Anthropic employees, said that frontier AI companies had a strong financial incentive to avoid effective oversight and could not be relied upon to share nonpublic information about the capabilities and limitations of their systems with governments or civil society.


The letter went on to say that only current and former employees could hold such companies accountable to the public, especially given that "many of the risks we are concerned about are not yet regulated."


So what prompted the former OpenAI employees to draft such a letter, and what exactly is it that the Microsoft-backed company is hiding?


Most of these former employees' concerns center on the supposed risks of artificial intelligence. These risks range from "the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," per the letter.


One would think this was just a bunch of people spouting conspiracy theories, but all of these risks have been acknowledged by OpenAI, Anthropic, and Google themselves, and even by some governments, including the US.


Kinda makes you wonder whether the software engineer who was fired by Google just a few years ago for claiming the company's AI chatbot was self-aware was onto something.


Anyway, Vox reported in May that OpenAI's employment agreement contained clauses specifically barring workers from ever saying anything critical about the startup; otherwise, they would lose all the vested equity they had earned during their tenure at the company.


After the report, Sam Altman publicly tweeted that he was not aware of such a clause and promised to change his company's offboarding documents. But Vox then published a follow-up report casting doubt on Altman's claim.


But all that aside, what may have slipped off people's radars is a series of tweets by OpenAI's former safety leader Jan Leike at the time he left the company.


Back in May, Leike announced he had left over disagreements with the company about its core priorities.


"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products," Leike tweeted.


Essentially, we have a group of OpenAI employees who think something is not right with the way the company is developing AI and are highlighting pressure tactics and legal threats by the startup to prevent them from talking about it.


Some of the safety issues at OpenAI were recently corroborated by The Washington Post, which reported that the company's safety team was pressured into rushing GPT-4 Omni out the door even though the model might not have been ready for release.


It is in this backdrop that Reuters now reports that OpenAI whistleblowers want the US Securities and Exchange Commission to investigate the company's restrictive non-disclosure agreements. The whistleblowers' complaint has the backing of at least one US lawmaker.


Meanwhile, OpenAI is rushing to build an even more advanced AI capable of sophisticated reasoning under a closely guarded project code-named Strawberry. Based on everything we know about Strawberry, it sounds pretty close to what you would expect of an artificial general intelligence, including the ability to navigate the internet and perform deep research.


By all accounts, it would seem that OpenAI is prioritizing speed over safety, and if Strawberry truly is as great as media reports claim, we are in for a tough ride.



In Other News... 📰

  • WazirX Hacked for $230M, Largely in SHIB, as Elliptic Says North Korea Behind Attack — via CoinDesk
  • TTT models might be the next frontier in generative AI — via TechCrunch
  • Amazon Prime Day ‘major cause of injuries’ for workers, Senate finds — via CNN
  • Samsung agrees to acquire British startup Oxford Semantic for AI — via Reuters
  • Meta won't offer future multimodal AI models in EU — via Axios
  • Bye-bye bitcoin, hello AI: Texas miners leave crypto for next new wave — via CNBC




And that's a wrap! Don't forget to share this newsletter with your family and friends! See y'all next week. PEACE! ☮️

Sheharyar Khan, Editor, Business Tech @ HackerNoon

*All rankings are current as of Monday. To see how the rankings have changed, please visit HackerNoon's Tech Company Rankings page.


Tech, What the Heck!? is a once-weekly newsletter written by HackerNoon editors that combines HackerNoon's proprietary data with newsworthy tech stories from around the internet. Humorous and insightful, the newsletter recaps trending events that are shaping the world of tech. Subscribe here.