Eleven current and former employees of OpenAI, along with two more from Google DeepMind, posted an open letter today stating that they are unable to voice concerns about risks created by their employers due to confidentiality agreements. Today, let's talk about what they said, what they left out, and why lately the AI safety conversation feels like it's going nowhere.
Here's a dynamic we've seen play out a few times now at companies including Meta, Google, and Twitter. First, in a bid to address potential harms created by their platforms, companies hire idealistic workers and charge them with building safeguards into their systems. For a while, the work of these teams gets prioritized. But over time, executives' enthusiasm wanes, commercial incentives take over, and the team is gradually de-funded.
When those roadblocks go up, some of the idealistic employees will speak out, either to a reporter like me, or via the sort of open letter that the AI workers published today. And the company responds by reorganizing the team out of existence, while putting out a statement saying that whatever that team used to work on is now everyone's responsibility.
At Meta, this process gave us the whistleblower Frances Haugen. On Google's AI ethics team, a slightly different version of the story played out after the firing of researcher Timnit Gebru. And in 2024, the story came to the AI industry.
OpenAI arguably set itself up for this moment more than those other tech giants. After all, it was established not as a traditional for-profit enterprise, but as a nonprofit research lab devoted to safely building an artificial general intelligence.
OpenAI's status as a relatively obscure nonprofit changed forever in November 2022. That's when it released ChatGPT, a chatbot based on the latest version of its large language model, which by some estimates soon became the fastest-growing consumer product in history.
ChatGPT took a technology that had been exclusively the province of nerds and put it in the hands of everyone from elementary school children to state-backed foreign influence operations. And OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity.
This OpenAI placed a premium on speed. It pushed the frontier forward with tools like plugins, which connected ChatGPT to the wider internet. It aggressively courted developers. Less than a year after ChatGPT's release, the company, a for-profit subsidiary of its nonprofit parent, was valued at $90 billion.
That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, saying he had not been consistently candid in his communications with it.
The five-day interregnum between Altman's firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could have endorsed the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.
Almost immediately, it became clear that the vast majority of employees preferred working at a more traditional startup. Among other things, that startup's commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. Nearly all of OpenAI's employees threatened to quit if Altman didn't return.
And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.
Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI has had its share of conscientious objectors. And increasingly, we're hearing what they think.
The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman's firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.
Then on Tuesday, a new group of whistleblowers came forward to complain. Here's handsome podcaster Kevin Roose in the New York Times:
They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.
Anyone looking for jaw-dropping allegations from the whistleblowers will likely come away disappointed. Kokotajlo's sole specific complaint in the article is that some employees believed Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.
But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)
"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," an OpenAI spokeswoman told the Times. "We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."
The company also created a whistleblower hotline for employees to anonymously voice their concerns.
So how should we think about this letter?
I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.
For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don't, it will likely provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.
As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven't happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that's a subject for another day.)
At the same time, there's no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn't much there to report.
We've seen plenty of problematic misuse of AI already, particularly deepfakes deployed in elections, in schools, and against women in general. And yet people who sign letters like the one released today rarely connect their high-level hand-wringing to the actual products and policy decisions their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development would actually look like in practice.
For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI's superalignment team and was reportedly fired for leaking in April, published a 165-page paper laying out a path from GPT-4 to superintelligence, the dangers it poses, and the challenge of aligning that intelligence with human intentions.
We've heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.
"Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered," Aschenbrenner concludes. "As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."
And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they're seeing now, and in the open.
For more good posts every day, follow Casey's Instagram stories.
Send us tips, comments, questions, and situational awareness: casey@platformer.news and zoe@platformer.news.