For a few years now, lots of people have been wondering what Sam Altman thinks about the future, or perhaps what he knows about it, as the CEO of OpenAI, the company that kicked off the recent AI boom. He's been happy to tell them about the end of the world. "If this technology goes wrong, it can go quite wrong," he told a Senate committee in May 2023. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said last June. A "misaligned superintelligent AGI could cause grievous harm to the world," he wrote in a blog post on OpenAI's website that year.
Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. "AI will probably, like, most likely lead to the end of the world, but in the meantime, there'll be great companies," he cracked during an interview in 2015. "Probably AI will kill us all," he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plans to flee there with friend Peter Thiel in the event of an apocalypse (either there or a big patch of land in Big Sur he could fly to). Then Altman wrote on his personal blog that superhuman machine intelligence is "probably the greatest threat to the continued existence of humanity." Returning, again, to last year: "The bad case, and I think this is important to say, is like lights out for all of us." He wasn't alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name to a group statement arguing that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," joining a range of people in and interested in AI, including notable figures at Google, OpenAI, Microsoft, and xAI.
The tech industry's next big thing might be a doomsday machine, according to the tech industry, and the race is on to summon a technology that might end the world. It's a strange mixed message, to say the least, but it's hard to overstate how thoroughly the apocalypse, invoked as a serious worry or a reflexive aside, has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen long-standing theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material, fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.
In the last few months, though, companies including OpenAI have started telling a slightly different story. After years of warning about infinite downside risk (and acting as though they had no choice but to take it), they're focusing on the positive. The doomsday machine we're working on? Actually, it's a powerful enterprise software platform. From the Financial Times:
The San Francisco-based company said on Tuesday that it had started producing a new AI system "to bring us to the next level of capabilities" and that its development would be overseen by a new safety and security committee.
But while OpenAI is racing ahead with AI development, a senior OpenAI executive seemed to backtrack on previous comments by its chief executive Sam Altman that it was ultimately aiming to build a superintelligence far more advanced than humans.
Anna Makanju, OpenAI's vice-president of global affairs, told the Financial Times in an interview that its "mission" was to build artificial general intelligence capable of "cognitive tasks that are what a human could do today."
"Our mission is to build AGI; I would not say our mission is to build superintelligence," Makanju said.
The story also notes that in November, in the context of seeking more money from OpenAI partner Microsoft, Altman said he was spending a lot of time thinking about how to build superintelligence, but also, more gently, that his company's core product was, rather than a fearsome self-replicating software organism with unpredictable emergent traits, a form of "magic intelligence in the sky."
Shortly after that statement, Altman would be temporarily ousted from OpenAI by a board that found he had not been "consistently candid" with it, a move that triggered external speculation that a major AI breakthrough had spooked safety-minded members. (More recent public statements from former board members were forceful but personal, accusing Altman of a pattern of lying and manipulation.)
After his return, Altman consolidated his control of the company, and some of his internal antagonists left or were pushed out. OpenAI then dissolved the team charged with achieving "superalignment" (in the company's words, managing risks that could lead to "the disempowerment of humanity or even human extinction") and replaced it with a new safety team run by Altman himself, who also stood accused of voice theft by Scarlett Johansson. Its safety announcement was terse and notably lacking in evocative doomsaying. "This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment." It's the sort of careful, vague corporate language you might expect from a company that's comprehensively dependent on one tech giant (Microsoft) and is closing in on a massive licensing deal with its competitor (Apple).
In other news, longtime AI doomsayer Elon Musk, who co-founded OpenAI but split with the firm and later (incoherently and perhaps disingenuously) sued it for abandoning its nonprofit mission in pursuit of profit, raised $6 billion for his unapologetically for-profit competitor, xAI. His grave public warnings about superintelligence now take the form of occasional X posts about memes.
There are a few different ways to process this shift. If you're deeply worried about runaway AI, this is just a short horror story in which a superintelligence is manifesting itself right in front of our eyes, helped along by the few who both knew better and were in any sort of position to stop it, in some sort of short-sighted exchange for wealth. What's happened so far is basically compatible with your broad prediction and well-articulated warnings that far predated the current AI boom: all it took for mankind to summon a vengeful machine god was the promise of ungodly sums of money.
Similarly, if you believe in and are excited about runaway AI, this is all basically great. The system is working, the singularity is effectively already here, and failed attempts to alter or slow AI development were, in fact, near misses with another sort of disaster (this perspective exists among at least a few people at OpenAI).
If you're more skeptical of AI-doomsday predictions, you might generously credit this shift to a gradual realization among industry leaders that current generative-AI technology (now receiving hundreds of billions of dollars of investment and deployed in the wild at scale) is not careening toward superintelligence, consciousness, or rogue malice. They're simply adjusting their story to fit the facts of what they're seeing.
Or maybe, for at least some in the industry, apocalyptic stories were plausible in the abstract, compelling, attention-grabbing, and interesting to talk about, and turned out to be useful marketing devices. They were stories that dovetailed nicely with the concerns of some of the domain experts they needed to work at the companies, but which seemed like harmless and ultimately cautious intellectual exercises to those who didn't share them (Altman, it should be noted, is an investor and executive, not a machine-learning engineer or AI researcher). Apocalyptic warnings were an incredible framing device for a class of companies that needed to raise enormous amounts of money to function, a clever and effective way to make an almost cartoonishly brazen proposal to investors ("we are the best investment of all time, with infinite upside") in the disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital. Routine acknowledgments of abstract danger were also useful for feigning openness to theoretical regulation ("help us help you avoid the end of the world!") while fighting material regulation in private. They raised the stakes to intoxicating heights.
As soon as AI companies made actual contact with users, clients, and the general public, though, this apocalyptic framing flipped into a liability. It suggested risk where risk wasn't immediately evident. In a world where millions of people engage casually with chatbots, where every piece of software suddenly contains an awkward AI assistant, and where Google is pumping AI content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, somewhat counterintuitively, feel a bit like a non sequitur. Encounters with modern chatbots and LLM-powered software might cause users to wonder about their jobs, or trigger a general sense of wonder or unease about the future; they do not, in their current state, seem to strike fear in users' hearts. Mostly, they're showing up as new features in old software used at work.
The AI industry's sudden disinterest in the end of the world might also be understood as an exaggerated version of corporate America's broader turn away from talking about ESG and DEI: as profit driven, sure, but also as evidence that the initial commitments to mitigating harmful externalities were themselves disingenuous and profit motivated at the time, and simply outlived their usefulness as marketing stories. It signals a loss of narrative control. In 2022, OpenAI could frame the future however it wanted. In 2024, it's dealing with external expectations about the present, from partners and investors that are less interested in speculating about the future of mankind, or conceptualizing intelligence, than they are in getting returns on their considerable investments, preferably within the fiscal year.
Again, none of this is particularly comforting if you think that Altman and Musk were right to warn about ending the world, even by accident, even out of craven self-interest, or if you're concerned about the merely very bad externalities (the many small apocalypses) that AI deployment is already producing and is likely to produce.
But AI's sudden rhetorical downgrade might be clarifying, too, at least about the behaviors of the largest firms and their leaders. If OpenAI starts communicating more like a company, it will be less tempting to mistake it for something else, as it argues for the imminence of a benign but barely less speculative variation of AGI, with its softer implication of infinite returns by way of semi-apocalyptic workplace automation. If its current leadership ever believed what they were saying, they're certainly not acting like it, and in hindsight, they never really were. The apocalypse was just another pitch. Let it be a warning about the next one.
Original post:
What Ever Happened to the AI Apocalypse? - New York Magazine