Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers thrumming in the corners. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests. The sorts of objects that could portend a future of tech-enabled ease, or one of constant vigilance.
Grace, the lead researcher at a nonprofit called A.I. Impacts, describes her job as thinking about whether A.I. will destroy the world. She spends her time writing theoretical papers and blog posts on complicated decisions related to a burgeoning subfield known as A.I. safety. She is a nervous smiler, an oversharer, a bit of a mumbler; she's in her thirties, but she looks almost like a teen-ager, with a middle part and a round, open face. The apartment is crammed with books, and when a friend of Grace's came over, one afternoon in November, he spent a while gazing, bemused but nonjudgmental, at a few of the spines: "Jewish Divorce Ethics," "The Jewish Way in Death and Mourning," "The Death of Death." Grace, as far as she knows, is neither Jewish nor dying. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the possessions of the previous occupant, his recently deceased ex-wife, to be left intact. "Sort of a relief, honestly," Grace said. "One set of decisions I don't have to make."
She was spending the afternoon preparing dinner for six: a yogurt-and-cucumber salad, Impossible beef gyros. On one corner of a whiteboard, she had split her pre-party tasks into painstakingly small steps ("Chop salad," "Mix salad," "Mold meat," "Cook meat"); on other parts of the whiteboard, she'd written more gnomic prompts ("Food area," "Objects," "Substances"). Her friend, a cryptographer at Android named Paul Crowley, wore a black T-shirt and black jeans, and had dyed black hair. I asked how they knew each other, and he responded, "Oh, we've crossed paths for years, as part of the scene."
It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists, or, when they're feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. hacker houses, the San Francisco Standard semi-ironically called the area "Cerebral Valley."
A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves effective accelerationists, or e/accs (pronounced "e-acks"), and they believe A.I. will usher in a utopian future (interstellar travel, the end of disease), as long as the worriers get out of the way. On social media, they troll doomsayers as "decels," "psyops," "basically terrorists," or, worst of all, "regulation-loving bureaucrats." "We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars," a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)
Grace's dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as "a nexus of the Bay Area AI scene." At gatherings like these, it's not uncommon to hear someone strike up a conversation by asking, "What are your timelines?" or "What's your p(doom)?" Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on high-level questions, such as "What roles will AI systems play in society?" and "Will they pursue goals?" When they're not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace's living room.
The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that forecasts the future. Grace offered wine and beer, but most people opted for nonalcoholic canned drinks that defied easy description (a fermented energy drink, a hopped tea). They took their Impossible gyros to Grace's sofa, where they talked until midnight. They were courteously disagreeable, and surprisingly patient about reconsidering basic assumptions. "You can condense the gist of the worry, seems to me, into a really simple two-step argument," Crowley said. "Step one: We're building machines that might become vastly smarter than us. Step two: That seems pretty dangerous."
"Are we sure, though?" Josh Rosenberg, the C.E.O. of the Forecasting Research Institute, said. "About intelligence per se being dangerous?"
Grace noted that not all intelligent species are threatening: "There are elephants, and yet mice still seem to be doing just fine."
"Rabbits are certainly more intelligent than myxomatosis," Michael Nielsen, the quantum-computing expert, said.
Crowley's p(doom) was well above eighty per cent. The others, wary of committing to a number, deferred to Grace, who said that, "given my deep confusion and uncertainty about this, which I think nearly everyone has, at least everyone who's being honest," she could only narrow her p(doom) to between ten and ninety per cent. Still, she went on, "a ten-per-cent chance of human extinction is obviously, if you take it seriously, unacceptably high."
They agreed that, amid the thousands of reactions to ChatGPT, one of the most refreshingly candid assessments came from Snoop Dogg, during an onstage interview. Crowley pulled up the transcript and read aloud. "This is not safe, 'cause the A.I.s got their own minds, and these motherfuckers are gonna start doing their own shit," Snoop said, paraphrasing an A.I.-safety argument. "Shit, what the fuck?" Crowley laughed. "I have to admit, that captures the emotional tenor much better than my two-step argument," he said. And then, as if to justify the moment of levity, he read out another quote, this one from a 1948 essay by C. S. Lewis: "If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things (praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts), not huddled together like frightened sheep."
Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include "Harry Potter and the Methods of Rationality," a piece of fan fiction running to more than six hundred thousand words, and "The Sequences," a gargantuan series of essays about how to sharpen one's thinking. The informal collective that grew up around these writings (first in the comments, then in the physical world) became known as the rationalist community, a small subculture devoted to avoiding the typical failure modes of human reason, often by arguing from first principles or quantifying potential risks. Nathan Young, a software engineer, told me, "I remember hearing about Eliezer, who was known to be a heavy guy, onstage at some rationalist event, asking the crowd to predict if he could lose a bunch of weight. Then the big reveal: he unzips the fat suit he was wearing. He'd already lost the weight. I think his ostensible point was something about how it's hard to predict the future, but mostly I remember thinking, What an absolute legend."
Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that "Eliezer ages sixteen through twenty assumed that A.I. was going to be great fun for everyone forever, and wanted it built as soon as possible." In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. "I didn't see why an A.I. would kill everyone, but I felt compelled to systematically study the question," he said. "When I did, I went, 'Oh, I guess I was wrong.'" He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.
The existential threat posed by A.I. had always been among the rationalists' central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.
Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about "scheming A.I.s" that might convince their human handlers they're safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. "This can be a lot, I realize," he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen in passing: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from "Terminator." To be dangerous, A.G.I. doesn't have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we're screwed.