"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them." That's a quote from Leopold Aschenbrenner, a San Francisco-based AI researcher in his mid-20s who was recently fired from OpenAI and who, according to his own website, recently founded an investment firm focused on artificial general intelligence. Aschenbrenner, a former economics researcher at Oxford University's Global Priorities Institute, believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why. I spent the last weekend reading the essay, "Situational Awareness: The Decade Ahead," and I now understand a lot better what is going on, if not in AI, then at least in San Francisco, sanctuary of tech visionaries.
Let me first give you the gist of his argument. Aschenbrenner says that current AI systems are scaling up incredibly quickly. The most relevant factors driving the growth of AI performance at the moment are the expansion of computing clusters and improvements in the algorithms. Neither of these factors is yet remotely saturated. That's why, he says, performance will continue to improve exponentially for at least several more years, and that is sufficient for AI to exceed human intelligence on pretty much all tasks by 2027. We would then have artificial general intelligence (AGI), according to Aschenbrenner.
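To make that extrapolation concrete, here is a minimal sketch of the arithmetic behind it. Aschenbrenner counts "orders of magnitude" (OOMs) of effective compute gained from hardware scale-up plus algorithmic progress; the per-year growth rates below are illustrative assumptions of mine, not his exact figures.

```python
# Back-of-envelope extrapolation of "effective compute" growth.
# The growth rates are illustrative assumptions, not Aschenbrenner's numbers.

def effective_compute_ooms(years, compute_oom_per_year=0.5, algo_oom_per_year=0.5):
    """Orders of magnitude gained from hardware scale-up plus algorithmic progress."""
    return years * (compute_oom_per_year + algo_oom_per_year)

gain = effective_compute_ooms(years=3)  # e.g. 2024 -> 2027
print(f"~{gain:.0f} orders of magnitude, i.e. a factor of {10**gain:,.0f}")
```

Under these assumed rates, three years of scaling yields a thousandfold increase in effective compute, which is the kind of jump on which the 2027 claim rests.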
It is maybe not so surprising that someone who makes money from AGI coming up soon argues that it will come up soon. Nevertheless, his argument is worth considering. He predicts that a significant contribution to this trend will be what he calls "unhobbling" of the AI models. By this he means that current AIs have limitations that can easily be overcome and soon will be. For example, a lack of memory, or that they can't themselves use computing tools. Algorithms are also likely to develop away from large language models soon toward more efficient learning methods. (Aschenbrenner doesn't mention it, but personally I think a big game changer will be symbolic reasoning, as good reasoning is basically logic, and we need more of it.)
So far, I agree with Aschenbrenner. I think he's right that it won't be long now until AI outsmarts humans. I believe this not so much because I think AIs are smart but because we are not. The human brain is not a good thinking machine (I speak from personal experience): it's slow and makes constant mistakes. Just speeding up human thought and avoiding faulty conclusions will have a dramatic impact on the world.
I also agree that soon after this, artificial intelligence will be able to research itself and to improve its own algorithms. Where I get off the bus is when Aschenbrenner concludes that this will lead to the "intelligence explosion," formerly known as the technological singularity, accompanied by extremely rapid progress in science, technology, and society overall. The reason I don't believe this is going to happen is that Aschenbrenner underestimates the two major limiting factors: energy and data.
Let us first look at what he says about energy limitations. Training AI models takes an enormous number of computing operations and, with them, an enormous amount of energy. According to Aschenbrenner, by 2028 the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they'll run at 100 gigawatts at a cost of a trillion dollars.
For context, a typical power plant delivers something in the range of 1 gigawatt or so. So that means building 10 power plants in addition to the supercomputer cluster by 2028. What would all those power stations run on? According to Aschenbrenner, natural gas. Even the 100-gigawatt cluster is "surprisingly doable," he writes, because that would take only about 1,200 or so new natural gas wells. And if that doesn't work, I guess they can just go the Sam Altman way and switch to nuclear fusion power.
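It's worth running the arithmetic on those figures. The ~1 gigawatt per plant and "about 1,200 wells" numbers come from the text above; everything derived from them here is my own rough sanity check, not Aschenbrenner's calculation.

```python
# Back-of-envelope check of the essay's energy figures.
GW = 1e9  # watts

cluster_power_w = 100 * GW   # the projected 2030 cluster
plant_output_w = 1 * GW      # a typical large power plant, as in the text
plants_needed = cluster_power_w / plant_output_w

# What the "about 1,200 wells" figure implies about average output per well:
well_output_w = cluster_power_w / 1200

# Energy over a year of continuous operation, for scale. (For comparison,
# total U.S. electricity consumption is roughly 4,000 TWh per year.)
annual_twh = cluster_power_w * 8760 / 1e12  # hours per year -> terawatt-hours

print(f"{plants_needed:.0f} one-gigawatt plants")
print(f"~{well_output_w / 1e6:.0f} MW implied per gas well")
print(f"~{annual_twh:.0f} TWh of electricity per year")
```

A hundred 1-gigawatt plants, and close to a quarter of current U.S. electricity consumption, just to keep one computing cluster running: that is the scale being casually waved through as "doable."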
Then there's the data. Currently the most common AIs, large language models like GPT and Meta's Llama, have already been trained on much of the data that is available online. Absorbing the likes of Wikipedia and Google Books was the easy part. Getting new data is much harder. Of course, there is new data on the Internet every day, but that's not substantial compared to what is already there, and it's for the most part not information that AIs need to become better; it's just the information they need to stay up to date. They need data of the kind that is, for example, stored in people's brains, what philosophers call "tacit knowledge." Or properties of physical objects that you can't extract from video footage. So, even if your algorithms get better and learn faster all the time, a computer can't learn from what isn't there.
No problem, Aschenbrenner says. You deploy robots that collect novel real-world data. Where do you get those robots from? Well, Aschenbrenner thinks that AIs will "solve robotics," meaning presumably any remaining robot problems (like recognizing objects in any environment and performing all manner of tasks successfully without human intervention). And the first AI-created robots will build factories to build more of these robots. "Robo-factories could produce more robo-factories in an unconstrained way, leading to an industrial explosion," he writes. Think: self-replicating robot factories quickly covering all of the Nevada desert.
Alright. But what will they build the factories with, I wonder? Resources that will be mined and transported by, let me guess, more robots? Perhaps those will be built in the factories constructed from the resources mined by the robots. Do you see the problem?
What Aschenbrenner misses is that creating 100-gigawatt supercomputing clusters or huge robot workforces will not just require AGI. It will require changing the entire world economy and the products and services it provides. You can't ramp up the production of one high-end product without also ramping up the production of all the components that contribute to it. It requires physical changes, stuff that needs to be moved, plans that need to be approved, people who have to do things. And everything that needs to be done by people is very slow. There's a reason CERN spent $20 million and took years just on a plan to build its next, bigger collider before even doing anything.
Unlike the Large Hadron Collider, which is 17 miles in circumference, a 100-gigawatt supercomputing cluster itself probably wouldn't be all that large; in fact, you want to keep it compact because the larger it gets, the more you have to transport data around. But the size of the plant that would power the 100-gigawatt supercomputing cluster depends strongly on the energy source you use to supply it. Natural gas power plants tend to be relatively small, while nuclear power tends to take up more real estate (because of safety requirements). Wind and solar farms take up even more terrain. Nuclear fusion is inherently a compact energy source, but since we don't have any working fusion power stations, how little space it would take up is anyone's guess.
Leaving aside that climate change is about to crush the world economy, the robot revolution will happen, eventually, but not within a couple of years. It'll take decades at best. One must have spent a lot of time group-thinking in San Francisco and Oxford to lose touch with the real world so much that one can seriously think it's possible to build a 100-gigawatt supercomputing cluster and a robot workforce within six years.
That said, I think Aschenbrenner is right that AGI will almost certainly be able to unlock huge progress in science and technology. This is simply because a lot of scientific knowledge currently goes to waste just because no human can read everything that's been published in the scientific literature. But AGI will. There must be lots of insights hidden in the scientific literature that can be unearthed without doing any new research whatsoever.
For example, it could find new drugs by understanding that a compound that was previously unsuccessful in treating one illness might be good for treating another. It could see that a thorny mathematical problem in one area of science has already been solved in another. It might find correlations in data that no one ever thought of looking for, maybe settling the debate over whether dark matter is real or finding evidence of new physics. If I had a few billionaire friends, that's what I'd tell them to spend their bucks on.
The second half of Aschenbrenner's essay is dedicated to the security risks that will go along with AGI, and I largely agree with him.
Most people on this planet, including all governments, currently dramatically underestimate just how big an impact AGI will make, and how much power a superintelligence will give to anyone in possession of it. If they appreciated its future impact, they would not let private companies develop it basically unrestricted. Once they wake up, governments will rapidly try to gain control of whatever AGI they can get their hands on and put severe limitations on its use.
Let me stress: It's not that I think governments restricting AI research is good, or that I want this to happen; I merely think this is what will happen. For practical purposes, the quasi-nationalization of AI will probably mean that high-compute queries, like "overthrowing the United States government," will require security clearance.
Aschenbrenner also discusses the super-alignment problem: that it will be basically impossible to make sure an intelligence vastly superior to our own will align with our values. While I agree that this is a serious problem that requires consideration, I think it's not the most urgent problem right now. Before we worry about superintelligent AI trying to rule the world itself, we need to worry about humans trying to abuse it to rule the world.
What can we extrapolate from a trend of wrong predictions? In 1960, Herbert Simon, a Nobel Prize and Turing Award winner, speculated that machines would be "capable, within 20 years, of doing any work a man can do." In the 1970s, cognitive scientist Marvin Minsky predicted that human-level machine intelligence was just a few years away. In a 1993 essay, computer scientist Vernor Vinge predicted that the technological singularity would occur within 30 years.
What I take away from this list of failed predictions is that people involved in frontier research tend to vastly overestimate the pace at which the world can be changed. I wish that we actually lived in the world that Aschenbrenner seems to think we live in. I can't wait for superhuman intelligence. But I'm afraid the intelligence explosion isn't as near as he thinks.
Lead image: Mykola Holyutyak / Shutterstock
Posted on June 20, 2024
Sabine Hossenfelder is a theoretical physicist at the Munich Center for Mathematical Philosophy, in Germany, focusing on modifications of general relativity, phenomenological quantum gravity, and the foundations of quantum mechanics. She is the creative director of the YouTube channel "Science without the gobbledygook," where she talks about recent scientific developments and debunks hype. Her latest book is Existential Physics: A Scientist's Guide to Life's Biggest Questions. Follow her on X (formerly known as Twitter) @skdh.
Originally published as "A Reality Check on Superhuman AI" in Nautilus.