A Reality Check on Superhuman AI

Published on June 21st, 2024


"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them." That's a quote from Leopold Aschenbrenner, a San Francisco-based AI researcher in his mid-20s who was recently fired from OpenAI and who, according to his own website, recently founded an investment firm focused on artificial general intelligence. Aschenbrenner, a former economics researcher at Oxford University's Global Priorities Institute, believes that artificial superintelligence is just around the corner and has written a 165-page essay explaining why. I spent the last weekend reading the essay, "Situational Awareness: The Decade Ahead," and I now understand a lot better what is going on, if not in AI, then at least in San Francisco, sanctuary of tech visionaries.

Let me first give you the gist of his argument. Aschenbrenner says that current AI systems are scaling up incredibly quickly. The most relevant factors driving the growth of AI performance at the moment are the expansion of computing clusters and improvements in the algorithms. Neither of these factors is yet remotely saturated. That's why, he says, performance will continue to improve exponentially for at least several more years, and that is sufficient for AI to exceed human intelligence on pretty much all tasks by 2027. We would then have artificial general intelligence (AGI), according to Aschenbrenner.


It is maybe not so surprising that someone who makes money from AGI arriving soon argues that it will arrive soon. Nevertheless, his argument is worth considering. He predicts that a significant contribution to this trend will be what he calls "unhobbling" of the AI models. By this he means that current AIs have limitations that can easily be overcome and soon will be: for example, a lack of memory, or that they can't themselves use computing tools. Algorithms are also likely to soon develop away from large language models to more efficient learning methods. (Aschenbrenner doesn't mention it, but personally I think a big game changer will be symbolic reasoning, as good reasoning is basically logic, and we need more of it.)

So far, I agree with Aschenbrenner. I think he's right that it won't be long now until AI outsmarts humans. I believe this not so much because I think AIs are smart but because we are not. The human brain is not a good thinking machine (I speak from personal experience): it's slow and makes constant mistakes. Just speeding up human thought and avoiding faulty conclusions will have a dramatic impact on the world.

I also agree that soon after this, artificial intelligence will be able to research itself and to improve its own algorithms. Where I get off the bus is when Aschenbrenner concludes that this will lead to the intelligence explosion (formerly known as the technological singularity), accompanied by extremely rapid progress in science, technology, and society overall. The reason I don't believe this is going to happen is that Aschenbrenner underestimates the two major limiting factors: energy and data.

Let us first look at what he says about energy limitations. Training AI models takes an enormous number of computing operations, and hence an enormous amount of energy. According to Aschenbrenner, by 2028 the most advanced models will run on 10 gigawatts of power at a cost of several hundred billion dollars. By 2030, they'll run at 100 gigawatts at a cost of a trillion dollars.

For context, a typical power plant delivers something in the range of 1 gigawatt or so. That means building 10 power plants, in addition to the supercomputer cluster, by 2028. What would all those power stations run on? According to Aschenbrenner, on natural gas. "Even the 100 [gigawatt] cluster is surprisingly doable," he writes, because it would take only about 1,200 or so new natural gas wells. And if that doesn't work, I guess they can just go the Sam Altman way and switch to nuclear fusion power.
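For a rough sense of what these numbers mean, here is a minimal back-of-envelope sketch in Python. It only rearranges the round figures quoted above; the one-gigawatt-per-plant value and the implied output per gas well are illustrative assumptions, not engineering estimates from the essay.

# Back-of-envelope: generation capacity implied by the quoted cluster sizes.
# Assumptions (not from the essay): a typical large power plant delivers
# about 1 gigawatt; the per-well figure is simply what the "~1,200 wells
# for 100 gigawatts" claim implies, i.e. roughly 0.08 GW of continuous supply.
CLUSTERS_GW = {"2028 cluster": 10, "2030 cluster": 100}
GW_PER_PLANT = 1.0
GW_PER_GAS_WELL = 100 / 1200

for name, gigawatts in CLUSTERS_GW.items():
    plants = gigawatts / GW_PER_PLANT
    wells = gigawatts / GW_PER_GAS_WELL
    print(f"{name}: {gigawatts} GW -> ~{plants:.0f} typical power plants, "
          f"or ~{wells:.0f} new gas wells")

Run as written, this prints roughly 10 plants (or about 120 wells) for the 2028 cluster and 100 plants (or about 1,200 wells) for the 2030 one, which is the scale of new infrastructure the essay is talking about.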

Then there's the data. Currently the most common AIs, large language models like GPT and Meta's Llama, have already been trained on much of the data that is available online. Absorbing the likes of Wikipedia and Google Books was the easy part. Getting new data is much harder. Of course, there is new data on the Internet every day, but that's not substantial compared to what is already there, and it's for the most part not information that AIs need to become better; it's just the information they need to stay up to date. They need data of the kind that is, for example, stored in people's brains, what philosophers call tacit knowledge, or properties of physical objects that you can't extract from video footage. So, even if your algorithms get better and learn faster all the time, a computer can't learn from what isn't there.


No problem, Aschenbrenner says: You deploy robots that collect novel real-world data. Where do you get those robots from? Well, Aschenbrenner thinks that AIs will solve robotics, meaning presumably any remaining robot problems (like recognizing objects in any environment, and performing all manner of tasks successfully without human intervention). And the first AI-created robots will build factories to build more of these robots. "Robo-factories could produce more robo-factories in an unconstrained way, leading to an industrial explosion," he writes. Think: self-replicating robot factories quickly covering all of the Nevada desert.

Alright. But what will they build the factories with, I wonder: resources that will be mined and transported by, let me guess, more robots? Perhaps those will be built in the factories constructed from the resources mined by the robots. Do you see the problem?

What Aschenbrenner misses is that creating 100-gigawatt supercomputing clusters or huge robot workforces will not just require AGI. It will require changing the entire world economy and the products and services it provides. You can't ramp up the production of one high-end product without also ramping up the production of all the components that go into it. It requires physical changes, stuff that needs to be moved, plans that need to be approved, people who have to do things. And everything that needs to be done by people is very slow. There's a reason why CERN spent $20 million and several years just on a plan for its next, bigger collider before even doing anything.

Unlike the Large Hadron Collider, which is 17 miles in circumference, a 100-gigawatt supercomputing cluster itself probably wouldn't be all that large; in fact, you want to keep it compact because the larger it gets, the more data you have to transport around. But the size of the plant that would power the 100-gigawatt supercomputing cluster depends strongly on the energy source you use to supply it. Natural gas power plants tend to be relatively small, while nuclear power tends to take up more real estate (because of safety requirements). Wind and solar farms take up even more terrain. Nuclear fusion is inherently a compact energy source, but since we don't have any working fusion power stations, how little space it would take up is anyone's guess.

Leaving aside that climate change is about to crush the world economy, the robot revolution will happen, eventually, but not within a couple of years. It'll take decades at best. One must have spent a lot of time group-thinking in San Francisco and Oxford to lose touch with the real world so much that one can seriously think it's possible to build a 100-gigawatt supercomputing cluster and a robot workforce within six years.

That said, I think Aschenbrenner is right that AGI will almost certainly be able to unlock huge progress in science and technology. This is simply because a lot of scientific knowledge currently goes to waste just because no human can read everything that's been published in the scientific literature. But AGI will. There must be lots of insights hidden in the scientific literature that can be unearthed without doing any new research whatsoever.

For example, it could find new drugs by understanding that a compound which was previously unsuccessful in treating one illness might be good for treating another. It could see that a thorny mathematical problem in one area of science was previously solved in another. It might find correlations in data that no one ever thought of looking for, maybe settling the debate over whether dark matter is real or finding evidence for new physics. If I had a few billionaire friends, that's what I'd tell them to spend their bucks on.

The second half of Aschenbrenner's essay is dedicated to the security risks that will go along with AGI, and I largely agree with him.

Most people on this planet, including all governments, currently dramatically underestimate just how big an impact AGI will make, and how much power a superintelligence will give to anyone in possession of it. If they appreciated its future impact, they would not let private companies develop such systems basically unrestricted. Once they wake up, governments will rapidly try to gain control of whatever AGI they can get their hands on and put severe limitations on its use.

Let me stress: It's not that I think governments restricting AI research is good, or that I want this to happen; I merely think this is what will happen. For practical purposes, the quasi-nationalization of AI will probably mean that high-compute queries, like overthrowing the United States government, will require security clearance.

Aschenbrenner also discusses the super-alignment problem: that it will be basically impossible to make sure an intelligence vastly superior to our own will align with our values. While I agree that this is a serious problem that requires consideration, I think it's not the most urgent problem right now. Before we worry about superintelligent AI trying to rule the world itself, we need to worry about humans trying to abuse it to rule the world.

What can we extrapolate from a trend of wrong predictions? In 1960, Herbert Simon, a Nobel Prize and Turing Award winner, speculated that "machines will be capable, within 20 years, of doing any work a man can do." In the 1970s, cognitive scientist Marvin Minsky predicted that human-level machine intelligence was just a few years away. In a 1993 essay, computer scientist Vernor Vinge predicted that the technological singularity would occur within 30 years.

What I take away from this list of failed predictions is that people involved in frontier research tend to vastly overestimate the pace at which the world can be changed. I wish that we actually lived in the world that Aschenbrenner seems to think we live in. I can't wait for superhuman intelligence. But I'm afraid the intelligence explosion isn't as near as he thinks.



Sabine Hossenfelder is a theoretical physicist at the Munich Center for Mathematical Philosophy, in Germany, focusing on modifications of general relativity, phenomenological quantum gravity, and the foundations of quantum mechanics. She is the creative director of the YouTube channel "Science without the gobbledygook," where she talks about recent scientific developments and debunks hype. Her latest book is Existential Physics: A Scientist's Guide to Life's Biggest Questions. Follow her on X (formerly known as Twitter) @skdh.


