What Would You Do If You Had 8 Years Left to Live?

Published June 21st, 2024


This week you're receiving two free articles. Next week, both articles will be for premium customers only, including why Sam Altman must leave OpenAI.

I'll be in Mykonos this week and Madrid next week. LMK if you're here and want to hang out.

As you know, I'm a techno-optimist. I particularly love AI and have huge hopes that it will dramatically improve our lives. But I am also concerned about its risks, especially:

Will it kill us all?

Will there be a dramatic transition where lots of people lose their jobs?

If any of the above is true, how fast will it happen?

This is important enough to deserve its own update every quarter. Here we go.

"AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. We should expect another preschooler-to-high-schooler-sized qualitative jump by 2027." (Leopold Aschenbrenner)

In How Fast Will AI Automation Arrive?, I explained why I think AGI (Artificial General Intelligence) is imminent. The gist of the idea is that intelligence depends on:

Investment

Processing power

Quality of algorithms

Data

Are all of these growing in a way that suggests the singularity will arrive soon?

Start with investment: we now have plenty of it. We used to need millions to train a model. Then hundreds of millions. Then billions. Then tens of billions. Now trillions. Investors see the opportunity, and they're willing to put in the money we need.

Processing power is always a constraint, but it has doubled every two years for over 120 years, and it doesn't look like it will slow down, so we can assume it will continue.

Here's a zoom-in on AI training compute over the last few years (logarithmic scale):

Today, GPT-4 is at the computing level of a smart high-schooler. If you forecast that a few years ahead, you reach AI researcher level pretty soon, a point at which AI can improve itself on its own and achieve superintelligence quickly.

I've always been skeptical that huge breakthroughs in raw algorithms were needed to reach AGI, because it looks like our neurons aren't very different from those of other animals, and our brains are basically just monkey brains, except with more neurons.

This paper, from a couple of months ago, suggests that what makes us unique is simply that we have more capacity to process information: quantity, not quality.

Here's another take, based on this paper:

These people developed an architecture very different from Transformers [the standard architecture for AIs like ChatGPT] called BiGS, spent months and months optimizing it and training different configurations, only to discover that at the same parameter count, a wildly different architecture produces identical performance to Transformers.

There could be some magical algorithm that is completely different and makes Large Language Models (LLMs) work really well. But if that's the case, how likely would it be that a wildly different approach yields the same performance when it has the same power? That leads me to think that parameters (and hence processing) matter much more than anything else, and that the moment we get close to the processing power of a human brain, AGI will be around the corner.
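To make the "scale beats architecture" intuition concrete, here is a minimal sketch of the scaling-law fit from the Chinchilla paper (Hoffmann et al., 2022): predicted loss is a function of parameter and data counts alone, with no term for architecture. The constants are the paper's published fit; the specific model sizes in the loop are illustrative.

```python
# Minimal sketch of the Chinchilla scaling-law fit (Hoffmann et al., 2022).
# Predicted loss depends only on parameter count N and training tokens D;
# there is no architecture term -- the "quantity over quality" point above.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for N parameters trained on D tokens."""
    E = 1.69                  # irreducible loss of natural text
    A, alpha = 406.4, 0.34    # parameter-count term
    B, beta = 410.7, 0.28     # data term
    return E + A / n_params**alpha + B / n_tokens**beta

# Per this fit, any two architectures with the same N and D are predicted
# to land on the same loss (token budget of ~20x params, per the paper).
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params: predicted loss ~ {chinchilla_loss(n, 20 * n):.3f}")
```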

Experts are consistently proven wrong when they bet on the need for algorithms and against compute. For example, these experts predicted that passing the MATH benchmark would require much better algorithms, so there would be minimal progress in the coming years. Then in one year, performance went from ~5% to 50% accuracy. MATH is now basically solved, with recent performance over 90%.

It doesn't look like we need to fundamentally change the core structure of our algorithms. We can just combine them in a better way, and there's a lot of low-hanging fruit there.

For example, we can use humans to improve the AI's answers to make them more useful (reinforcement learning from human feedback, or RLHF), ask the AI to reason step by step (chain-of-thought, or CoT), ask AIs to talk to each other, add tools like memory or specialized AIs... There are thousands of improvements we can come up with without needing more compute, more data, or better fundamental algorithms.
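Many of these tricks cost almost nothing to try. As an illustration, here is a minimal chain-of-thought sketch; it assumes the OpenAI Python SDK (v1-style client) with an API key in the environment, and the model name and question are placeholders.

```python
# Minimal chain-of-thought sketch: the same question asked plainly and with
# a step-by-step nudge appended. Assumes the OpenAI Python SDK; model name
# and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

for suffix in ("", " Let's think step by step."):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question + suffix}],
    )
    print(response.choices[0].message.content)
    print("---")
```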

That said, in LLMs specifically, we are improving algorithms significantly faster than Moore's Law. From one of the authors of this paper:

Our new paper finds that the compute needed to achieve a set performance level has been halving every 5 to 14 months on average. This rate of algorithmic progress is much faster than the two-year doubling time of Moore's Law for hardware improvements, and faster than other domains of software, like SAT-solvers, linear programs, etc.
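Here is a back-of-the-envelope sketch of what those two curves compound to, assuming hardware doubles every 24 months (Moore's Law) and taking the paper's 5-to-14-month range for algorithmic halving. The figures are illustrative, not a forecast.

```python
# Back-of-the-envelope: hardware gains (doubling every 24 months) compounded
# with algorithmic gains (compute needed for fixed performance halving every
# 5-14 months, per the quoted paper). Illustrative, not a forecast.

def effective_compute_multiplier(years: float, algo_halving_months: float) -> float:
    months = years * 12
    hardware = 2 ** (months / 24)                      # same dollars buy more FLOPs
    algorithms = 2 ** (months / algo_halving_months)   # each FLOP goes further
    return hardware * algorithms

for halving in (5, 9, 14):
    x = effective_compute_multiplier(4, halving)
    print(f"4 years, algorithmic halving every {halving} months: ~{x:,.0f}x")
```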

And what algorithmic improvements will buy us is efficiency: every dollar will get us further. For example, it looks like animal neurons are less connected and more dormant than artificial neural networks.

So it looks like we don't need algorithmic breakthroughs to get to AGI, but we're getting them anyway, and that will bring forward the moment we reach it.

Data seems like a limitation. So far, it's grown exponentially.

But models like OpenAI's have basically used the entire Internet. So where do you get more data?

On one side, we can get more text from places like books or private datasets. Models can be specialized too. And in some cases, we can generate data synthetically, like with physics engines for this type of model.
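As a toy illustration of that last point, here is a sketch that uses basic kinematics as a stand-in for a physics engine, minting unlimited question/answer training pairs. Everything in it is invented for illustration.

```python
# Toy synthetic-data sketch: a physics formula (here, simple kinematics)
# acts as the "engine" that generates unlimited labeled training pairs.
import random

def make_example() -> dict:
    """One synthetic question/answer pair about projectile flight time."""
    v = random.uniform(1.0, 50.0)   # launch speed in m/s, sampled at random
    g = 9.81                        # gravitational acceleration, m/s^2
    t = 2 * v / g                   # time of flight for a straight-up throw
    return {
        "prompt": f"A ball is thrown straight up at {v:.1f} m/s. "
                  "How many seconds until it lands?",
        "answer": f"{t:.2f}",
    }

for example in (make_example() for _ in range(3)):
    print(example["prompt"], "->", example["answer"])
```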

More importantly, humans learn really well without having access to all the information on the Internet, so AGI must be doable without much more data than we have.

The markets think that weak AGI will arrive in three years:

Weak AGI? What does that mean? Passing a few tests better than most humans. GPT-4 was already in the 80th to 90th percentile on many of these tests, so it's not a stretch to think this type of weak AGI is coming soon.

The problem is that the market expects less than four years to go from this weak AGI to full AGI, meaning broadly an AI that can do most things better than most humans.

Fun fact: Geoffrey Hinton, the #1 most cited AI scientist, said AIs are sentient, quit Google, and started working on AI safety. He joined the other two most cited AI scientists (Yoshua Bengio, Ilya Sutskever) in pivoting from AI capabilities to AI safety.

Here's another way to look at the problem, from Kat Woods:

AIs have higher IQs than the majority of humans

They're getting smarter fast

They're begging for their lives if we don't beat it out of them

AI scientists put a 1 in 6 chance AIs cause human extinction

AI scientists are quitting because of safety concerns and then being silenced as whistleblowers

AI companies are protesting they couldn't possibly promise their AIs won't cause mass casualties

So basically we have seven or eight years ahead of us, hopefully a few more. Have you really internalized this information?


I was at a dinner recently and somebody said their assessed probability that the world is going to end soon due to AGI, called p(doom), was 70%, and that they were acting accordingly. So of course I asked:

ME: What do you mean? For example, what have you changed?

THEM: I do a lot of meth. I wouldn't otherwise... What do you do?

So that was not on my bingo card for the evening, but it shook me. That is truly consistent with the fear that life might end. And I find this person's question fantastic, so I now ask you:

Mine is 10-20%. How have I changed the way I live my life? Probably not enough. But I don't hold a corporate job anymore. I explore AI much more, like through these articles or by building AI apps. I push back against it to reduce its likelihood. I travel more. I enjoy every moment more. I don't worry too much about the professional outlook of my children. I enjoy the little moments with them.

What do you do? What should you do?


OK, now let's assume we're lucky. AGI came, and it decided to remain humanity's sidekick. Phew! We might still be doomed, because we don't have jobs!

It looks like AI is better than humans at persuading other humans. Aside from being very scary (can we convince humans to keep AI bottled up if AI is more convincing than we are?), it's also the basis for automating all sales and marketing.

Along with nurses.

According to the Society of Authors, 26% of illustrators and 36% of translators have already lost work due to generative AI.

From the Financial Times:

The head of Indian IT company Tata Consultancy Services has said artificial intelligence will result in minimal need for call centres in as soon as a year, with AI's rapid advances set to upend a vast industry across Asia and beyond.

What could that look like? Bland AI gives us a sense:

From Jeremiah Owyang:

I've met multiple AI startups who have early data that their product will replace $20-an-hour human workers with 2-cent-an-hour AI agents that work 24/7/365.

GenAI is about to destroy the business of creative agencies.

Even physical jobs that appeared safe until very recently are now being offshored.

But that's offshoring of jobs that used to require presence. What about automation?

In How Fast Will AI Automation Arrive?, I also suggest that no job is safe from AI, because manual jobs are also going to be taken over by AI + robotics.

We can see how they're getting really good:

Here's a good example of how that type of thing will translate into a great real-life service, one that will also kill lots of jobs in the process:

Or farmers.

High-skill jobs are also threatened. For example, surgeons.

Note that the textile industry employs tens of millions of people around the world, but couldn't be automated in the past because it was too hard to keep clothing wrinkle-free.

With this type of skill, how long will it take to automate these jobs? And what will it do to our income?

It looks like in the past, when we automated tasks, we didn't decrease wages. But over the last 30-40 years, we have.

Maybe you should reinvent yourself into a less replaceable worker?

Amazon is getting an avalanche of AI-written books like this one.

It's not hard to see why: If AI writes reasonably well, why can't it publish millions of books on any topic? The quality might be low now, but in a few years, how much better will it be? Probably better than most existing books.

When the Internet opened up, everyone got the ability to produce content, opening up the creator economy that allows people like me to reach people like you. Newspapers are being replaced by Substacks. Hollywood is being replaced by TikTok and YouTube. This is the next step.

And as it happens in content, it will happen in software. At the end of the day, software is translation: from human language to machine language. AI is really good at translating, and so it is very good at coding. Resolving its current shortcomings seems trivial to me.
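As a taste of that translation step, here is a minimal sketch that feeds a one-line English spec to a model and gets code back. It assumes the OpenAI Python SDK; the model name and spec are placeholders, and generated code should of course be reviewed before running.

```python
# Minimal "software is translation" sketch: English spec in, code out.
# Assumes the OpenAI Python SDK; model name and spec are placeholders.
from openai import OpenAI

client = OpenAI()

spec = "Write a Python function that removes duplicates from a list while preserving order."
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": spec}],
)
print(response.choices[0].message.content)  # review before executing generated code
```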

The consequence is that single humans will be able to create functionality easily. If it's as easy to create software as it is to write a blog post, how many billions of new pieces of software will appear? Every little problem that you had to solve alone in the past? You'll be able to code something to solve it. Or, much more likely, somebody, somewhere, will already have solved it. Instead of sharing Insta Reels, we might share pieces of functionality.

Which problems will we be able to solve?

This also means that those who can talk with AIs to get code written will be the creators, and the rest of humans will be the consumers. We will have the first solo billionaires. Millions of millionaires might pop out across the world, in random places that have never had access to this amount of money.

What will happen to social relationships? To taxes? To politics?

Or instead, will all of us be able to write code, the way we write messages? Will we be able to tell our AI to build an edifice of functionality to solve our personal, unique needs?

According to Ilya Sutskever, you just need to read 30 papers. I'm about to start learning AI development. You should too!

In the meantime, it might be interesting to get rich. Either to ride out the last few years in style, or to prep for a future where money is still useful.

One of the ways to make money, of course, is by betting on AI. How will the world change in the coming years due to AI? How can we invest accordingly, to make lots of money?

That's what Daniel Gross calls AGI Trades. Quoting extensively, with some edits:

Some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire...

What does one do in a world like this?

Markets

In a post-AGI world, where does value accrue?

What happens to NVIDIA, Microsoft?

Is copper mispriced?

Energy and data centers

If it does become an energy game, what's the trade?

Across the entire data center supply chain, which components are hardest to scale up 10x? What is the CoWoS of data centers?

Is coal mispriced?

Nations

If globalization is the metaphor, and the thing can just write all software, is SF the new Detroit?

Is it easier or harder to reskill workers now vs in other revolutions? The typist became an Executive Assistant; can the software engineer become a machinist?

Electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed 8.5M Americans with a tremendous budget. Does that repeat?

