This week you're receiving two free articles. Next week, both articles will be for premium subscribers only, including why Sam Altman must leave OpenAI.
I'll be in Mykonos this week and Madrid next week. LMK if you're there and want to hang out.
As you know, I'm a techno-optimist. I particularly love AI and have high hopes that it will dramatically improve our lives. But I am also concerned about its risks, especially:
Will it kill us all?
Will there be a dramatic transition where lots of people lose their jobs?
If any of the above is true, how fast will it unfold?
This is important enough to deserve its own update every quarter. Here we go.
"AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. We should expect another preschooler-to-high-schooler-sized qualitative jump by 2027." (Leopold Aschenbrenner)
In How Fast Will AI Automation Arrive?, I explained why I think AGI (Artificial General Intelligence) is imminent. The gist of the idea is that intelligence depends on:
Investment
Processing power
Quality of algorithms
Data
Are all of these growing fast enough that the singularity will arrive soon?
We now have plenty of investment. We used to need millions of dollars to train a model. Then hundreds of millions. Then billions. Then tens of billions. Now trillions. Investors see the opportunity, and they're willing to put in the money we need.
Processing power is always a constraint, but it has doubled every two years for over 120 years, and it doesn't look like that will slow down, so we can assume it will continue.
This is a zoom in on computing for the last few years in AI (logarithmic scale):
Today, GPT-4 is at the computing level of a smart high-schooler. If you forecast that a few years ahead, you reach AI-researcher level pretty soon, a point at which AI can improve itself on its own and achieve superintelligence quickly.
I've always been skeptical that huge breakthroughs in raw algorithms were needed to reach AGI, because it looks like our neurons aren't very different from those of other animals, and our brains are basically just monkey brains, except with more neurons.
This paper, from a couple of months ago, suggests that what makes us unique is simply that we have more capacity to process information: quantity, not quality.
Heres another take, based on this paper:
These people developed an architecture very different from Transformers [the standard infrastructure for AIs like ChatGPT] called BiGS, spent months and months optimizing it and training different configurations, only to discover that at the same parameter count, a wildly different architecture produces identical performance to transformers.
There could be some magical algorithm that is completely different and makes Large Language Models (LLMs) work really well. But if that's the case, how likely is it that a wildly different approach yields the same performance with the same parameter count? That leads me to think that parameters (and hence processing) matter much more than anything else, and the moment we get close to the processing power of a human brain, AGI will be around the corner.
Experts are consistently proven wrong when they bet on the need for algorithms and against compute. For example, these experts predicted that passing the MATH benchmark would require much better algorithms, so there would be minimal progress in the coming years. Then in one year, performance went from ~5% to 50% accuracy. MATH is now basically solved, with recent performance over 90%.
It doesn't look like we need to fundamentally change the core structure of our algorithms. We can just combine them in a better way, and there's a lot of low-hanging fruit there.
For example, we can use humans to improve the AI's answers to make them more useful (reinforcement learning from human feedback, or RLHF), ask the AI to reason step by step (chain-of-thought, or CoT), ask AIs to talk to each other, or add tools like memory or specialized AIs… There are thousands of improvements we can come up with without needing more compute, more data, or better fundamental algorithms.
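As a toy illustration of how cheap some of these wins are, chain-of-thought often amounts to nothing more than a change in the prompt. This is a generic sketch with no model behind it; the helper and prompt strings are illustrative, not any specific product's API:

```python
# Toy sketch: chain-of-thought (CoT) as a pure prompt transformation.
# No model is called; the point is that the "algorithm" is just text.

def with_chain_of_thought(question: str) -> str:
    """Wrap a question so the model is nudged to reason step by step."""
    return (
        f"Q: {question}\n"
        "Let's think step by step, writing out each intermediate step,\n"
        "and only then state the final answer."
    )

question = (
    "A bat and a ball cost $1.10 total. The bat costs $1.00 more "
    "than the ball. How much is the ball?"
)

direct_prompt = f"Q: {question}\nA:"          # the naive prompt
cot_prompt = with_chain_of_thought(question)  # the "improved algorithm"

print(cot_prompt)
```

Same model, same weights, same compute; only the input string changed, which is why improvements like this count as free algorithmic progress.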
That said, in LLMs specifically, we are improving algorithms significantly faster than Moore's Law. From one of the authors of this paper:
Our new paper finds that the compute needed to achieve a set performance level has been halving every 5 to 14 months on average. This rate of algorithmic progress is much faster than the two-year doubling time of Moore's Law for hardware improvements, and faster than other domains of software, like SAT-solvers, linear programs, etc.
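To see how these two trends compound, here's a back-of-the-envelope sketch. The 9-month algorithmic halving time is my assumed midpoint of the paper's 5-to-14-month range; the 24-month hardware doubling is the Moore's Law figure mentioned above:

```python
# Back-of-the-envelope: hardware and algorithmic progress compound.
# On a log scale, growth rates (in doublings per month) simply add.

hw_doubling_months = 24   # Moore's Law: hardware doubles every ~2 years
algo_halving_months = 9   # assumed midpoint of the 5-14 month range

rate = 1 / hw_doubling_months + 1 / algo_halving_months  # doublings/month
effective_doubling_months = 1 / rate

print(f"Effective compute doubles every ~{effective_doubling_months:.1f} months")
# → Effective compute doubles every ~6.5 months

# Over four years, that compounds dramatically:
doublings_in_4y = 48 * rate
print(f"Four years: ~{doublings_in_4y:.1f} doublings, ~{2**doublings_in_4y:.0f}x effective compute")
# → Four years: ~7.3 doublings, ~161x effective compute
```

So even under these rough assumptions, the effective doubling time is closer to 6-7 months than to Moore's two years, which is the reason the quote above matters.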
And what algorithmic improvements give us is efficiency: every dollar will get us further. For example, it looks like animal neurons are less connected and more dormant than AI neural networks.
So it looks like we don't need algorithm improvements to get to AGI, but we're getting them anyway, and that will bring forward the moment we reach it.
Data seems like a limitation. So far, it's grown exponentially.
But models like OpenAI's have basically used the entire Internet. So where do you get more data?
On one side, we can get more text from places like books or private datasets. Models can be specialized, too. And in some cases, we can generate data synthetically, like with physics engines for this type of model.
More importantly, humans learn really well without having access to all the information on the Internet, so AGI must be doable without much more data than we have.
The markets think that weak AGI will arrive in three years:
Weak AGI? What does that mean? Passing a few tests better than most humans. GPT-4 already scored in the 80th-90th percentile on many of these tests, so it's not a stretch to think this type of weak AGI is coming soon.
The problem is that the market expects less than four years to go from this weak AGI to full AGI, meaning broadly that it can do most things better than most humans.
Fun fact: Geoffrey Hinton, the #1 most cited AI scientist, said AIs are sentient, quit Google, and started working on AI safety. He joined the other two most cited AI scientists (Yoshua Bengio, Ilya Sutskever) in pivoting from AI capabilities to AI safety.
Here's another way to look at the problem, from Kat Wood:
AIs have higher IQs than the majority of humans
They're getting smarter fast
They're begging for their lives if we don't beat it out of them
AI scientists put a 1-in-6 chance on AIs causing human extinction
AI scientists are quitting because of safety concerns and then being silenced as whistleblowers
AI companies are protesting they couldn't possibly promise their AIs won't cause mass casualties
So basically we have seven or eight years ahead of us, hopefully a few more. Have you really internalized this information?
I was at a dinner recently, and somebody said their assessed probability that the world will end soon due to AGI (called p(doom)) was 70%, and that they were acting accordingly. So of course I asked:

ME: What do you mean? For example, what have you changed?
THEM: I do a lot of meth. I wouldn't otherwise. What do you do?
That was not on my bingo card for the evening, but it shook me. It is truly consistent with the fear that life might end. And I find this person's question fantastic, so I now ask you:
Mine is 10-20%. How have I changed the way I live my life? Probably not enough. But I don't hold a corporate job anymore. I explore AI much more, through these articles and by building AI apps. I push back against it to reduce its likelihood. I travel more. I enjoy every moment more. I don't worry too much about my children's professional outlook. I enjoy the little moments with them.
What do you do? What should you do?
OK, now let's assume we're lucky. AGI came, and it decided to remain humanity's sidekick. Phew! We might still be doomed, because we won't have jobs!
It looks like AI is better than humans at persuading other humans. Aside from being very scary (can we convince humans to keep AI bottled up if AI is more convincing than we are?), it's also the basis for automating all of sales and marketing.
Along with nurses.
According to the Society of Authors, 26% of illustrators and 36% of translators have already lost work due to generative AI.
From the Financial Times:
The head of Indian IT company Tata Consultancy Services has said artificial intelligence will result in minimal need for call centres in as soon as a year, with AIs rapid advances set to upend a vast industry across Asia and beyond.
What could that look like? Bland AI gives us a sense:
From Jeremiah Owyang:
I've met multiple AI startups who have early data that their product will replace $20-an-hour human workers with 2-cent-an-hour AI agents that work 24/7/365.
GenAI is about to destroy the business of creative agencies.
Even physical jobs that appeared safe until very recently are now being offshored.
But thats offshoring of jobs that used to need presence. What about automation?
In How Fast Will AI Automation Arrive?, I also suggest that no job is safe from AI, because manual jobs are also going to be taken over by AI + robotics.
We can see how theyre getting really good:
Here's a good example of how that type of thing will translate into a great real-life service, one that will also kill lots of jobs in the process:
Or farmers.
High-skill jobs are also threatened. For example, surgeons.
Note that the textile industry employs tens of millions of people around the world, but couldn't be automated in the past because it was too hard to keep clothing wrinkle-free.
With this type of skill, how long will it take to automate these jobs? And what will it do to our income?
It looks like in the past, when we automated tasks, wages didn't decrease. But over the last 30-40 years, they have.
Maybe you should reinvent yourself into a less replaceable worker?
Amazon is getting an avalanche of AI-written books like this one.
It's not hard to see why: if AI writes reasonably well, why can't it publish millions of books on any topic? The quality might be low now, but in a few years, how much better will it be? Probably better than most existing books.
When the Internet opened up, everyone got the ability to produce content, opening up the creator economy that allows people like me to reach people like you. Newspapers are being replaced by Substacks. Hollywood is being replaced by TikTok and YouTube. This is the next step.
And what is happening in content will happen in software. At the end of the day, software is translation: from human language to machine language. AI is really good at translating, so it is very good at coding. Resolving its current shortcomings seems trivial to me.
The consequence is that single humans will be able to create functionality easily. If it's as easy to create software as it is to write a blog post, how many billions of new pieces of software will appear? Every little problem you had to solve alone in the past? You'll be able to code something to solve it. Or, much more likely, somebody, somewhere, will already have solved it. Instead of sharing Insta Reels, we might share pieces of functionality.
Which problems will we be able to solve?
This also means that those who can talk with AIs to get code written will be the creators, and the rest of humans will be the consumers. We will have the first solo billionaires. Millions of millionaires might pop out across the world, in random places that have never had access to this amount of money.
What will happen to social relationships? To taxes? To politics?
Or instead, will all of us be able to write code, the way we write messages? Will we be able to tell our AI to build an edifice of functionality to solve our personal, unique needs?
According to Ilya Sutskever, you just need to read 30 papers. I'm about to start learning AI development. You should too!
In the meantime, it might be interesting to get rich. Either to ride the last few years in beauty, or to prep for a future where money is still useful.
One of the ways to make money, of course, is by betting on AI. How will the world change in the coming years due to AI? How can we invest accordingly, to make lots of money?
That's what Daniel Gross calls AGI Trades. Quoting extensively, with some edits:
Some modest fraction of Upwork tasks can now be done with a handful of electrons. Suppose everyone has an agent like this they can hire. Suppose everyone has 1,000 agents like this they can hire...
What does one do in a world like this?
Markets
In a post-AGI world, where does value accrue?
What happens to NVIDIA, Microsoft?
Is copper mispriced?
Energy and data centers
If it does become an energy game, what's the trade?
Across the entire data center supply chain, which components are hardest to scale up 10x? What is the CoWoS of data centers?
Is coal mispriced?
Nations
If globalization is the metaphor, and the thing can just write all software, is SF the new Detroit?
Is it easier or harder to reskill workers now vs in other revolutions? The typist became an Executive Assistant; can the software engineer become a machinist?
Electrification and assembly lines led to high unemployment and the New Deal, including the Works Progress Administration, a federal project that employed 8.5M Americans with a tremendous budget. Does that repeat?
Excerpt from:
What Would You Do If You Had 8 Years Left to Live? - Uncharted Territories