AGI in Less Than 5 Years, Says Former OpenAI Employee

Published on June 10th, 2024

By Dr Simmons

Dig into the latest AI Crypto news as we explore the future of artificial general intelligence (AGI) and its potential to surpass human abilities, according to Leopold Aschenbrenner's essay.

AGI in less than 5 years. How do you intend to spend your last few years alive? (kidding).

The internet's on fire after former OpenAI safety researcher Leopold Aschenbrenner unleashed "Situational Awareness," a no-holds-barred essay series on the future of Artificial General Intelligence.

It is 165 pages long and fresh as of June 4. It examines where AI stands now and where it's headed.


In some ways, this is: LINE GOES UP OOOOOOOOOOH IT'S HAPPENING IT'S HAPPENING.

It's reminiscent, in some ways, of an old Simpsons joke.

But Aschenbrenner envisions AGI systems becoming smarter than you or me by the decade's end, ushering in an era of true superintelligence. Alongside this rapid advancement, he warns of national security implications on a scale not seen in decades.

"AGI by 2027 is strikingly plausible," Aschenbrenner claims, suggesting that AGI machines will outperform college graduates by 2025 or 2026. To put this in perspective: suppose GPT-4 training took 3 months. In 2027, he argues, a leading AI lab will be able to train a GPT-4-level model in a minute.
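For a rough sense of scale, here is a back-of-the-envelope calculation (our own illustration, not a figure from the essay): compressing a three-month training run into one minute implies roughly five orders of magnitude more effective compute.

    # Back-of-the-envelope: what does "3 months -> 1 minute" imply?
    # Illustrative assumption: 1 month = 30 days.
    import math

    minutes_2024 = 3 * 30 * 24 * 60  # a 3-month GPT-4-style run, in minutes
    minutes_2027 = 1                 # the claimed 2027 figure

    speedup = minutes_2024 / minutes_2027  # ~129,600x
    ooms = math.log10(speedup)             # ~5.1 orders of magnitude

    print(f"Speedup: {speedup:,.0f}x")
    print(f"Orders of magnitude: {ooms:.1f}")
    print(f"Implied growth: ~{ooms / 3:.1f} OOMs per year over 3 years")

That works out to roughly a 50x jump in effective compute every year, which is exactly the kind of trend-line extrapolation the essay leans on.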

Aschenbrenner urges the AI community to adopt what he terms "AGI realism," a viewpoint grounded in three core principles tying U.S. national security to AI development.

He argues that the industry's smartest minds, like Ilya Sutskever, who famously failed to unseat CEO Sam Altman in 2023, are converging on this perspective, acknowledging the imminent reality of AGI.

Aschenbrenner's latest insights follow his controversial exit from OpenAI amid accusations of leaking info.

DISCOVER: The Best AI Crypto to Buy in Q2 2024

On Tuesday, a dozen-plus staffers from AI heavyweights like OpenAI, Anthropic, and Google's DeepMind raised red flags about the risks of advanced AI.

Their open letter cautions that without extra protections, AI might become an existential threat.

"We believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter states. "We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."

The letter takes aim at AI giants for dodging oversight in favor of fat profits. DeepMind's Neel Nanda broke ranks as the only internal researcher to endorse it.

AI is quickly becoming a battleground, but the message of the letter is simple: don't punish employees for speaking out on AI dangers.

On the one hand, it can be scary to think that human creativity and the boundaries of thought are being closed in by politically correct code monkeys tinkering with matrix multiplication.

On the other, the power of artificial intelligence is currently incomprehensible because it is unlike anything we have understood before.

It could be a revolution, just as when the first human discovered the spark or the spinning of a stone wheel: one moment it didn't exist, and the next it changed the face of humanity. We'll see.

EXPLORE: A Complete List of Bitcoin-Friendly Countries

Disclaimer: Crypto is a high-risk asset class. This article is provided for informational purposes and does not constitute investment advice. You could lose all of your capital.
