When you have them by the stock options, their hearts and minds will follow. This, it seems, is the moral of the OpenAI leadership story, the full truth of which remains shrouded in non-disclosure agreements and corporate confidentiality, dimly illuminated by unattributed comments.
It began with a blindside worthy of Survivor. On Friday 17th November, OpenAI had six people on its board. Three were founders: Greg Brockman, the chairman and president; Sam Altman, the CEO; and Ilya Sutskever, the chief science officer. These three worked at OpenAI. The other three did not: Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology researcher Helen Toner. Sutskever, D'Angelo, McCauley, and Toner fired Altman on a Google Meet call. Brockman, a long-standing Altman ally, was absent. According to the more lurid accounts of the affair, the effective altruism tribe (the decelerationists who see artificial general intelligence, or AGI, as an existential risk to humanity) had spoken. They had used their 4-2 majority to vote the accelerationists off the island. Altman's torch had been snuffed out. Or so the board had thought.
Immediately after firing him, the four put out a statement: "Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
Greg Brockman, who had not been invited to the meeting, had also been blindsided. In the same statement, the four announced that Brockman was being removed from the board, but, somewhat naively, they expected him to stay on in his other roles in the company. Later that day, Brockman tweeted: "...based on today's news i quit." (Both Brockman and Altman tend to tweet in lowercase.)
Apparently, the only person outside the board not surprised by this move was the chief technology officer, Mira Murati. She had been lined up the night before to be the interim CEO in Altman's place. Outside these five, it seems no one knew about the planned coup. The conspiracy was as tight as that for the assassination of Julius Caesar.
Microsoft was also surprised by Altman's ouster. Reportedly, CEO Satya Nadella was livid when told the news. OpenAI products are incorporated in Microsoft's Azure cloud computing platform, and ChatGPT runs in Azure data centers. The main cost for players in the large language model (LLM) market is compute: training LLMs with billions of parameters can cost hundreds of millions of dollars. After the news broke, Microsoft's stock dipped.
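That training-cost claim is easy to sanity-check with the widely used rule of thumb that training compute is roughly six floating-point operations per parameter per token. A minimal back-of-the-envelope sketch follows; the model size, token count, GPU throughput, utilization, and rental price below are illustrative assumptions, not figures disclosed by OpenAI or Microsoft.

```python
# Rough training-cost estimate using the ~6*N*D FLOPs rule of thumb
# (Kaplan et al., 2020). All concrete numbers are illustrative assumptions.

params = 1e12           # assumed parameter count for a frontier-scale model
tokens = 10e12          # assumed number of training tokens
flops = 6 * params * tokens              # ~6e25 FLOPs of training compute

gpu_peak = 312e12       # peak bf16 throughput of one NVIDIA A100, FLOP/s
utilization = 0.4       # assumed fraction of peak actually sustained
price_per_hour = 2.0    # assumed cloud rental price per GPU-hour, USD

gpu_hours = flops / (gpu_peak * utilization) / 3600
cost = gpu_hours * price_per_hour
print(f"~{gpu_hours/1e6:.0f} million GPU-hours, ~${cost/1e6:.0f} million")
# With these assumptions: ~134 million GPU-hours, roughly $270 million.
```

The point is not precision but scale: under almost any plausible set of assumptions the bill runs to nine figures, which is why access to Azure compute matters so much to OpenAI.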
Altman was obviously not fired for financial reasons. Before the coup, OpenAI had reportedly been valued at US$86 billion on the basis of an oversubscribed share offering, far more than its valuation before ChatGPT. Altman had been negotiating a deal for existing shareholders (many of whom, apparently, are employees) to cash in their chips and sell out. Some employees have been toiling away at OpenAI for years, long before it became the world's hottest technology company. No doubt this explains the way things panned out.
Over the weekend of November 18-19, Murati wobbled as interim CEO and was replaced. The board fought on, sounding out alternative interim CEOs over the course of a hectic weekend. Following his appointment as OpenAI's third CEO in three days, Emmett Shear declined to say why the board had fired Altman, although he did say it was not about AI safety. On Sunday, while the board was still in discussions with prospective CEOs, Microsoft announced it would set up a new AI team led by Altman and Brockman.
The entire senior management team of OpenAI, except for Mira Murati, had been shocked by the announcement. Three of them quit on the Friday to follow Altman and Brockman wherever they were going, among them the leader of the wildly successful GPT-4 team. Over the weekend, Microsoft moved aggressively to protect its investment in OpenAI. According to unnamed Microsoft sources cited by the New Yorker, the Redmond software giant had three plans. Plan A was to deal with the new interim CEO. Plan B was to persuade the board to take back Altman, a proven performer at generating value for investors. Plan C was to take Altman, and anyone else willing to follow him, from OpenAI into Microsoft. The OpenAI board reportedly reached out to rival Anthropic in a desperate attempt to merge with it. Anthropic, founded by people who had left OpenAI because of disagreements with Altman over AI safety, declined to get involved.
By Monday the position of the board was hopeless. The reaction of most venture capital and finance commentators ranged from bemusement to outrage. They found it incomprehensible and unjustifiable that a CEO who had made OpenAI world-famous and extremely valuable should be fired.
Virtually all of the OpenAI staff agreed. By close of business on Monday, some 738 out of 770 employees (according to Wired) had signed an open letter calling on the board to resign. The signatories included Sutskever and Murati. Its language was as blunt as the board's when it had fired Altman: "Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman."
The board had had no confidence in Altman, but the biggest shareholder and almost all the employees now had no confidence in the board.
Yet, in what seems to have been a skillful and tenacious endgame, the CEO of Quora, Adam D'Angelo, orchestrated a deal to bring Altman and Brockman back while maintaining a footing for the decel camp. By Wednesday night, the deal was done. D'Angelo became the sole survivor of the coup, keeping his seat on the board; Toner and McCauley resigned. Sutskever retained his position as chief science officer but lost his seat on the board. The Microsoft AI subsidiary disappeared as rapidly as it had popped up, and the OpenAI employees who had resigned rescinded their resignations. The new OpenAI board consists of D'Angelo and two new external directors: Bret Taylor, formerly the co-CEO of Salesforce, and Larry Summers, a former US Treasury Secretary. Neither Altman nor Brockman is on the board, but both are back at OpenAI.
Evidently, Microsoft had decided on Plan B, which involved the restoration of Altman as CEO and the replacement of the board with people more sensitive to the interests of investors so they could continue their push to integrate OpenAI into Azure and Office. Realistically, Plan C would have resulted in intellectual property issues, years of litigation, and a rapid devaluation of the OpenAI stock held by employees. Plan B enabled the recent unpleasantness to be brushed aside with a gushing open letter from Altman wherein he proclaimed his love for all involved, including those who had blindsided him. To prevent a similar disruption in the future, Microsoft was promised a non-voting seat on the OpenAI board.
It took Arya Stark five seasons of Game of Thrones to get her vengeance. It took Sam Altman five days to crush his enemies, see them driven before him, and retweet the lamentations of the decels.
Rumour has it that, notwithstanding his repentance, Sutskever's position as chief science officer is under threat and that he has lawyered up.
This leads us to the Great Question of the Day: Why was Sam Altman sacked in the first place? It remains a mystery. The current board is not talking. Emmett Shear is not talking; he merely said it had not been about safety. Sam Altman is not talking. The previous board members are not talking. Microsoft is not talking, although its CEO said it was not about safety. No one involved is talking. There is the promise of an independent review, but it will be conducted by lawyers and may never be made public. Time will tell. In the absence of attributed quotes from sources on the record, rumours abound.
Some speak of personality clashes, for example, Altman taking exception to a paper written by Toner, although this seems odd given that no one seems to have actually read it. However, some suggest Altman was using the paper as a pretext to seize power: after tipping Toner off the board, he could replace her with someone more aligned with his views on the future of the company and more in favor of shipping AI products quickly. This may be true, but such drama is boardroom politics as usual in large enterprises.
Others speak of the board getting upset about conflicts of interest, for example, Altman seeking Middle Eastern funding for a new GPU maker. Given that D'Angelo's Quora has an AI product called Poe that is arguably a competitor to OpenAI, and that Altman has no personal equity in OpenAI, it's hard to take the conflict-of-interest objection too seriously. Everyone in Silicon Valley has a side hustle. Further, there is a shortage of GPUs. (GPUs are Graphics Processing Units. Originally designed for video games, their architecture is also well suited for the matrix algebra calculations that are heavily used in AI.) Demand is high because LLMs devour these things. Why would you not start a GPU maker in 2024? Given that OpenAI rents GPUs from Microsoft and does not make them itself, the conflict is unclear.
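The connection between GPUs and AI is concrete: most of the work in training and running a neural network reduces to large matrix multiplications, which is exactly the operation a GPU's thousands of cores parallelize well. A minimal sketch (NumPy stands in here for the GPU kernels; the shapes are illustrative):

```python
import numpy as np

# One transformer-style feed-forward layer is, at its core, two matrix multiplications.
x = np.random.randn(512, 4096).astype(np.float32)      # activations for 512 tokens
w1 = np.random.randn(4096, 16384).astype(np.float32)   # first projection matrix
w2 = np.random.randn(16384, 4096).astype(np.float32)   # second projection matrix

h = np.maximum(x @ w1, 0.0)   # matrix multiply followed by a ReLU nonlinearity
y = h @ w2                    # matrix multiply back down to the hidden size
print(y.shape)                # (512, 4096)

# Each @ costs roughly 2*m*n*k floating-point operations; stacking dozens of such
# layers and repeating them over trillions of tokens is what makes GPU supply the
# bottleneck for LLM builders.
```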
Still others speak of the accel/decel split between those who say AI should advance full steam ahead (accelerationists) and those calling for regulation and guardrails (decelerationists). Given that Microsoft denied the affair was about safety; that OpenAI's interim CEO, Emmett Shear, denied it was about safety; that the alleged lead conspirator, Ilya Sutskever, is on record as being deeply concerned about AI safety; and that a previous split in the company, which saw staff depart to found rival AI firm Anthropic, was about safety, it is not reckless to believe that safety was the underlying reason for the coup. Even so, people seem reluctant to say so.
Some suggest the coup was triggered by Q* (Q star), a recently leaked OpenAI breakthrough that reportedly enables AI to do primary-school math. The more lurid accounts suggest this heralds the end of the world. Other scuttlebutt points to OpenAI's recent Dev Day, at which it unveiled new products that would make it easier for developers to ship AI products based on ChatGPT; the worry is that this would also make it easier for rogue developers to use AI for dirty deeds, such as making biological weapons. Advocates of the accel/decel explanation note that two of the departed board members (Toner and McCauley) have decelerationist links.
According to the OpenAI Charter, OpenAI's mission is "to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." To this end, OpenAI has a structure in which the for-profit company is ultimately controlled by a nonprofit.
It seems those who fired Altman thought he was not sufficiently concerned with safe and beneficial AGI but was instead focused on shipping potentially dangerous, albeit profitable, products.
There is a range of views about the existential risk posed by AGI. At one extreme, you have Andrew Ng, who famously said that worrying about the rise of evil killer robots is like worrying about overpopulation on Mars when humans have not even got there yet. At the other, you have Ilya Sutskever, who reportedly believes that dangerous AGI is quarters away, not decades. In the middle, you have people who think that AGI is vaguely defined and that its purported threat is based on dubious extrapolations.
The light relief of the saga is the tale of a wit from archrival Anthropic who sent boxes of paper clips to his ex-colleagues at OpenAI's office.
(Strictly speaking, the paperclip story and tweet predate the leadership coup.) The paper clips were a reference to the concern, articulated by philosopher Nick Bostrom, that a superintelligent AGI might decide that the best thing it could do would be to optimize a goal such as paperclip production and divert all global resources to that end. In the process, it might accidentally starve humanity. The scenario is much criticized, as it assumes there will be only one superintelligence to rule them all, not a series of competing superintelligences debating each other with impeccable logic and thinking things through in a perfectly rational manner.
Most of the catastrophic AGI scenarios require what is called FOOM: a fast take-off singularity in which a recursively self-improving AGI becomes all-powerful in a compressed timeframe. Many accelerationists dispute this possibility. Some cite thermodynamic limits on information processing or dispute that an IQ far greater than a human's is even possible. Others point out that more intelligence is not always useful, and that the ability to come up with a plan to eliminate humanity is not the same as the ability to follow through on it. As for ourselves, we have published research pointing out that even a theoretically optimal intelligence must account for embodiment, which imposes mathematical upper limits on intelligence. A superintelligence would need to understand cause and effect, and that involves constructing a causal identity, or self, that is entirely absent from current AI. Whatever the nature of the objection, many dispute the validity of FOOM as a premise of doomsday scenarios. Certainly, to achieve human extinction, an AI would need robots to stop humans from resisting the old-fashioned way: by blowing up power lines, internet cables, and data centers. Robotics is notoriously slower to develop than AI because it must deal with the disorderly world of atoms rather than tidy mathematical bits.
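Whether FOOM is even coherent turns largely on one modeling assumption: does each increment of capability make the next increment easier, or does it run into diminishing returns? A toy simulation (purely illustrative, and not a claim about any real system) shows how sharply the conclusion depends on that single exponent:

```python
# Toy "recursive self-improvement" model: capability C grows at a rate C**p.
# The exponent p stands in for whether improvements compound (p > 1) or hit
# diminishing returns (p < 1). Purely illustrative.
def simulate(p, steps=2000, dt=0.01):
    c = 1.0
    for _ in range(steps):
        c += (c ** p) * dt
        if c > 1e12:          # treat this as a runaway take-off ("FOOM")
            return float("inf")
    return c

for p in (0.5, 1.0, 1.5):
    print(f"p = {p}: final capability = {simulate(p):.3g}")
# p = 0.5 gives slow polynomial growth, p = 1.0 ordinary exponential growth,
# and p = 1.5 a finite-time blow-up. The entire doomsday argument hides in the
# choice of p, which is exactly what the sceptics dispute.
```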
Some do not see LLMs as the path to AGI at all and consequently do not regard the likes of ChatGPT as an existential threat. Indeed, there have been suggestions of diminishing marginal returns from ever-larger LLMs. Ben Goertzel, who co-steers the annual AGI conference with Marcus Hutter of Google DeepMind, holds that incremental improvement of such LLMs is not a viable approach to working toward human-level AGI.
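The diminishing-returns suggestion has a quantitative form. Empirical scaling-law fits (for example, Hoffmann et al., 2022) model an LLM's training loss as an irreducible constant plus power-law terms in parameter count N and training tokens D; the exponents quoted below are indicative values from that literature rather than exact constants:

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad \alpha \approx 0.34, \quad \beta \approx 0.28. $$

Because both exponents are well below one, each further reduction in loss requires a multiplicative increase in parameters, data, and therefore compute, which is one way to read the sceptics' claim that simply scaling LLMs is not a road to AGI.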
LLMs require massive amounts of data to mimic the understanding of concepts that a human can acquire from a few examples. We see the future of genuine AGI in systems that learn as efficiently as humans do, with novel inventions such as artificial phenomenal consciousness (which, at present, remain purely theoretical), rather than in a few billion more LLM parameters obtained in data centers running thousands of GPUs.
The bottom line is that decelerationists are making extraordinary claims and demanding restrictions on the basis of contested theoretical speculations. Smaller competitors contend that the licensing requirements or other red tape that might be applied to AI would serve large incumbents: heavy regulation and compliance costs would make life harder for smaller startup rivals.
What Are Reasonable AI Fears?
Although there are some valid concerns, an AI moratorium would be misguided.
OpenAI does not have a monopoly on achieving AGI. There is a horde of competing firms. Days after the coup and counter-coup, Google launched Gemini, claiming it beat ChatGPT on many of the metrics used to measure LLM performance. Besides Google, there are other major rivals such as Meta, Anthropic, and xAI (maker of Grok). There are dozens of smaller firms, each seeking to make the next AI breakthrough, and not necessarily by training LLMs. Foundational research is published online; researchers at Google published the paper that started the transformer boom in 2017. The genie is out of the bottle. The beneficial applications of AI are numerous, and regulation of the technology (rather than its applications) is impractical. Regulations cannot stop countries, or individuals, from working towards AGI. If the US Congress and the EU decide to overregulate AI, talent and investment will go elsewhere. Strategic rivals such as China might overtake the West and impose a one-party, AI-enforced dictatorship. We believe it is essential that the open societies of the West retain their technological edge.
Of course, it may not be possible to accelerate faster than we already are. There is a limited supply of talent, and cutting-edge AI papers are not easy reads. People focus on successes and tend to forget failures: AI has boomed and busted many times. Software engineers are notoriously optimistic about timelines; what they say will take a month or two will often take a year or ten. Leonardo da Vinci designed helicopters and tanks in the fifteenth century; they did not ship until the twentieth.
Embedding moral competence in AIs is the obvious answer to aligning AI with human values, and, indeed, that is what lies at the heart of rival Anthropic's strategy of constitutional AI. Even so, we must recognize that acute and chronic moral disagreement is endemic among human beings; we see this in Gaza and Ukraine. For all their talk of alignment as a solution to safety and a route to beneficial AGI, the six members of the OpenAI board could not align themselves. It is perhaps more sensible to align AI with the laws of the jurisdiction in which it operates. No doubt some AI firms will decide not to operate in certain jurisdictions out of distaste for local norms and governance. Given that LLMs have already passed bar exams, we see the implementation of law-abiding AI as realistic and achievable. Getting humans to align on which laws to abide by is the real challenge.
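For readers unfamiliar with the term, the supervised phase of constitutional AI can be pictured as a critique-and-revise loop in which the model checks its own draft answers against a short list of written principles (Bai et al., 2022). The sketch below is a schematic simplification, not Anthropic's actual implementation; the principles and the generate function are placeholders.

```python
# Schematic of a constitutional-AI-style critique-and-revise loop.
# The principles and the generate() stub are placeholders for illustration only.

PRINCIPLES = [
    "Choose the response least likely to help someone break the law.",
    "Choose the response that is most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    # Stand-in for a call to an instruction-following LLM; returns a canned
    # string so the sketch runs end-to-end without any API.
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_prompt: str) -> str:
    answer = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique this answer against the principle.\n"
            f"Principle: {principle}\nAnswer: {answer}"
        )
        answer = generate(
            f"Rewrite the answer to address the critique.\n"
            f"Critique: {critique}\nAnswer: {answer}"
        )
    return answer

print(constitutional_revision("How should an AI respond to a request to pick a lock?"))

# In the real method the revised answers become training data, so the finished
# model internalizes the principles rather than running this loop at inference time.
```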
To conclude, we do not envisage a future in which some monopolist singularity of an AGI becomes self-aware and, in the manner of Skynet, exterminates humanity. Nor do we see a future in which some superintelligent zombie AGI decides to optimize for paperclip production and starves us by accident. Rather, we see a world of beneficial cooperation and competition between firms publishing papers and developing rival products in the pursuit of excellence. We predict a competitive multiplicity of AGIs with empathic functionality, incrementally improved by a free market, that will brighten the light of human consciousness, not extinguish it.