The Gist
Artificial general intelligence (AGI) has been the Holy Grail of AI for decades. AGI, a form of strong AI, is defined as AI that can perform as well as or better than humans on a wide range of cognitive tasks. There is much debate over when artificial general intelligence may be fully realized, especially given the current evolution of large language models (LLMs). For many people, AGI is something out of a science fiction movie that remains mostly theoretical. Others believe we have already reached AGI with the latest releases of GPT-4o and Gemini Advanced.
Historically, we have used the Turing test as the measurement to determine whether a system has reached artificial general intelligence. Created by Alan Turing in 1950 and originally called the imitation game, the test involves three participants: an interrogator who poses questions, the machine or system under evaluation, and a human who answers the same questions alongside the machine for comparison.
The criticism of the test is that it doesn't measure intelligence or any other human qualities. Its foundational assumption, that an interrogator can determine whether a machine is thinking by comparing its behavior with human behavior, involves a great deal of subjectivity and is not necessarily deterministic.
There is also a lack of consensus on whether modern LLMs have actually achieved AGI. In June 2022, a Google engineer claimed LaMDA had passed the test, but critics quickly dismissed this as an advancement in fooling people into perceiving intelligence rather than progress toward AGI. The reality is that the test has outlived its usefulness.
Ray Kurzweil, a technology futurist, has spent much of his career predicting when we will reach AGI. In a recent talk at SXSW, he said he is sticking to his original 1999 prediction that AI will match or surpass human intelligence by 2029.
But how will we know?
Related Article: The Quest for Achieving Artificial General Intelligence
Horizontal AI products like ChatGPT, Gemini, Midjourney and DALL-E have given millions of users exposure to the power of AI. To many, these AI platforms seem very smart: they can generate answers, compose songs and write code in seconds.
However, there is a big difference between AI and AGI. Today's AI platforms are essentially highly efficient prediction machines, trained on a large corpus of data. That training does not give them creativity, logical reasoning or sensory perception.
As we move closer to artificial general intelligence, we need an accepted definition of AGI and a framework that truly measures the critical aspects of intelligence, such as reasoning, creativity and sentience.
One approach is to consider artificial general intelligence as an end-to-end intelligence supply chain encompassing all the capabilities needed to achieve AGI.
We can group the critical components needed for AGI into four major categories as follows:
Today's AI systems mostly excel at categories 1 and 2. For artificial general intelligence to be attained, we will need systems that can also accomplish 3 and 4.
Achieving AGI will require further advances in algorithms, computing and data beyond what powers the models of today. Mimicking complex human behavior such as creativity, perception, learning and memory will require embodied cognition: learning from a multitude of senses or inputs. We also need systems and infrastructure that go beyond training.
Human intelligence is heavily based on logical reasoning. We understand cause and effect, deduce information from existing knowledge and make inferences. Reasoning algorithms let a system traverse knowledge representations, drawing conclusions and finding solutions. This goes beyond basic pattern matching, enabling a more humanlike problem-solving ability. Replicating similar processes is fundamental for an AI to achieve AGI.
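As a toy illustration (all facts and rules here are invented for the sketch), here is forward chaining, a classic reasoning technique in which a system repeatedly applies rules to known facts to derive new ones, rather than pattern-matching against training data:

```python
# Toy forward-chaining reasoner over (subject, predicate, object) facts.
facts = {("socrates", "is", "human")}
# Rule: if (?x, is, human) then (?x, is, mortal) -- illustrative only.
rules = [((None, "is", "human"), (None, "is", "mortal"))]

changed = True
while changed:
    changed = False
    for (_, p_req, o_req), (_, p_new, o_new) in rules:
        for (s, p, o) in list(facts):
            if p == p_req and o == o_req:      # premise matches a known fact
                derived = (s, p_new, o_new)    # bind ?x := s, derive conclusion
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(facts)  # now includes ("socrates", "is", "mortal")
```

Production reasoning systems traverse far larger knowledge graphs with far richer rule languages, but the core loop, deriving new knowledge from existing knowledge, is the same.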
The timing of artificial general intelligence remains uncertain, but when it arrives, it's going to impact our lives, businesses and society significantly.
The real power of AI technology is still ahead of us.
Related Article: Can We Fix Artificial Intelligence's Serious PR Problem?
One of the prerequisites for achieving artificial general intelligence is the capability for AI inference, which is when a trained AI model produces predictions or conclusions. Much of the computing power today is focused instead on model training, the stage when data is fed into a learning algorithm to produce a model. Training is what enables AI models to make accurate predictions when prompted.
AI can be divided into two major market segments: training and inference. Today, many companies are focused on creating high-performance hardware for data center providers to conduct massive AI model training. For instance, Nvidia controls more than 95% of the specialized AI chip market. It sells to major tech companies like Amazon, Meta and Microsoft, which together are believed to make up roughly 40% of its revenue.
However, the market will soon shift its focus to building inferencing infrastructure for generative AI applications. The inferencing market will quickly grow as Fortune 500 companies that are currently testing generative AI applications move into production deployment. New applications will also emerge that will require scale to support workloads across centralized cloud, edge computing and IoT (Internet of Things) devices.
Model training is a very computationally intensive process that takes a lot of time to complete. Inference is usually faster and much less resource-intensive: it boils down to running AI applications or workloads after models have been trained.
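A minimal sketch of the two stages (assumes scikit-learn; the model and data are illustrative): training fits the parameters once, at significant cost, while inference reuses those fixed parameters cheaply, once per request:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data, purely for illustration.
X = np.random.randn(10_000, 20)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)   # training: expensive, done once

request = np.random.randn(1, 20)         # a single end-user request
print(model.predict(request))            # inference: cheap, done per request
```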
Inference is going to be 100 times bigger than training. Nvidia is really good at training but is not ideal for inference.
A pivot from training to inference may not be easy.
Nvidia was founded in 1993, long before the AI craze we see today. The company was not initially focused on supplying AI hardware and software solutions; instead it focused on creating graphics cards. As the PC market expanded and new applications such as Windows and gaming became prevalent, dedicated hardware became necessary to handle the complicated work of 3D graphics processing. The opportunity to create high-performance processing units for the intensive computational demands of the PC and gaming market does not come along very often.
It turns out Nvidia struck gold with its GPU architectures. GPUs are well suited for AI for three primary reasons: they employ parallel processing; the systems scale up through high-performance interconnects, creating supercomputing capabilities; and the software for managing and tuning the stack for AI is broad and deep.
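A rough sketch of the first point (assumes PyTorch; the GPU branch needs a CUDA device): the same matrix multiply, the workhorse operation of neural networks, run first on the CPU and then across the GPU's thousands of parallel cores:

```python
import time
import torch

x = torch.randn(2048, 2048)
y = torch.randn(2048, 2048)

t0 = time.time()
_ = x @ y                                   # CPU matmul
print(f"CPU: {time.time() - t0:.3f}s")

if torch.cuda.is_available():               # GPU path, if one is present
    xg, yg = x.cuda(), y.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = xg @ yg                             # thousands of cores in parallel
    torch.cuda.synchronize()
    print(f"GPU: {time.time() - t0:.4f}s")
```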
The idea of having separate graphics hardware existed before Nvidia came onto the scene. For instance, the first Atari video game consoles, shipped in the 1970s, had graphics chips inside, and IBM had released the Professional Graphics Controller (PGC), which used an onboard Intel 8088 microprocessor to do video tasks. Silicon Graphics Inc. (SGI) also emerged as a dominant graphics player in the late 1980s.
Things changed rapidly in 1993 with the release of Doom, a 3D game by developer id Software. Doom was the first mature, action-packed first-person shooter on the market. Quake quickly followed, offering brand-new technical breakthroughs such as full real-time 3D rendering and online multiplayer. This paved the way for the dedicated graphics card market.
Nvidia didn't immediately rise to fame. Its first product, the NV1, arrived in May 1995 as a multimedia PCI card with graphics, sound and gamepad support. The product flopped because the NV1 was not compatible with the leading graphics APIs of the time (OpenGL, 3dfx's Glide, etc.). It wasn't until the Riva 128, launched in 1997, that the company saw success. At the time of the launch, Nvidia had less than six weeks of cash left in the bank!
By the early 2000s, the graphics card market had drastically consolidated from over 30 vendors to just three: Nvidia, ATI and Intel, with Intel taking up the low end. Nvidia coined the phrase graphics processing unit, or GPU, and set its sights on the broader compute market.
The opportunity to create new businesses in adjacent markets, outside your core business, is not something you see frequently. A shining example is Amazon, an online commerce company that created a cloud computing platform, Amazon Web Services (AWS), from the technology components it built to run a massively scalable commerce platform. Uber, a ride-sharing company, leveraged its backend infrastructure to launch a food delivery service, Uber Eats.
In a similar fashion, Nvidia realized that the graphics processing units powering many of the graphics boards in PCs and gaming consoles had another use: accelerating mathematical operations. By investing in making GPUs programmable, Nvidia opened their parallel processing capabilities to a wider variety of applications. This made high-performance computing more readily accessible and able to run on commodity hardware.
Nvidia's first venture into the high-performance computing (HPC) space came with its CUDA parallel computing architecture, which enabled GPUs to be used for general-purpose computing tasks. This capability helped spark early breakthroughs in modern AI. AlexNet, a convolutional neural network (CNN) used to classify images, was unveiled in 2012; it was trained using just two of Nvidia's programmable GPUs.
The big discovery was that GPUs could massively accelerate neural network processing, or model training. As word spread among computer and data scientists, demand for Nvidia's GPUs soared. In some ways, the AI revolution found Nvidia.
But that was just the beginning. Nvidia's relentless pursuit of innovation led to a series of breakthrough architectures, starting with the Turing architecture in 2018, which fused real-time ray tracing, AI, simulation and rasterization to fundamentally change the way graphics processing worked. Turing featured new tensor cores, processors that accelerate deep learning training and inference, providing up to 500 trillion tensor operations per second. Tensor cores are essential building blocks of the Nvidia solution that incorporates hardware, networking, software, libraries and optimized AI models. They deliver significantly faster AI training times than traditional CUDA cores alone, which are primarily designed for general-purpose parallel processing tasks.
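As a hedged illustration of how software reaches tensor cores (assumes PyTorch on a CUDA GPU that has them): tensor cores accelerate reduced-precision matrix math, which torch.autocast requests automatically:

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    # autocast runs eligible ops in fp16, the precision tensor cores accelerate
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        c = a @ b
    print(c.dtype)  # torch.float16
```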
Nvidia's rapid rate of innovation continued with the subsequent Ampere, Hopper, Ada Lovelace and now Blackwell architectures. The H100 Tensor Core GPU was the first based on the Hopper architecture, with over 80 billion transistors, a built-in transformer engine, advanced NVLink inter-GPU communications and second-generation multi-instance GPU (MIG) technology.
The growth of computational power used to be governed by Moore's Law, which predicted a doubling roughly every two years. Nvidia's new Blackwell GPU has shattered that expectation, increasing computational speed by over a thousand times in just eight years.
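A quick back-of-the-envelope check of that comparison:

```python
# Moore's Law: a doubling roughly every two years.
years, doubling_period = 8, 2
moores_gain = 2 ** (years / doubling_period)
print(moores_gain)          # 16x over eight years
print(1000 / moores_gain)   # the claimed ~1000x is ~60x beyond that pace
```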
What's good for training may not be good for inference.
There are still a limited number of AI applications in production today. Outside of a few large tech companies, very few corporations have advanced to running large-scale AI models in production, so most of the hardware focus has been on optimizing platforms for training.
As the number of AI applications increases, the amount of compute a company uses to run models in response to end-user requests will grow significantly, eventually exceeding what it spends on training today. The focus will then shift to optimizing hardware to reduce inference costs.
GPUs are well suited to the computational complexity of training. Training workloads can be split across a few tightly interconnected GPUs, and the constant communication between them makes it unrealistic to reduce latency by distributing the work across low-end CPUs.
However, this is not true for inference. The model weights are fixed and can easily be duplicated across many machines, so no communication between them is needed. This makes an army of commodity PCs and CPUs very appealing for applications relying on inference.
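A toy sketch of why inference parallelizes so easily (the "model" here is a stand-in linear scorer): each worker process holds an identical copy of the fixed weights and answers requests independently, with no cross-worker communication:

```python
from multiprocessing import Pool

WEIGHTS = [0.5, -1.2, 0.7]        # fixed, already-trained parameters

def infer(features):
    # every worker has its own identical copy of WEIGHTS
    return sum(w * x for w, x in zip(WEIGHTS, features))

if __name__ == "__main__":
    requests = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    with Pool(processes=3) as pool:    # an "army" of commodity CPU workers
        print(pool.map(infer, requests))
```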
New companies like Groq are emerging that have the potential to be serious competitors in the AI chip market. This could pose a threat to Nvidia's dominance in the AI world.
Today, all the AI giants rely heavily on Nvidia to supply them with computing cards, mostly for AI training with smaller demands on inference. The latest product, the H100, is still in high demand, remains costly (about $35,000 each) and only achieves inference speeds of 30-40 tokens per second. Compared to inference, training imposes more stringent computing card specifications, especially in terms of memory size, which is growing toward 300 GB per card.
Groq's approach to neural network acceleration is radically different from Nvidia's. Its architecture opts for a single large processor with hundreds of functional units, which significantly reduces instruction-decoding overhead. This architecture allows superior performance and reduced latencies, ideal for cloud services requiring real-time inference.
Groq's secret sauce is its Language Processing Unit (LPU) inference engine, specifically engineered to address the two major bottlenecks faced by large language models (LLMs): compute capacity and memory bandwidth. The LPU systems boast comparable, if not superior, compute power to GPUs and have eliminated external memory bandwidth bottlenecks, enabling faster generation of text sequences.
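To see why memory bandwidth is often the binding constraint, consider some rough arithmetic (the numbers below are illustrative assumptions, not vendor specifications): generating each token requires streaming essentially all of the model's weights from memory, so bandwidth caps tokens per second:

```python
# Back-of-the-envelope bound on single-stream decoding speed.
# All figures are assumptions for illustration.
model_params = 70e9            # a 70B-parameter model
bytes_per_param = 2            # fp16 weights
mem_bandwidth = 3.35e12        # ~3.35 TB/s, roughly H100-class HBM

weight_bytes = model_params * bytes_per_param          # 140 GB of weights
tokens_per_sec = mem_bandwidth / weight_bytes          # each token reads them all
print(f"{tokens_per_sec:.0f} tokens/sec upper bound")  # ~24
```

Under these assumptions the ceiling lands in the same ballpark as the 30-40 tokens per second cited above, which is why architectures that sidestep external memory bandwidth, as Groq's LPU aims to, can generate text so much faster.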
The realization that computational power was a bottleneck for AI's potential led to the inception of Groq and the creation of the LPU. Jonathan Ross, who initially began what became the TPU project at Google, started Groq in 2016.
Nvidia remains well entrenched and will likely not be easy to dethrone. However, Groq has demonstrated that its vision of an innovative processor architecture can compete with industry giants.
Tools are also emerging that enable more efficient machine learning inferencing. Developed by Georgi Gerganov (the GG in GGML), GGML is a powerful and versatile tensor library, empowering developers to build and deploy high-performance machine learning applications across a wide spectrum of devices. It is designed to bring large-scale machine learning models to commodity devices.
GGML is a lightweight engine that runs neural networks in C++. This is significant because it is fast, has no external dependencies, is multi-platform and can easily be ported to devices such as mobile phones. It defines a binary format for distributing large language models (LLMs) using quantization, a technique that allows LLMs to run on consumer hardware with effective CPU inferencing. In short, it enables big models to run on the CPU as fast as possible.
The benefit of GGML is that it requires fewer resources to run: typically 4x lower RAM requirements and 4x lower memory bandwidth requirements, and thus faster inference on the CPU.
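As a toy illustration of the quantization idea (this is not GGML's actual code; real GGML uses more elaborate block-wise formats), storing fp32 weights as int8 values plus a scale factor cuts memory roughly 4x, in line with the figure above:

```python
import numpy as np

w = np.random.randn(1024, 1024).astype(np.float32)   # original fp32 weights
scale = np.abs(w).max() / 127.0                      # one scale for the tensor
q = np.round(w / scale).astype(np.int8)              # quantized int8 weights

print(w.nbytes / q.nbytes)                           # 4.0 -- 4x less RAM
restored = q.astype(np.float32) * scale              # dequantized at inference
print(np.abs(w - restored).max())                    # small rounding error
```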
Traditionally, inference is done on centralized servers in the cloud. However, tools like GGML are making it possible to run model inference on commodity devices at the network's edge, which is critical for low-latency use cases like self-driving cars.
GGML is empowering AI developers to harness the full potential of machine learning on everyday hardware. It provides an impressive array of features, is an open standard and has been optimized for Apple Silicon. GGML is poised to play a pivotal role in shaping the future of edge computing.
The future of AI is undoubtedly headed toward inference-centric workloads. While the training of LLMs and other complex AI models gets a lot of current attention, inference makes up the vast majority of actual AI workloads.
Enterprises should begin to understand how inference works and how it will enable better use of AI to improve their products and services.