In the field of artificial intelligence, OpenAI, led by CEO Sam Altman, has emerged as a leading force within Silicon Valley on the strength of the company's ChatGPT chatbot and its mysterious Q* AI model.
While advancements in AI hold the potential for positive developments in the years ahead, OpenAI's Q* and other AI platforms have also raised concerns among government officials worldwide, who increasingly warn about possible threats to humanity that could arise from such technologies.
Among the year's most significant controversies involving AI, Altman was removed from his role as CEO of OpenAI in November, only to be reinstated 12 days later amid a drama that left several questions that, to date, remain unresolved.
On November 22, just days after Altman's temporary ousting as CEO of OpenAI, two people with knowledge of the situation told Reuters that several staff researchers had written a letter to the board of directors warning of a powerful artificial intelligence discovery that, they said, could threaten humanity.
In the letter addressed to the board, the researchers highlighted the capabilities and potential risks associated with artificial intelligence. Although the sources did not outline specific safety concerns, some of the researchers who authored the letter had reportedly raised concerns about an "AI scientist" team, formed by combining earlier "Code Gen" and "Math Gen" teams, whose work aimed to enhance the AI's reasoning abilities and its capacity to carry out scientific tasks.
In a surprising turn of events two days earlier, on November 20, Microsoft announced its decision to hire Altman along with Greg Brockman, OpenAI's president and one of its co-founders, who had resigned in solidarity with Altman. Microsoft said at the time that the duo would run an advanced research lab for the company.
Days later, Altman was reinstated as CEO of OpenAI after more than 700 of the company's employees threatened to quit and join Microsoft. In a recent interview, Altman described his initial response to the invitation to return following his dismissal, saying "it took me a few minutes to snap out of it and get over the ego and emotions to then be like, 'Yeah, of course I want to do that,'" Altman told The Verge.
"Obviously, I really loved the company and had poured my life force into this for the last four and a half years full-time, but really longer than that with most of my time. And we're making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI," Altman said.
But the AI soap opera doesn't stop there. On November 30, Altman announced that Microsoft would join OpenAI's board. The tech giant, which holds a 49 percent ownership stake in the company following a $13 billion investment, would assume a non-voting observer position on the board. Amidst all this turmoil, questions remained about what, precisely, the new Q* model is, and why it had so many OpenAI researchers concerned.
Q* (pronounced Q-star) is believed to be a project within OpenAI that aims to use machine learning for logical and mathematical reasoning. According to reports, OpenAI has been training its AI to perform elementary school-level mathematics. Concerned employees at OpenAI had reportedly said Q* could represent a breakthrough in the company's efforts to produce artificial general intelligence (AGI) that could surpass humans in the performance of various tasks, especially those that are economically valuable.
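OpenAI has not publicly confirmed how Q* works, but the name itself has fueled outside speculation that it draws on Q-learning, a classic reinforcement learning technique, possibly combined with A*-style search. Purely as a hypothetical illustration of what the "Q" might allude to, and not as a description of OpenAI's actual methods, the core tabular Q-learning update looks like this:

```python
# Hypothetical sketch of tabular Q-learning, the technique outside observers
# have speculated the "Q" in Q* may refer to. Illustrative only; this is not
# OpenAI's code or confirmed methodology.
from collections import defaultdict

ALPHA = 0.1  # learning rate: how far each update moves the estimate
GAMMA = 0.9  # discount factor: how much future reward matters

# Q[(state, action)] estimates the long-term value of taking `action` in `state`.
Q = defaultdict(float)

def q_update(state, action, reward, next_state, next_actions):
    """One Q-learning step: move Q(s, a) toward reward plus the discounted
    best value available from the next state."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy example: in state "problem", the action "correct_step" earns a reward
# of 1.0 and leads to state "solved", where no further actions are available.
q_update("problem", "correct_step", 1.0, "solved", [])
print(Q[("problem", "correct_step")])  # 0.1 after a single update
```

In large-scale systems, the lookup table is typically replaced by a neural network that approximates the Q-function; whether Q* does anything of the sort remains unconfirmed.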
One source, speaking to Reuters on background last month, said Q* could solve certain math problems extremely well, and that while the model currently performs math only at the level of a grade-school student, the fact that it aced those tests has researchers feeling hopeful about Q*'s future success.
An ability to master mathematics suggests that AI could possess enhanced reasoning abilities similar to human intelligence, and experts believe this capability could hold tremendous potential for groundbreaking scientific research. Nonetheless, in the aftermath of last month's OpenAI drama, questions remain about the new technologies under development at the company, and why they prompted at least some of its employees to believe they could potentially threaten humanity.
Looking back on the evolution of AI during 2023, several political figures from around the world have also shared their perspectives, and potential concerns, about the threats AI could represent if left unbridled.
On May 30, the Communist Party in China issued a public statement warning countries around the world about the risks AI poses and calling for heightened national security measures. Following a meeting in May chaired by President Xi Jinping, the party emphasized the tension between the government's goal of becoming a global leader in advanced technology and its worries about the potential negative impacts of these technologies on society and politics.
"It was stressed at the meeting that the complexity and severity of national security problems faced by our country have increased dramatically," the Chinese state-run Xinhua News Agency reported after the meeting.
More recently, Xi encouraged nations to join forces in addressing the challenges posed by artificial intelligence this past November at the World Internet Conference Summit in the eastern city of Wuzhen, where he said China is ready to promote the safe development of AI. Li Shulei, director of the Communist Party's publicity department, echoed Xi's statements at the conference, expressing China's commitment to collaborate with other nations to improve the "safety, reliability, controllability and fairness" of artificial intelligence technology.
Before the APEC Summit in San Francisco this past November, there was speculation that Biden and Xi might announce an agreement to restrict the use of artificial intelligence, particularly in areas like nuclear weapons control. No such agreement was reached, although Biden later stated that "we're going to get our experts together to discuss risk and safety issues associated with artificial intelligence."
On December 12, at the Global Partnership on Artificial Intelligence Summit in Delhi, Prime Minister Narendra Modi emphasized the potential dangers posed by artificial intelligence. The threats Modi highlighted included deepfake technology and potential terrorist activity that might leverage AI, although Modi also said he expects great things for his country from AI, including the potential to revolutionize India's tech landscape.
"AI has several positive impacts," Modi said, "but it could also have many negative impacts, and this is a matter of concern. AI can become the biggest tool to help humanity's development in the 21st century. But it can also play a major role in destroying us. Deepfake, for example, is a challenge for the world."
"If AI weapons reach terrorist organisations, it could pose a threat to global security. We have to move quickly to create a global framework for ethical use of AI among G20 countries. We have to take such steps together (so) that we take responsible steps," the Prime Minister said.
Just last month, the Prime Minister also called for measures to ensure the safety of AI across all sectors of society and urged G20 nations to join forces on the issue, emphasizing the importance of AI reaching people while prioritizing safety.
On October 7, Canadian Innovation Minister François-Philippe Champagne addressed the development and oversight of artificial intelligence, stating clearly that his role was to shift the narrative from fear to opportunity. When asked whether he viewed AI as a potential threat to humanity, however, he refrained from stating his thoughts or position on the question.
In an interview on CTV's Question Period the following day, Sunday, October 8, Champagne expressed to host Vassy Kapelos the importance of transparency when interacting with and managing AI technology.
Champagne said he advocates for an AI framework that addresses Canadians' concerns about the advancement of technology while fostering the development of responsible innovation. He also said he would let the experts debate what AI could do, emphasizing that his primary duty is to steer the shift from fear to opportunity.
When questioned again about his opinions on whether AI is a threat, Champagne said "there is a sense of anxiety, but at the same time, AI can do great things for humanity," adding that "it's for us to decide what we want AI to be."
"Artificial intelligence (AI) technologies offer promise for improving how the Government of Canada serves Canadians. As we explore the use of AI in government programs and services, we are ensuring it is governed by clear values, ethics, and laws," reads a statement on the Canadian government's website.
The Canadian government has been defining AI regulations since June 2022 with the Artificial Intelligence and Data Act, introduced as part of the larger Bill C-27.
However, critics and experts have said that Bill C-27 and the accompanying voluntary code of conduct are too ambiguous.
"I am hopeful it can do good things for humanity," Champagne said in response to a question about whether AI scares him. "But at the same time, we need to prevent the really bad stuff that you say experts have been warning us (about)."
President Frank-Walter Steinmeier of Germany has advocated for enhanced digital literacy in society to address the threats that the swift integration of artificial intelligence poses to democracy.
Steinmeier said in June that such concerns are becoming more pressing, especially as disinformation can be rapidly created and disseminated, instilling fear and confusion in the public, discrediting science, and destabilizing financial markets.
Steinmeier added that societies should develop ethical and legal frameworks to oversee AI, whether it is being used to assist with decision-making processes or to help people uncover instances where it is being used maliciously.
"We've been warned that potentially uncontrollable risks are coming our way," Steinmeier said. "And that deserves our attention."
Russian President Vladimir Putin has also entered the global conversation on the competition for AI development, predicting that the nation at the forefront of AI research will assert dominance in global affairs.
"Artificial intelligence is the future, not only for Russia, but for all humankind," Putin said to students during a Russian Knowledge Day event earlier this year. "It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."
The threat AI potentially represents to humanity raises not only political concerns as nations compete to leverage the technology, but also questions about the unforeseen consequences of how a future superintelligence might behave. The concern that AI might one day overtake humanity was once a narrative relegated to science fiction. Today, however, as such technologies advance, voices around the world increasingly urge that care be employed in their development to mitigate the many dangers that could arise from the misuse of machine intelligence.
Chrissy Newton is a PR professional and founder of VOCAB Communications. She hosts the Rebelliously Curious podcast, which can be found on The Debrief's YouTube Channel. Follow her on X: @ChrissyNewton and at chrissynewton.com.