As the months of 2024 unfold, we are all part of an extraordinary year for the history of both democracy and technology. More countries and people will vote for their elected leaders than in any year in human history. At the same time, the development of AI is racing ever faster ahead, offering extraordinary benefits but also enabling bad actors to deceive voters by creating realistic deepfakes of candidates and other individuals. The contrast between the promise and peril of new technology has seldom been more striking.
This has quickly become a year that requires all of us who care about democracy to work together to meet the moment.
Today, the tech sector came together at the Munich Security Conference to take a vital step forward. Standing together, 20 companies [1] announced a new Tech Accord to Combat Deceptive Use of AI in 2024 Elections. Its goal is straightforward but critical: to combat video, audio, and images that fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders. It is not a partisan initiative or designed to discourage free expression. It aims instead to ensure that voters retain the right to choose who governs them, free of this new type of AI-based manipulation.
The challenges are formidable, and our expectations must be realistic. But the accord represents a rare and decisive step, unifying the tech sector with concrete voluntary commitments at a vital time to help protect the elections that will take place in more than 65 nations between the beginning of March and the end of the year.
While many more steps will be needed, today marks the launch of a genuinely global initiative to take immediate practical steps and generate more and broader momentum.
It's worth starting with the problem we need to solve. New generative AI tools make it possible to create realistic and convincing audio, video, and images that fake or alter the appearance, voice, or actions of people. They're often called deepfakes. The costs of creation are low, and the results are stunning. The AI for Good Lab at Microsoft first demonstrated this for me last year when they took off-the-shelf products, spent less than $20 on computing time, and created realistic videos that not only put new words in my mouth, but had me using them in speeches in Spanish and Mandarin that matched the sound of my voice and the movement of my lips.
In reality, I struggle with French and sometimes stumble even in English. I can't speak more than a few words in any other language. But, to someone who doesn't know me, the videos appeared genuine.
AI is bringing a new and potentially more dangerous form of a manipulation problem we've been working to address for more than a decade, from fake websites to bots on social media. In recent months, the broader public has quickly witnessed this expanding problem and the risks it creates for our elections. In advance of the New Hampshire primary, voters received robocalls that used AI to fake the voice and words of President Biden. This followed the documented release, beginning in December, of multiple deepfake videos of UK Prime Minister Rishi Sunak. These are similar to deepfake videos the Microsoft Threat Analysis Center (MTAC) has traced to nation-state actors, including a Russian state-sponsored effort to splice fake audio segments into excerpts of genuine news videos.
This all adds up to a growing risk of bad actors using AI and deepfakes to deceive the public in an election. And this goes to a cornerstone of every democratic society in the world: the ability of an accurately informed public to choose the leaders who will govern them.
This deepfake challenge connects two parts of the tech sector. The first is companies that create AI models, applications, and services that can be used to create realistic video, audio, and image-based content. And the second is companies that run consumer services where individuals can distribute deepfakes to the public. Microsoft works in both spaces. We develop and host AI models and services on Azure in our datacenters, create synthetic voice technology, offer image creation tools in Copilot and Bing, and provide applications like Microsoft Designer, a graphic design app that enables people to easily create high-quality images. And we operate hosted consumer services including LinkedIn and our Gaming network, among others.
This has given us visibility into the full range of the problem's evolution and the potential for new solutions. As we've seen the problem grow, the data scientists and engineers in our AI for Good Lab and the analysts in MTAC have directed more of their focus, including through the use of AI, to identifying deepfakes, tracking bad actors, and analyzing their tactics, techniques, and procedures. In some respects, we've seen practices we've long combated in other contexts through the work of our Digital Crimes Unit, including activities that reach into the dark web. While the deepfake challenge will be difficult to defeat, this has persuaded us that we have many tools we can put to work quickly.
Like many other technology issues, our most basic challenge is not technical but altogether human. As the months of 2023 drew to a close, deepfakes had become a growing topic of conversation in capitals around the world. But while everyone seemed to agree that something needed to be done, too few people were doing enough, especially on a collaborative basis. And with elections looming, it felt like time was running out. That need for a new sense of urgency, as much as anything, sparked the collaborative work that has led to the accord launched today in Munich.
I believe this is an important day, culminating hard work by good people in many companies across the tech sector. The new accord brings together companies from both relevant parts of our industry: those that create AI services that can be used to create deepfakes, and those that run hosted consumer services where deepfakes can spread. While the challenge is formidable, this is a vital step that will help better protect the elections that will take place this year.
It's helpful to walk through what this accord does, and how we'll move immediately to implement it at Microsoft.
The accord focuses explicitly on a concretely defined set of deepfake abuses. It addresses "Deceptive AI Election Content," which is defined as convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.
The accord addresses this content abuse through eight specific commitments, and they're all worth reading. To me, they fall into three critical buckets:
First, the accord's commitments will make it more difficult for bad actors to use legitimate tools to create deepfakes. The first two commitments in the accord advance this goal. In part, this focuses on the work of companies that create content generation tools and calls on them to strengthen the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. This includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. It all needs to be based on strong and broad-based data analysis. Think of this as safety by design.
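To make the "blocking of abusive prompts" concrete, here is a minimal sketch of a pre-generation check. All names are hypothetical, and production services would rely on trained classifiers rather than keyword lists; this only illustrates the idea of refusing requests that combine impersonation with an election target before any content is generated.

```python
# Hypothetical sketch of a pre-generation prompt check. Real systems use
# trained classifiers and broad data analysis; keyword matching is only a
# stand-in to show where such a gate sits in the pipeline.
IMPERSONATION_PATTERNS = ["deepfake of", "clone the voice of", "make it look like"]
PROTECTED_ROLES = ["candidate", "election official", "president", "prime minister"]

def should_block(prompt: str) -> bool:
    """Block prompts that both request impersonation and target an election figure."""
    p = prompt.lower()
    asks_impersonation = any(pat in p for pat in IMPERSONATION_PATTERNS)
    targets_election = any(role in p for role in PROTECTED_ROLES)
    return asks_impersonation and targets_election
```

A service applying this gate would refuse the request, log it for analysis, and feed repeated abuse into the rapid-ban process the accord describes.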
This also focuses on the authenticity of content by advancing what the tech sector refers to as content provenance and watermarking. Video, audio, and image design products can incorporate content provenance features that attach metadata or embed signals in the content they produce, with information about who created it, when it was created, and the product that was used, including the involvement of AI. This can help media organizations and even consumers better separate authentic from inauthentic content. And the good news is that the industry is moving quickly to rally around a common approach, the C2PA standard, to help advance this.
But provenance is not sufficient by itself, because bad actors can use other tools to strip this information from content. As a result, it is important to add other methods, like embedding an invisible watermark alongside C2PA-signed metadata, and to explore ways to detect content even after these signals are removed or degraded, such as by fingerprinting an image with a unique hash that might allow people to match it with a provenance record in a secure database.
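The fingerprinting idea can be sketched as follows. This is an assumed-names illustration, not an actual implementation: a real system would use a perceptual hash that survives resizing and re-encoding, and the database would hold signed C2PA manifests, but the lookup logic is the same.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Hypothetical: a cryptographic hash as the fingerprint. A production
    # system would use a perceptual hash robust to resizing/re-encoding.
    return hashlib.sha256(content).hexdigest()

# Hypothetical secure database mapping fingerprints to provenance records.
provenance_db: dict = {}

def register(content: bytes, record: dict) -> None:
    """Store a provenance record keyed by the content's fingerprint."""
    provenance_db[fingerprint(content)] = record

def lookup(content: bytes):
    # Works even if embedded C2PA metadata was stripped from the file,
    # because the match is on the content itself, not on the metadata.
    return provenance_db.get(fingerprint(content))
```

The key property is that a copy whose embedded metadata has been stripped still matches its provenance record, while altered content does not.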
Today's accord helps move the tech sector farther and faster in committing to, innovating in, and adopting these technological approaches. It builds on the voluntary White House commitments first embraced by several companies in the United States this past July, and on the European Union's Digital Services Act and its focus on the integrity of electoral processes. At Microsoft, we are working to accelerate our work in these areas across our products and services. And next month we are launching new Content Credentials as a Service to help support political candidates around the world, backed by a dedicated Microsoft team.
I'm encouraged by the fact that, in many ways, all these new technologies represent the latest chapter of work we've been pursuing at Microsoft for more than 25 years. When CD-ROMs and then DVDs became popular in the 1990s, counterfeiters sought to deceive the public and defraud consumers by creating realistic-looking fake versions of popular Microsoft products.
We responded with an evolving array of increasingly sophisticated anti-counterfeiting features, including invisible physical watermarking, that are the forerunners of the digital protection we're advancing today. Our Digital Crimes Unit developed approaches that put it at the global forefront in using these features to protect against one generation of technology fakes. While it's impossible to eradicate any form of crime completely, we can again call on these teams and this spirit of determination and collaboration to put today's advances to effective use.
Second, the accord brings the tech sector together to detect and respond to deepfakes in elections. This is an essential second category, because the harsh reality is that determined bad actors, perhaps especially well-resourced nation-states, will invest in their own innovations and tools to create deepfakes and use these to try to disrupt elections. As a result, we must assume that we'll need to invest in collective action to detect and respond to this activity.
The third and fourth commitments in today's accord will advance the industry's detection and response capabilities. At Microsoft, we are moving immediately in both areas. On the detection front, we are harnessing the data science and technical capabilities of our AI for Good Lab and MTAC team to better detect deepfakes on the internet. We will call on the expertise of our Digital Crimes Unit to invest in new threat intelligence work to pursue the early detection of AI-powered criminal activity.
We are also launching, effective immediately, a new web page, Microsoft-2024 Elections, where a political candidate can report a concern to us about a deepfake of themselves. In essence, this empowers political candidates around the world to aid in the global detection of deepfakes.
We are combining this work with the launch of an expanded Digital Safety Unit. This will extend the work of our existing digital safety team, which has long addressed abusive online content and conduct that impacts children or that promotes extremist violence, among other categories. This team has special expertise in responding on a 24/7 basis to weaponized content, such as footage from mass shootings, which we act immediately to remove from our services.
We are deeply committed to the importance of free expression, but we do not believe this should protect deepfakes or other deceptive AI election content covered by today's accord. We will therefore act quickly to remove and ban this type of content from LinkedIn, our Gaming network, and other relevant Microsoft services, consistent with our policies and practices. At the same time, we will promptly publish a policy that makes clear our standards and approach, and we will create an appeals process that will move quickly if a user believes their content was removed in error.
Equally important, as addressed in the accord's fifth commitment, we are dedicated to sharing with the rest of the tech sector and appropriate NGOs information about the deepfakes we detect and the best practices and tools we help develop. We are committed to advancing stronger collective action, which has proven indispensable in protecting children and addressing extremist violence on the internet. We deeply respect and appreciate the work that other tech companies and NGOs have long advanced in these areas, including through the Global Internet Forum to Counter Terrorism, or GIFCT, and with governments and civil society under the Christchurch Call.
Third, the accord will help advance transparency and build societal resilience to deepfakes in elections. The final three commitments in the accord address the need for transparency and the broad resilience we must foster across the world's democracies.
As reflected in the accord's sixth commitment, we support the need for public transparency about our corporate and broader collective work. This commitment to transparency will be part of the approach our Digital Safety Unit takes as it addresses deepfakes of political candidates and the other categories covered by today's accord. This will also include the development of a new annual transparency report we will publish that covers our policies and data about how we are applying them.
The accord's seventh commitment obliges the tech sector to continue to engage with a diverse set of global civil society organizations, academics, and other subject matter experts. These groups and individuals play an indispensable role in the promotion and protection of the world's democracies. For more than two centuries, they have been fundamental to the advance of democratic rights and principles, including their critical work to advance the abolition of slavery and the expansion of the right to vote in the United States.
We look forward, as a company, to continued engagement with these groups. When diverse groups come together, we do not always start with the same perspective, and there are days when the conversations can be challenging. But we appreciate from longstanding experience that one of the hallmarks of democracy is that people do not always agree with each other. Yet, when people truly listen to differing views, they almost always learn something new. And from this learning there comes a foundation for better ideas and greater progress. Perhaps more than ever, the issues that connect democracy and technology require a broad tent with room to listen to many different ideas.
This also provides a basis for the accord's final commitment, which is support for work to foster public awareness and resilience regarding deceptive AI election content. As we've learned first-hand in recent elections in places as distant from each other as Finland and Taiwan, a savvy and informed public may provide the best defense of all against the risk of deepfakes in elections. One of our broad content provenance goals is to equip people with the ability to look easily for C2PA indicators that will denote whether content is authentic. But this will require public awareness efforts to help people learn where and how to look for this.
We will act quickly to implement this final commitment, including by partnering with other tech companies and supporting civil society organizations to help equip the public with the information needed. Stay tuned for new steps and announcements in the coming weeks.
What comes next? That is the final question we should all ask as we consider the important step taken today. And, despite my enormous enthusiasm, I would be the first to say that this accord represents only one of the many vital steps we'll need to take to protect elections.
In part this is because the challenge is formidable. The initiative requires new steps from a wide array of companies. Bad actors will likely innovate themselves, and the underlying technology is continuing to change quickly. We need to be hugely ambitious but also realistic. We'll need to continue to learn, innovate, and adapt. As a company and an industry, Microsoft and the tech sector will need to build upon today's step and continue to invest in getting better.
But even more importantly, there is no way the tech sector can protect elections by itself from this new type of electoral abuse. And, even if it could, it wouldn't be proper. After all, we're talking about the election of leaders in a democracy. And no one elected any tech executive or company to lead any country.
Once one reflects for even a moment on this most basic of propositions, it's abundantly clear that the protection of elections requires that we all work together.
In many ways, this begins with our elected leaders and the democratic institutions they lead. The ultimate protection of any democratic society is the rule of law itself. And, as we've noted elsewhere, it's critical that we implement existing laws and support the development of new laws to address this evolving problem. This means the world will need new initiatives by elected leaders to advance these measures.
Among other areas, this will be essential to address the use of AI deepfakes by well-resourced nation-states. As we've seen across the cybersecurity and cyber-influence landscapes, a small number of sophisticated governments are putting substantial resources and expertise into new types of attacks on individuals, organizations, and even countries. Arguably, on some days, cyberspace is the space where the rule of law is most under threat. And we'll need more collective inter-governmental leadership to address this.
As we look to the future, it seems to those of us who work at Microsoft that we'll also need new forms of multistakeholder action. We believe that initiatives like the Paris Call and Christchurch Call have had a positive impact on the world precisely because they have brought people together from governments, the tech sector, and civil society to work on an international basis. As we address not only deepfakes but almost every other technology issue in the world today, we find it hard to believe that any one part of society can solve a big problem by acting alone.
This is why it's so important that today's accord recognizes explicitly that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders.
Perhaps more than anything, this needs to be our North Star.
Only by working together can we preserve timeless values and democratic principles in a time of enormous technological change.
[1] Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, TruePic, and X.