The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.
Without tackling existing flaws within our current ecosystem, we are creating a fertile ground where disinformation and other harmful phenomena will continue to flourish, Kalim Ahmed writes.
In recent years, AI has dominated discussions across various sectors.
Its swift integration and growth outpace the general public's ability to keep up with the developments.
This has forced journalists, researchers, and civil society organisations to continuously highlight the drawbacks of these emerging technologies, and rightly so.
The discussion was sparked by the latest release of Sora, a text-to-video AI model, with many debating whether tools like these have any beneficial applications or an inherently negative impact on the world.
Some opted for a more measured approach and sought to investigate the datasets used to train models like Sora. They ran the prompts demoed by OpenAI through Midjourney, a contemporary of Sora, and discovered striking similarities between the outputs of both models.
A tech columnist uncovered that a hyperrealistic clip generated by Sora, which gained significant traction on social media, was trained using footage from Shutterstock, a company partnered with OpenAI. This highlights the limited availability of high-quality datasets for training such models.
One of the significant concerns that repeatedly arise in the public discourse is the harmful impacts of AI-assisted disinformation campaigns, particularly given the widespread availability of tools for creating deceptive content such as images, text, deep fakes, and voice cloning.
The overarching concern is that these tools enable the production of misleading content at scale, potentially skewing public perceptions and even influencing behaviours.
However, excessively highlighting the potential of AI-assisted disinformation shifts attention away from two important realities.
Firstly, traditional disinformation methods continue to be effective, and threat actors might exploit today's AI models to propagate disinformation not solely due to its sophistication, but rather because our current infrastructures are flawed, enabling disinformation to flourish regardless of AI involvement.
Secondly, generative AI's most effective uses for bad actors may not be disinformation at all, but other malicious activities that are being overlooked because of our singular focus on disinformation.
The excessive attention given to the dangers of AI-enabled disinformation has somewhat reached the levels of science fiction.
This is largely due to the way public discourse, particularly in popular news media, tends to sensationalize the topic rather than approaching it from a grounded, realistic perspective.
A prominent case worth examining is the incident from last year alleging a blast near the Pentagon, supported by an AI-generated image purportedly depicting the event.
The initial assertions originated from unreliable sources such as RT and were swiftly amplified by television outlets in India.
One could argue that this is a successful demonstration of the harms of AI-assisted disinformation. Upon closer examination, however, it serves as a case in point highlighting a flaw within our current information infrastructure.
The AI-generated image employed to substantiate the claim was of low quality; a more convincing counterfeit image could be produced using tools like Adobe Photoshop, potentially causing more significant harm.
Has AI decreased the time needed for malicious actors to generate false information? Undoubtedly. However, the counterfeit image would have still spread rapidly as it was disseminated by premium users on X (formerly Twitter).
For the public and many in traditional media, grasping the platform's complete transformation is challenging: a verified check mark no longer carries meaning, as it now simply distinguishes premium from non-premium users. Relying on the blue check mark to swiftly gauge the authenticity of a claim is obsolete.
Furthermore, when traditional television news outlets disseminated the claim, they disregarded fundamental principles of media literacy.
Since the assertion was being propagated by RT, a mouthpiece of the Kremlin engaged in a war with Ukraine (a nation backed by the US and its allies), this context alone should have prompted additional verification.
However, the visuals were promptly showcased on television screens across India without undergoing any form of cross-verification.
There have been numerous instances of disinformation campaigns orchestrated by pro-Kremlin actors that don't rely on AI.
The Microsoft Threat Analysis Center (MTAC) uncovered one such campaign where celebrity Cameo videos were manipulated to falsely depict Ukrainian President Volodymyr Zelenskyy as a drug addict.
Celebrities were paid to deliver messages to an individual named "Vladimir," urging him to seek assistance for substance abuse.
Subsequently, these videos were doctored to include links, emojis, and logos, creating an illusion of authenticity, as if they were originally posted on social media.
These videos were then covered by Russian state-affiliated news agencies, including RIA Novosti, Sputnik, and Russia-24.
Repeatedly, disinformation campaigns orchestrated by unidentified pro-Russia actors have sought to mimic mainstream media outlets to disseminate anti-Ukraine narratives.
They employ fabricated news article screenshots as well as recaps of news videos to achieve this. Neither of these methods necessitates AI; rather, they rely on traditional techniques like skilful manipulation using software such as Adobe Photoshop and Adobe After Effects.
The uproar surrounding AI-driven disinformation also serves to protect Big Tech companies from being held accountable.
To put it plainly, an AI-generated image, text, audio, or video serves little purpose without a mechanism to disseminate it to a wide audience.
Investigative reporting has continuously shed light on the deficiencies within the advertising policies regulating Meta platforms (Facebook, Instagram, Audience Network, and Messenger).
Last year, a report revealed a network of crypto scam ads operating on Meta platforms leveraging the images of celebrities.
These advertisements did not employ sophisticated AI; rather, they utilized manipulated celebrity images and exploited a flaw in Meta's ad policies.
These policies allowed URLs of reputable news outlets to be displayed, but upon clicking, users were redirected to crypto scam websites. This deceptive tactic has been termed a "bait and switch".
Similarly, a Russian disinformation campaign linked to its military intelligence has been using images of celebrities, including Taylor Swift, Beyoncé, Oprah, Justin Bieber, and Cristiano Ronaldo, to promote anti-Ukraine messages on social media by exploiting vulnerabilities in Meta and X's advertising policies.
These loopholes represent just one of the numerous flaws within our current system. Given the demonstrated efficacy of political advertising through Big Tech, we must prioritize addressing such flaws.
As we enter the biggest election year to date, involving a whopping 45% of the world's population, it is becoming apparent that Big Tech companies are beginning to roll back some of their policies regarding misinformation and political advertising.
In a major policy move last year, YouTube announced that it would no longer moderate misleading claims such as those that the 2020 presidential election was stolen from Trump, highlighting the continuing challenge of how to address misinformation and disinformation at scale, particularly on video-based platforms.
Even Meta, which has typically touted its third-party fact-checking initiative, recently introduced new controls. Users now have the option to determine whether they want fact-checked posts to be displayed prominently on their feed (accompanied by a fact-checked label) or to be pushed further down based on their preference.
However, this policy is not without its flaws. For instance, if an influential figure encourages all their followers to adjust their settings to prioritise fact-checked posts, it could defeat the entire purpose of fact-checking.
After all, it is not the first time public personalities have attempted to game algorithms using their following.
These major policy moves are happening in the US, where Big Tech companies are much more rigorous with their election interference and manipulation policies.
It should be noted that, globally, Meta does not extend its fact-checking policies to political advertisements or posts made by politicians, a major policy gap that has been criticized for years.
It's generally agreed by experts in the field of disinformation that these companies tend not to be as meticulous with third-world countries, with some using the term "step-child treatment" to describe this phenomenon.
This raises the question: if their policies have loopholes or insufficient measures in place, what impact does this have on these countries?
A linear approach to disinformation, i.e. assuming that fact-checking alone will solve the problem, is intellectually dishonest.
Even with potential flaws, these fact-checking initiatives supported by Meta and Google have gained global momentum and should continue to receive support.
However, this does not mean we should become content with what we have achieved and stop identifying other societal variables that drive disinformation.
Additionally, for individuals vulnerable to misinformation and disinformation, the determining factor is not necessarily the quality of the content, such as sophisticated AI-generated images and videos, but rather the overarching narrative that this content endorses.
For example, a deepfake video recently went viral showing an alleged France 24 broadcast in which a presenter announces President Emmanuel Macron has cancelled a scheduled visit to Ukraine over fears of an assassination attempt.
The broadcast was never aired by France 24, but it went so viral in the Russian information sphere that former Russian President and Deputy Chair of the Security Council of Russia Dmitry Medvedev was quick to share his opinion about the alleged news report on X.
The claim of an assassination attempt on Macron could have easily been cross-verified against reports from other reputable news outlets, yet its authenticity did not matter to those already inclined to believe it.
For disinformation producers, it is a binary classification challenge: much like scammers masquerading as Nigerian princes, they need only differentiate between susceptible and unsusceptible users to reach enough victims to profit.
Furthermore, when examining the impact of AI-enabled disinformation, it becomes evident that beyond disseminating false information, the predominant harm has been observed in the proliferation of non-consensual intimate images (NCII) and scams.
This is where bad-faith actors have identified the most impactful application of generative AI thus far.
Moreover, when considering foreign influence in local elections, the hacking of the Clinton campaign prior to the 2016 US Elections proved to be considerably advantageous for bad-faith actors.
They exploited an already vulnerable information environment in which the trust in traditional media appeared to diminish.
Russian hacking groups accessed emails from the Clinton campaign, subsequently sharing them with WikiLeaks, which then released the stolen emails in the lead-up to the November election. This triggered a series of detrimental news cycles for Clinton.
Threat actors were able to achieve their goals simply by exploiting our existing vulnerabilities.
The question now arises: can malicious actors leverage AI models for cyber activities akin to the 2016 Clinton campaign hack?
Joint research by OpenAI and Microsoft Threat Intelligence, published in February of this year, revealed attempts by state-affiliated threat actors to exploit these models and led to the disruption of five such actors.
The bottom line is that while we must not underplay the challenges posed by AI, we must address them with a clear grasp of reality.
This means tackling existing flaws, including shortcomings in Big Tech policies, declining trust in the media, and other risks associated with emerging technologies.
Without addressing these issues within our current ecosystem, we are creating a fertile ground where disinformation and other harmful phenomena will continue to flourish.
Kalim Ahmed is a digital investigator focusing on disinformation and influence operations.
At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
Is generative AI truly making disinformation worse? - Euronews