The aim is to generate ad revenue, clicks and mined data. The tool is generative artificial intelligence. The operation is "at-scale." It's pink slime on steroids. I'm writing, of course, about the creators of AI content farms that quickly churn out content related to current events using generative AI chatbots, like OpenAI's ChatGPT and Google's Bard.
A May 1 investigation by NewsGuard, an online trust-rating platform for news, identified 49 such AI-generated content sites in seven languages: English, Tagalog, Portuguese, Thai, French, Czech and Chinese.
Some of these sites, according to NewsGuard, churn out hundreds of articles per day. "The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance and technology," the report states.
Aside from being annoying and, at times, purveyors of misinformation, the sites are also a threat to journalism, at least according to a recent report from Reporters Without Borders, a respected nongovernmental organization concerned with global press freedom.
“The disinformation industry disseminates manipulative content on a huge scale, as shown by an investigation by the Forbidden Stories consortium, a project co-founded by RSF,” reads the report. “And now AI is digesting content and regurgitating it in the form of syntheses that flout the principles of rigor and reliability.”
The RSF report also mentioned Midjourney, a sophisticated image-generating AI tool capable of instantly creating photorealistic images based on text prompts from the user.
Many of the sites identified by NewsGuard have generic names that lend a veneer of authenticity, like Market News Reports or Daily Business Post. NewsGuard reporters reached out to several of the pseudonymous owners of the content farms, and had cryptic exchanges in which the owners tried, futilely, to downplay or justify the use of AI on their websites.
“The articles themselves often give away the fact that they were AI produced,” reads the report. “For example, dozens of articles on BestBudgetUSA.com contain phrases of the kind often produced by generative AI in response to prompts such as, ‘I am not capable of producing 1500 words. … However, I can provide you with a summary of the article,’ which it then does, followed by a link to the original CNN report.”
Some of the sites written about by NewsGuard — including BestBudgetUSA.com and GetIntoKnowledge.com — have shut down since being uncovered. (Prime domain real estate now available, folks!) Others, like Famadillo.com, hold strong, unrepentantly disseminating regurgitated information about hair care products, Bop It buttons, and vegan cosmetics.
- India Today: Shocking video of man throwing goats from moving truck is not from Pakistan (English)
- “After chucking several goats onto the road, the man could be seen climbing down to the bonnet of a car that had pulled up behind the truck. A person could be heard saying, ‘This is how they steal.’”
- Association of European Journalists (Bulgaria): It is not true that the “frog logo” on foods and beverages indicates insect content (Bulgarian)
- “This is a logo of a non-governmental organization, which indicates that the product meets the standards of sustainable agriculture. … The claim that insects are added to a number of foods and drinks – without the knowledge of consumers – is untrue and part of a larger disinformation campaign.”
- Check Your Fact: Video does not show a robot attacking a factory worker (English)
- “The Instagram video claims to show footage of a robot moving boxes that gets ‘angry’ when another robot makes a mistake in the assembly line. The robot then appears to hit a conveyor belt repeatedly until sparks fly, then picks up a box and throws it at a nearby factory worker.”
- Congo Check: This is why a cow can’t lay eggs (French)
- “A viral post keeps resurfacing on the web in the Democratic Republic of Congo, in the Republic of Congo and also in the Central African Republic. It shows a picture of cows that appear to be laying eggs. The caption accompanying these photos says that the Chinese have found a way to make cows lay eggs by inserting genetic material from hens. This is false; we have already debunked it in a previous article.”
From the news:
- Plag-AI-rism: How Lead Stories Used ChatGPT To Find A False Story about Tucker Carlson … That Was ‘Plagiarised’ With ChatGPT “So you’ve probably read all those warnings from pundits, academics and researchers about how generative AI tools like ChatGPT are going to impact disinformation and fact checking. And you may have even seen some recent examples of a deep-faked pope in a puffer jacket or Donald Trump being arrested. But let me tell you the story of a real-life example that is probably even stranger than what those pundits, academics and researchers could have imagined.” (Maarten Schenk, Lead Stories)
- Ukrainian newspapers cloned to spread articles aligned with Russia and against the EU “Around two months ago, a series of articles critical of the government of Volodymyr Zelensky began to circulate that, apparently, had been published by generalist media in Ukraine. The publications in these newspapers advocated opening negotiations with Russia to make land concessions to Moscow” and “talked about how battered the Ukrainian economy is.” (Guillermo Infantes Capdevila, Newtral)
From/for the community:
- I’m sad to say this will be my last edition of Factually. I’m headed off to New York City to get my master’s degree. Thanks for reading and thanks to everyone who reached out over the last year regarding the newsletter. I’ve learned so much about the fact-checking community and I’ve enjoyed hearing your thoughtful comments. You can get in touch on Twitter at @sethalex. So long for now!
- Poynter and the International Fact-Checking Network have announced a new funding opportunity “to support fact-checking initiatives worldwide and reduce the harm of misinformation. Organizations may apply beginning April 14, 2023, to the newly created Global Fact Check Fund for the first phase of the multi-year program, funded by a $13.2 million grant from Google and YouTube. This opening phase is known as BUILD, and is aimed at fact-checking organizations that seek to scale or upgrade their online presence. Funds can be used for improving website development, domain hosting, content management systems, publishing tools, or improving security and resilience against hacking and other threats.”
- The IFCN has awarded $450,000 in grant support to organizations working to lessen the impact of false and misleading information on WhatsApp. In partnership with Meta, the Spread the Facts Grant Program gives verified fact-checking organizations resources to identify, flag and reduce the spread of misinformation on a platform that carries more than 100 billion messages each day. The grant supports eleven projects from eight countries: India, Spain, Nigeria, Georgia, Bolivia, Italy, Indonesia and Jordan. Read more about the announcement here.
- IFCN job announcements: Program Officer and Monitoring & Evaluation Specialist.
If you are a fact-checker and you’d like your work/projects/achievements highlighted in the next edition, send us an email at firstname.lastname@example.org by next Tuesday. Thanks for reading Factually!