Newsroom leaders are preparing for chaos as they consider guardrails to protect their content against artificial intelligence-driven aggregation and disinformation.
The New York Times and NBC News are among the organizations holding preliminary talks with other media companies, large technology platforms and Digital Content Next, the industry’s digital news trade organization, to develop rules around how their content can be used by natural language artificial intelligence tools, according to people familiar with the matter.
The latest trend, generative AI, can create seemingly novel blocks of text or images in response to complex queries such as “Write an earnings report in the style of poet Robert Frost” or “Draw a picture of the iPhone as rendered by Vincent van Gogh.”
Some of these generative AI programs, such as OpenAI’s ChatGPT and Google’s Bard, are trained on large amounts of publicly available information from the internet, including journalism and copyrighted art. In some cases, the generated material is lifted almost verbatim from these sources.
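As a rough illustration of how such verbatim lifting can be measured, the sketch below compares word n-grams between a generated passage and a source text. The function names and sample strings are invented for this example, and real plagiarism-detection systems are considerably more sophisticated.

```python
# Minimal sketch: flag near-verbatim overlap between a model's output and a
# source article by comparing word n-grams. The names and sample texts here
# are illustrative, not any publisher's or AI vendor's actual tooling.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, source: str, n: int = 8) -> float:
    """Fraction of the generated text's n-grams that also appear in the source."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

if __name__ == "__main__":
    article = ("The central bank raised interest rates by a quarter point "
               "on Wednesday, citing persistent inflation.")
    output = ("The central bank raised interest rates by a quarter point "
              "on Wednesday, a move analysts expected.")
    print(f"8-gram overlap: {overlap_ratio(output, article):.0%}")
```

A high overlap ratio on long n-grams is a strong hint of copying; short n-grams match too often in ordinary prose to be meaningful on their own.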
Publishers fear these programs could undermine their business models by publishing repurposed content without credit and creating an explosion of inaccurate or misleading content, decreasing trust in news online.
Digital Content Next, which represents more than 50 of the largest U.S. media organizations, including The Washington Post and The Wall Street Journal parent News Corp., this week published seven principles for “Development and Governance of Generative AI.” They address issues around safety, compensation for intellectual property, transparency, accountability and fairness.
The principles are meant as an avenue for future discussion rather than industry-defining rules. They include: “Publishers are entitled to negotiate for and receive fair compensation for use of their IP” and “Deployers of GAI systems should be held accountable for system outputs.” Digital Content Next shared the principles with its board and relevant committees Monday.
News outlets contend with AI
Digital Content Next’s “Principles for Development and Governance of Generative AI”:
- Developers and deployers of GAI must respect creators’ rights to their content.
- Publishers are entitled to negotiate for and receive fair compensation for use of their IP.
- Copyright laws protect content creators from the unlicensed use of their content.
- GAI systems should be transparent to publishers and users.
- Deployers of GAI systems should be held accountable for system outputs.
- GAI systems should not create, or risk creating, unfair market or competition outcomes.
- GAI systems should be safe and address privacy risks.
The urgency behind building a system of rules and standards for generative AI is intense, said Jason Kint, CEO of Digital Content Next.
“I’ve never seen anything move from emerging issue to dominating so many workstreams in my time as CEO,” said Kint, who has led Digital Content Next since 2014. “We’ve had 15 meetings since February. Everyone is leaning in across all types of media.”
How generative AI will unfold in the coming months and years is dominating media conversation, said Axios CEO Jim VandeHei.
“Four months ago, I wasn’t thinking or talking about AI. Now, it’s all we talk about,” VandeHei said. “If you own a company and AI isn’t something you’re obsessed about, you’re nuts.”
Lessons from the past
Generative AI presents both potential efficiencies and threats to the news business. The technology can create new content, such as games, travel lists and recipes, that provides consumer benefits and helps cut costs.
But the media industry is equally concerned about threats from AI. Digital media companies have seen their business models flounder in recent years as social media and search firms, primarily Google and Facebook, reaped the rewards of digital advertising. Vice declared bankruptcy last month, and shares of news site BuzzFeed have traded under $1 for more than 30 days, earning the company a delisting notice from the Nasdaq Stock Market.
Against that backdrop, media leaders such as IAC Chairman Barry Diller and News Corp. CEO Robert Thomson are pushing Big Tech companies to pay for any content they use to train AI models.
“I am still astounded that so many media companies, some of them now fatally holed beneath the waterline, were reluctant to advocate for their journalism or for the reform of an obviously dysfunctional digital ad market,” Thomson said during his opening remarks at the International News Media Association’s World Congress of News Media in New York on May 25.
During an April Semafor conference in New York, Diller said the news industry has to band together to demand payment, or threaten to sue under copyright law, sooner rather than later.
“What you have to do is get the industry to say you cannot scrape our content until you work out systems where the publisher gets some avenue towards payment,” Diller said. “If you actually take those [AI] systems, and you don’t connect them to a process where there’s some way of getting compensated for it, all will be lost.”
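In practice, the main mechanism publishers already have for signaling scraping permissions is a robots.txt file. The sketch below, using Python’s standard urllib.robotparser, checks whether a given crawler is allowed to fetch a page. The publisher domain and rules are hypothetical, though CCBot is the real user agent of Common Crawl, whose corpus some AI training sets draw on.

```python
# Sketch: check whether a crawler user-agent may fetch a page under a site's
# robots.txt rules. The domain and rules are hypothetical; CCBot is Common
# Crawl's actual crawler user agent.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("CCBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example-publisher.com/news/story")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Robots.txt is only a request, not an enforcement mechanism, which is why executives like Diller frame the issue in terms of copyright law rather than crawler etiquette.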
Fighting disinformation
Beyond balance-sheet concerns, the most pressing AI issue for news organizations is helping audiences distinguish what’s real from what isn’t.
“Broadly speaking, I’m optimistic about this as a technology for us, with the big caveat that the technology poses huge risks for journalism when it comes to verifying content authenticity,” said Chris Berend, the head of digital at NBC News Group, who added he expects AI will work alongside human beings in the newsroom rather than replace them.
There are already signs of AI’s potential for spreading misinformation. Last month, a verified Twitter account called “Bloomberg Feed” tweeted a fake photograph of an explosion at the Pentagon outside Washington, D.C. Though the photo was quickly debunked, it led to a brief dip in stock prices. More advanced fakes could create even more confusion and cause unnecessary panic. They could also damage brands: despite its name, “Bloomberg Feed” had nothing to do with the media company Bloomberg LP.
“It’s the beginning of what is going to be a hellfire,” VandeHei said. “This country is going to see a mass proliferation of mass garbage. Is this real or is this not real? Add this to a society already thinking about what is real or not real.”
The U.S. government may regulate Big Tech’s development of AI, but the pace of regulation will probably lag the speed with which the technology is used, VandeHei said.
Technology companies and newsrooms are working to combat potentially destructive AI fakes, such as a recent fabricated image of Pope Francis wearing a large puffer coat. Google said last month it will embed information in images that allows users to determine whether they were made with AI.
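Google has not published the full details of its mechanism, but one existing standard for labeling synthetic images is the IPTC “digital source type” field, whose value “trainedAlgorithmicMedia” denotes AI-generated media. The sketch below simply scans a file’s bytes for that marker; it is a crude heuristic under that assumption, not Google’s announced method, and such metadata can be stripped or forged.

```python
# Sketch: look for the IPTC "digital source type" label that some AI image
# generators embed in metadata to mark synthetic images. A byte scan is a
# crude heuristic, not Google's announced mechanism, and the label can be
# stripped or faked.
from pathlib import Path

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType value for AI media

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-generation label."""
    return AI_MARKER in Path(image_path).read_bytes()

if __name__ == "__main__":
    print(looks_ai_labeled("photo.jpg"))  # hypothetical local file
```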
Disney’s ABC News “already has a team working around the clock, checking the veracity of online video,” said Chris Looft, coordinating producer, visual verification, at ABC News.
“Even with AI tools or generative AI models that work in text, like ChatGPT, it doesn’t change the fact we’re already doing this work,” said Looft. “The process remains the same: to combine reporting with visual techniques to confirm veracity of video. This means picking up the phone and talking to eyewitnesses or analyzing metadata.”
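As a small taste of the metadata analysis Looft describes, the sketch below reads EXIF fields (camera model, capture time and so on) from an image using the Pillow library. The filename is hypothetical, and EXIF data can be absent or forged, so in practice it is only one signal among many.

```python
# Sketch: pull basic EXIF fields from a photo, one small piece of the
# metadata analysis verification teams describe. Requires Pillow
# (pip install Pillow); the filename is hypothetical, and EXIF data can
# be absent, stripped, or forged.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(image_path: str) -> dict:
    """Return EXIF tags as a {name: value} dict (empty if no EXIF present)."""
    exif = Image.open(image_path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    for name, value in exif_summary("eyewitness_frame.jpg").items():
        print(f"{name}: {value}")
```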
Ironically, one of the earliest uses of AI taking over for human labor in the newsroom could be fighting AI itself. NBC News’ Berend predicts there will be an arms race in the coming years of “AI policing AI,” as both media and technology companies invest in software that can properly sort and label the real from the fake.
“The fight against disinformation is one of computing power,” Berend said. “One of the central challenges when it comes to content verification is a technological one. It’s such a big challenge that it has to be done through partnership.”
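As a toy illustration of what “AI policing AI” could look like, the sketch below trains a tiny text classifier to separate human-written from machine-generated snippets using scikit-learn. The training examples are invented, and reliably detecting machine-generated text at scale remains an open research problem; production systems are vastly larger.

```python
# Toy illustration of "AI policing AI": a small classifier that scores text
# as human-written vs. machine-generated. The labeled examples are made up,
# and real detectors are far larger and still imperfect. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = machine-generated, 0 = human-written.
texts = [
    "In conclusion, it is important to note that the topic is multifaceted.",
    "Overall, there are many factors to consider when considering the factors.",
    "She ducked under the police tape, notebook already half full.",
    "The mayor hung up twice before agreeing to talk.",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

sample = "It is important to note that many factors are multifaceted."
print("p(machine-generated) =", detector.predict_proba([sample])[0][1])
```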
The confluence of rapidly evolving, powerful technology, input from dozens of significant companies and prospective U.S. government regulation has led some media executives to privately acknowledge the coming months may be very messy. The hope is that today’s more mature digital industry can reach solutions more quickly than it did in the earlier days of the internet.
Disclosure: NBCUniversal is the parent company of the NBC News Group, which includes both NBC News and CNBC.