Media powerhouses push for protection against generative AI

As artificial intelligence advances, newsroom leaders are taking action to protect their content from AI-driven aggregation and disinformation. 

Major media organizations such as The New York Times and NBC News are in talks, through a trade association, with other media firms and big tech platforms to set guidelines for how natural-language AI tools may use their content.

Generative AI, an emerging class of technology, can produce text or images in response to complex queries.

Programs like OpenAI’s ChatGPT and Google’s Bard are trained on vast amounts of publicly available information, including journalism and copyrighted art.
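
For readers unfamiliar with how these tools are used, the sketch below shows what “generating text in response to a complex query” looks like in practice. It is a minimal illustration assuming OpenAI’s official Python client (v1+) and an OPENAI_API_KEY set in the environment; the model name is a placeholder, not a recommendation.

```python
# Minimal sketch: ask a generative model a question via the OpenAI client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute your own
    messages=[
        {"role": "user", "content": "Summarize today's top technology news."}
    ],
)
print(response.choices[0].message.content)
```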

Concerns arise when these programs generate content that resembles or directly lifts from original sources, potentially undermining publishers’ business models.

Publishers fear the practice could also diminish trust in online news and flood the digital space with inaccurate or misleading information.

Digital Content Next, a trade association representing more than 50 major US media organizations, recently released seven principles for the “Development and Governance of Generative AI.”

These principles address critical issues such as safety, compensation for intellectual property, transparency, accountability, and fairness. 

They are meant to start a conversation rather than impose industry-defining rules.

Industry leaders engaged in these dialogues and workstreams recognize the urgency of establishing rules and standards for generative AI.

Generative AI presents both opportunities and threats to the news industry. 

Digital media companies already struggle against dominant platforms like Google and Facebook, and there is now a push for Big Tech to compensate news organizations for the content used to train AI models.

In addition to financial concerns, the news industry recognizes the crucial need to combat the spread of misinformation facilitated by AI. 

Establishing the authenticity of content becomes a significant challenge, and misinformation can cause confusion, panic, and damage to brands. 

Newsrooms and technology companies are working on methods to verify content, including visual verification and embedding information in images that indicates whether they were created with AI.
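
As a rough illustration of the “embedding information in images” approach, the sketch below writes a provenance flag into a PNG’s metadata and reads it back. This is purely illustrative: the field names are hypothetical, and real-world efforts such as the C2PA standard rely on signed, tamper-evident metadata rather than a plain text chunk, which can be trivially stripped.

```python
# Toy provenance sketch using Pillow: tag a PNG as AI-generated, then
# read the tag back as a verifier would.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Tag an image as AI-generated when saving it.
image = Image.new("RGB", (64, 64), color="gray")   # stand-in for model output
metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical field name
metadata.add_text("generator", "example-model-v1")  # hypothetical field name
image.save("output.png", pnginfo=metadata)

# Later, a verifier checks the flag before trusting the image.
checked = Image.open("output.png")
print(checked.text.get("ai_generated", "unknown"))  # -> "true"
```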

The fight against disinformation may lead to an “AI policing AI” scenario, where both media and technology companies invest in software capable of identifying and labeling real versus fake content.
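
To make the “AI policing AI” idea concrete, the toy sketch below trains a text classifier to label content as human-written or AI-generated. It assumes scikit-learn and a handful of hypothetical labeled examples; production detectors are vastly more sophisticated, so treat this as a shape-of-the-solution illustration only.

```python
# Toy "AI policing AI" sketch: a classifier that labels text as
# human-written (0) or AI-generated (1). The tiny training set is
# hypothetical and purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Reporters confirmed the outage with three independent sources.",
    "Witnesses described the scene to our correspondent on the ground.",
    "As an AI language model, I can provide a summary of the event.",
    "In conclusion, it is important to note that the event occurred.",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

sample = "It is important to note that, in conclusion, the outage occurred."
prob_ai = detector.predict_proba([sample])[0][1]
print(f"Estimated probability AI-generated: {prob_ai:.2f}")
```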

While the US government may regulate AI development by Big Tech, the pace of regulation might lag behind the speed at which technology is deployed. 

Media executives anticipate potential challenges and recognize the need for swift solutions through partnerships and digital maturity. 

With AI evolving rapidly, collaboration among industry players and regulatory measures will play a crucial role in navigating the complex landscape of AI in newsrooms.
