Now is the time for news organizations to address AI in their ethics policies

Despite all the unknowns surrounding artificial intelligence, the technology is advancing at an alarming pace, and news organizations are running out of time to address it in their newsroom ethics policies. We can’t afford to be late to this discussion.

How, when and why are we using computer-generated text and media? How are we labeling them? How are we determining what’s created by AI and what’s not? What barriers and disclosures do we need in place? How can we maintain trust with our audience as bad actors create deepfakes and other photo-realistic media using AI?

Those are just a few of the questions modern newsrooms will have to grapple with, and the time to start thinking about them is now. My first wake-up call on this issue came at this year’s ONA conference.

The beloved “opening” session is reserved for the group’s “Annual Tech Trends in Journalism.” Now in its 15th year, the session has passed the torch from convention staple Amy Webb to a new class of presenters. This year’s speaker was Anders Grimstad, head of foresight and emerging interfaces at Schibsted, a Scandinavian media company with what seemed like a brand or product for every industry out there.

The presentation was split between trends around synthetic media/content and spatial experiences. Leaving the spatial metaverse stuff for another day, there was a lot to unpack around AI-created text, video, audio and images.

We’re not talking about the Los Angeles Times’ Quakebot or sports stories built on box scores. Nowadays, AI is winning art competitions, acing standardized tests and writing complete blog entries, according to Grimstad.

That last example included a product called Copysmith. You can learn more about it at copysmith.ai, but this marketing line should get a rise out of the journalists in the room: “What could your team achieve spending less time writing and more time launching?”

Think about the ethics around that sentence for a moment. What happens when the journalists are software and the editors are “prompt designers,” a potential job title Grimstad floated for people who write prompts for AI to complete?

I’ll pause here briefly for a slideshow featuring AI-generated images for the phrase “journalists complain about artificial intelligence.”

  • This computer-generated image was created by DALL·E 2 with the prompt "journalists complain about artificial intelligence" on Oct. 6, 2022.

My second wake-up call came last night, when I saw a tweet from Kevin Roose of The New York Times. Referencing his piece last month on the rapid rise of AI, he noted that things are accelerating: “Cannot really emphasize enough how fast AI is moving. There have been several major releases *in the month since I wrote a column about how fast AI is moving*, including OpenAI’s Whisper (speech-to-text transcription) and now text-to-video.”

Here’s the tweet. Here’s the original piece.

The point of all these wake-up calls (how many times can I wake up?) is that the time to address AI and its implications for news is now. How’s that for advice nobody asked for?

We don’t need to know every technology, feature and job title in the waves coming for journalism, but we do need to prepare for them. In one of his final slides, Grimstad laid out an optimistic future for news:

Anders Grimstad presents at the 2022 ONA conference in Los Angeles on Sept. 22, 2022. His optimistic scenario outcomes are listed as "editorial teams help society differentiate between reliable and unreliable information," "newsrooms go where their customers are headed, launching services in the metaverse," "media companies lead in transparency, adopting Web3 tools to track provenance," "using AI tools allows newsrooms to seek out and cater to niche & minority audiences," and "ethical exploration and explanation of new tech is led by media product teams." (ONA)

That’s actually not too bad compared to the typical dystopia-mongering around AI. But again, for the rosy future to happen (I’ll spare you the pessimistic slide), the work has to happen now.

After all these wake-up calls, I hope newsrooms out there are revising their ethics policies. It’s time to ask tough questions about how we use (or don’t use) AI and how we’ll disclose that. We can’t afford to keep hitting the snooze button.

Also, here is this gem:

This computer-generated image was created by DALL·E 2 with the prompt "Olsen Ebright complains about artificial intelligence" on Oct. 6, 2022.