Google joins coalition to label AI-generated content
Published Date: 2/8/2024
Source: axios.com

Google is joining Microsoft, Meta and Adobe in supporting a standard for labeling media that can describe who created an image or video, when and how it was created, and the credibility of its source, the company announced today.

Why it matters: With Android holding roughly 70% of the global smartphone market and YouTube counting 2.5 billion users, Google's move gives critical mass to the industry's effort to label AI-generated content and combat misinformation.

What's happening: Google is the latest major company to join the Coalition for Content Provenance and Authenticity (C2PA) and to serve on its steering committee.

  • Google sees the C2PA standard as complementary to its existing AI information efforts: Google DeepMind's SynthID, the "About this Image" search tool and YouTube's labels for altered or synthetic content.

The big picture: The ultimate goal of C2PA is to allow "content credentials" to be applied from the moment an image is captured or otherwise created.

  • Meta announced Tuesday that it's building new tools to identify C2PA metadata in images uploaded to Facebook, Instagram and Threads so that labels are automatically applied to AI-generated images on the platforms (a rough sketch of reading such metadata follows this list).
  • OpenAI announced this week that it will add C2PA metadata to images created with ChatGPT and the API for the DALL-E 3 model.
  • Leica added the ability to embed C2PA authentication metadata directly in its camera hardware last year, Axios' Ina Fried reported.
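For readers curious about what those content credentials look like in practice, here is a minimal sketch of inspecting them in Python. It assumes the open-source c2patool command-line utility from the C2PA/Content Authenticity Initiative projects is installed and, in its default invocation, prints a file's embedded manifest as JSON; the file name is hypothetical, and platforms like Meta's use their own ingestion pipelines rather than this exact tool.

    # read_credentials.py -- rough sketch, not any platform's actual pipeline
    import json
    import subprocess

    def read_content_credentials(path):
        """Return the C2PA manifest embedded in `path`, or None if there is none.

        Shells out to c2patool, which by default prints the manifest store
        of the given file as JSON (assumption: c2patool is on the PATH).
        """
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
        if result.returncode != 0 or not result.stdout.strip():
            return None
        return json.loads(result.stdout)

    if __name__ == "__main__":
        manifest = read_content_credentials("example.jpg")  # hypothetical file name
        if manifest is None:
            print("No content credentials found.")
        else:
            print(json.dumps(manifest, indent=2))

A real pipeline would also verify the manifest's signatures rather than merely displaying them, but the sketch shows the basic shape of the data the standard attaches to a file.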

Catch up quick: The C2PA effort emerged from an Adobe-led industry coalition called the Content Authenticity Initiative.

  • The firms believe the best way to combat misinformation is to provide consumers with context about how content has been created and edited.
  • The White House also urged "watermarking" of images in its AI Executive Order.

Between the lines: Google hopes that a seat at the table will let it shape the evolution of C2PA, and that its involvement will prompt others to sign on, creating the near ubiquity needed to make the standard effective.

  • Bad actors are unlikely to self-label AI content with any type of metadata, which means that enough trustworthy content must be labeled to make the unlabeled material stand out.

What they're saying: Google is still working out implementation details, but "over time there could be some differences" in how users experience Google products, Laurie Richardson, VP of trust and safety at Google, told Axios.

  • "We want to understand where content comes from and if it's been edited, for it to be resilient to tampering, and for it to be interoperable," Richardson said, to help users make "informed decisions."
  • "Google's membership is an important validation for the C2PA's approach," says Andrew Jenks, C2PA chair and a media provenance specialist at Microsoft.

The other side: C2PA critics argue that its labels can be tampered with.

  • Matt Medved, founder of NowMedia, tells Axios that C2PA "relies on embedding provenance data within the metadata of digital files, which can easily be stripped or swapped by bad actors," arguing that only "blockchain's immutable ledger" can give true confidence in content provenance.
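Medved's concern is easy to illustrate: any step that rewrites an image without deliberately carrying its metadata along, such as a screenshot, a crop or a service that re-compresses uploads, drops the embedded provenance. The Python sketch below (file names hypothetical, Pillow assumed to be installed) re-encodes a JPEG from its pixels alone, producing a copy with no embedded metadata at all.

    # strip_metadata.py -- illustrates how easily embedded provenance is lost
    from PIL import Image

    def reencode_without_metadata(src, dst):
        """Re-save an image from its pixel data only.

        Building a fresh image from raw pixels means no EXIF, XMP or C2PA
        (JUMBF) segments from the original are written to the output file.
        """
        with Image.open(src) as im:
            clean = Image.frombytes(im.mode, im.size, im.tobytes())  # pixels only
            clean.save(dst, format="JPEG", quality=95)

    if __name__ == "__main__":
        reencode_without_metadata("credentialed.jpg", "stripped.jpg")  # hypothetical names

C2PA's counterargument, noted above, is that absence of credentials then becomes the signal, provided enough trustworthy content is labeled.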

The bottom line: Google's move means C2PA might win the AI labeling battle, but it still risks losing the war.

  • Those most likely to share misinformation are often the least likely to take notice of information labels, let alone metadata.
  • C2PA isn't going to stop repeats of the fake Taylor Swift porn images that flooded social media this month, because it can't compel image generation tools to comply.

This story has been corrected to reflect that Meta is building new tools to identify C2PA metadata in images uploaded to Threads (not WhatsApp), Facebook and Instagram.