Today, an organization called the Coalition for Content Provenance and Authenticity released a specification for content provenance; as is so often the case, I’m indebted to Gary Price for bringing this to my attention. This specification is interesting for several reasons. It comes out of the content industries and related tool builders such as Adobe; it’s framed in part as a response to the spread of deepfakes, and as a way of preventing modification or corruption of published material. The fundamental ideas here are not new, and there’s a large literature on provenance metadata as well as on chains of trust in metadata.
Let me also be clear about what the specification does and does not do, at least as I understand it based on a fairly quick reading. There is no magic here that ensures, for example, that a video clip is a recording of something that actually happened. What this work can do is allow an entity (such as a news bureau) to certify that it has “published” (and hence presumably vetted and is vouching for) a video clip, and give you a way to verify that you have a true copy of what that entity published.
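To make the distinction concrete, here is a minimal sketch of that publish-and-verify flow. This is not the C2PA mechanism itself (which uses signed manifests and X.509 certificate chains); it is a hypothetical illustration using a shared-secret HMAC from the Python standard library, where `publish` and `verify` and the demo key are all invented names for the sake of the example.

```python
import hashlib
import hmac

# Hypothetical stand-in for the publisher's signing key; a real system
# would use public-key certificates, not a shared secret.
PUBLISHER_KEY = b"demo-secret"

def publish(clip: bytes) -> dict:
    """Return a provenance record binding the clip's hash to the publisher."""
    digest = hashlib.sha256(clip).hexdigest()
    signature = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(clip: bytes, record: dict) -> bool:
    """Check integrity (the hash matches) and origin (the signature checks out)."""
    digest = hashlib.sha256(clip).hexdigest()
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

record = publish(b"original video bytes")
print(verify(b"original video bytes", record))   # True: this is the published copy
print(verify(b"tampered video bytes", record))   # False: the copy was altered
```

Note what the sketch does and does not establish: a passing check tells you the bytes are what the publisher certified, but says nothing about whether the recording depicts a real event.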
For more details see this announcement, which includes links to the relevant technical documents and also to the recording of a webinar that took place yesterday (follow the links to register for access to the recording).
https://c2pa.org/post/release_1_pr/
I’d be interested in hearing from organizations such as libraries or archives that are thinking about making use of this work.
Clifford Lynch
Director, CNI