I think that’s not the problem that this technology is intended to solve.
It’s not a “Is this picture copied from someone else?” technology. It’s a “Did a human take this picture, and did anyone modify it?” technology.
E.g.: Photographer Bob takes a picture of Famous Fiona driving her Camaro and posts it online with this metadata. Attacker Andy uses photo editing tools to make it look like Fiona just ran over a child. Maybe his skills are so good that the edits are undetectable.
Andy has two choices: Strip the metadata, or keep it.
If Andy keeps the metadata, anyone looking at his image can see that it was originally taken by Bob, and that Fiona never ran over a child.
If Andy strips the metadata (and if this technology is widely accessible and accepted by social media, news sites, and everyday people) then anyone looking at the image can say “You can’t prove this image was actually taken. Without further evidence I must assume that it’s faked”.
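To make that concrete, here’s a toy sketch of the viewer-side logic. To be clear, this is not the real manifest format; I’m modeling provenance as a bare Ed25519 signature over the image bytes (Python, using the cryptography package), purely to illustrate the three outcomes:

    from dataclasses import dataclass
    from typing import Optional

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    @dataclass
    class Manifest:
        signer_name: str              # e.g. "Photographer Bob"
        signature: bytes              # signature over the raw image bytes
        public_key: Ed25519PublicKey  # the real spec chains this to a certificate

    def assess(image_bytes: bytes, manifest: Optional[Manifest]) -> str:
        if manifest is None:
            # Andy stripped the metadata: nothing is proven either way.
            return "unverified: without further evidence, assume it could be faked"
        try:
            manifest.public_key.verify(manifest.signature, image_bytes)
        except InvalidSignature:
            # Metadata kept, but the pixels changed after signing.
            return "tampered: pixels no longer match what the signer captured"
        return "verified: originally captured by " + manifest.signer_name

The point is that all three branches are useful to a viewer, and the “unverified” branch only has teeth once enough people expect the metadata to be there.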
I think spinning this as a tool to fight AI is just clickbait because AI is hot in the news. It’s about provenance and limiting misinformation.
Which does not solve that at all
Because the vast majority of “paparazzi” and controversy pictures aren’t taken by Jake Gyllenhaal. They are taken by randos on the street with phones who then sell their picture to TMZ or whatever.
And they aren’t going to be paying for an expensive Leica camera. And Samsung and Apple aren’t going to be licensing that tech.
There’s no accounting for adoption, true. Seems like the use cases still have value though: https://c2pa.org/specifications/specifications/1.3/explainer/Explainer.html#_use_case_examples
As for licensing, the specs are released under Creative Commons, so anyone should be able to implement it.
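For what it’s worth, the capture side of an implementation is conceptually tiny. A hand-wavy sketch, with the caveat that a real camera would sign with a key provisioned into a secure element (plus a certificate chain identifying the device) rather than a freshly generated one:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def sign_capture(image_bytes: bytes) -> tuple[bytes, bytes]:
        # Generated here only to keep the sketch self-contained; in hardware
        # this key would be provisioned at manufacture time.
        key = Ed25519PrivateKey.generate()
        signature = key.sign(image_bytes)
        public_key = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        return signature, public_key  # embedded alongside the image as metadata

The hard parts are key provisioning and ecosystem trust, not the cryptography.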
People can write whatever they want
5.1, 5.2, 5.3, 5.5, and 5.6 all require basically universal adoption for this to be at all useful. And 5.4 and 5.7 (as well as many of the rest) already fall apart once you realize this is metadata that people have to opt in to keeping. 5.4 in particular feels prone to breaking if a video is edited for flow or to remove sensitive information.
Much like “The Blockchain” and NFTs, this sort of touches on an issue but is a horrendously bad and pointless implementation.
I don’t quite get why some of those cases require universal adoption. News photos: You just need one big news company to say “we’re giving all our photographers a camera with this tech” and then it serves its purpose.
If you see a headline like “SHOCKING photo published by MegaNewsCorp will send you into a coma!”, you can validate that it came from a MegaNewsCorp photographer. If you trust MegaNewsCorp, then the tech has done its job. If you didn’t trust MegaNewsCorp already, then this tech changes nothing. I think there is moderate value in that, overall.
The story of this tech is getting picked up and thrown around by bad tech journalism, being game-of-telephone’d into some kind of game changer.
Plenty of open standards live and die by whether or not one big player decides to adopt them.
… I literally just explained that a lot of those photos are crowdsourced. Which gets back to needing more or less universal adoption. And even then: Maybe I’ll give CNN a picture of a Republican beating a child if I can strip the metadata. I am not giving them that if it is going to trace back to me.
So then news orgs who care about provenance have to stop copying social media posts and treating them like well-researched journalism. Seems like a win to me.
Sure. That is what is happening…
You gave an example of TMZ sourcing photos from randos, but they’re likely not the target customer for this tech. If they cared about integrity they wouldn’t be reporting celebrity gossip.
For news companies posting syndicated images, those come from a cadre of photographers who are most likely to own the newest, most expensive cameras. Surely it’s not inconceivable that as this tech rolls out, Agence France-Presse, Getty, or AP could require all photos submitted to them to carry this metadata, thus passing the benefits along to any news agency using their images.
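And passing benefits down a syndication pipeline is exactly what a provenance chain is for: each handler (photographer, wire agency, newsroom) appends a signed link over the current bytes plus the previous link. The sketch below is loosely inspired by C2PA’s ingredient idea, but the structures are invented for illustration:

    import hashlib
    from dataclasses import dataclass

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    @dataclass
    class Link:
        actor: str             # e.g. "Bob" -> "AP" -> "MegaNewsCorp"
        content_hash: bytes    # hash of the asset as this actor released it
        prev_hash: bytes       # digest of the previous link (b"" at capture)
        signature: bytes
        public_key: Ed25519PublicKey

    def link_digest(content_hash: bytes, prev_hash: bytes) -> bytes:
        return hashlib.sha256(content_hash + prev_hash).digest()

    def append_link(chain, asset: bytes, actor: str, key: Ed25519PrivateKey):
        content_hash = hashlib.sha256(asset).digest()
        prev = link_digest(chain[-1].content_hash, chain[-1].prev_hash) if chain else b""
        sig = key.sign(link_digest(content_hash, prev))
        return chain + [Link(actor, content_hash, prev, sig, key.public_key())]

    def verify_chain(chain, final_asset: bytes) -> bool:
        prev = b""
        for link in chain:
            if link.prev_hash != prev:
                return False
            digest = link_digest(link.content_hash, link.prev_hash)
            try:
                link.public_key.verify(link.signature, digest)
            except InvalidSignature:
                return False
            prev = digest
        return bool(chain) and chain[-1].content_hash == hashlib.sha256(final_asset).digest()

A chain that verifies tells you who touched the asset and that the bytes you hold are what the last actor released; it still doesn’t tell you whether to trust any of those actors.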
If you’re talking about photos sourced from everyday people, then yes: they won’t have this technology in the short term, maybe not ever. Then again, I don’t get my news from TMZ.
I think blockchain is dumb because it fails to achieve its stated goals while also harming society. I think this is a system with marginal use cases and minimal licensing overhead to integrate into future cameras, so overall my take is “not dumb” and “probably useful”.