
I genuinely feel that in this AI world we need the inverse. That every analogue or digital photo taken by traditional means of photography will need to be signed by a certificate, so anyone can verify its authenticity.

This already exists: https://c2pa.org , https://en.wikipedia.org/wiki/Content_Authenticity_Initiativ... . Support by camera makers is spotty.


Doesn't this require a paid certificate? That effectively blocks open-source software and hardware from implementing it.

And how do you fix the analog hole? Because if you can point your "verified" camera at a sufficiently high-resolution screen, we're worse off than when we started.

There are some techniques to detect recapture, e.g. Moiré patterns, glare, JPEG grid artifacts, channel phase shift, screen emission, and chromatic aberration. Combined, these raise the effort and cost of faking a photo significantly.
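As a feel for the first of those, here is a toy sketch of Moiré detection: a screen's pixel grid leaves sharp periodic peaks in an image's frequency spectrum that natural scenes lack. The threshold radius and synthetic test image below are illustrative assumptions, not a production detector.

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Crude recapture heuristic: a screen's pixel grid leaves periodic
    high-frequency peaks in the spectrum; natural scenes don't.
    Returns the strongest high-frequency peak over the mean high-frequency
    energy (higher suggests a recaptured screen)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Mask out the low-frequency centre, where natural image energy lives.
    y, x = np.ogrid[:h, :w]
    high = np.sqrt((y - cy) ** 2 + (x - cx) ** 2) > min(h, w) // 8
    hf = spectrum[high]
    return float(hf.max() / (hf.mean() + 1e-9))

# A synthetic "recaptured" image: noise scene plus a fine 4-pixel grid.
rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.1, (256, 256))
grid = 0.2 * np.cos(2 * np.pi * np.arange(256) / 4)
recaptured = scene + grid[None, :] + grid[:, None]

assert moire_score(recaptured) > moire_score(scene)
```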

Yes, I’m more worried about the false confidence such technology could create. Implement an authenticity mechanism and it will be treated as truth. Powerful people will have the means to spoof photographic evidence.

You can have other sensors that tell you it's a screen, maybe require a Live Photo, maybe also upload to a third-party service faster than generation is possible? In the end I think we'd end up where we are with cryptography: generating a convincing fake might be theoretically possible, but it could be made prohibitively expensive.

Depth sensor information.

Or just extract the certificate from the hardware you own.

That is presumably a very expensive endeavor. We already have hardware that attempts to mitigate this, and while I think it's possible for a government, it's certainly not trivial.

This is a "solved" problem. Vendors whose keys are extractable get their licenses revoked. The verifier checks the certificate against a CRL.
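A minimal sketch of that revocation flow (the vendor serials and CRL here are made up; real validators check X.509 CRLs or OCSP, and verify the signature itself, which is elided here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    vendor_serial: str  # serial of the vendor certificate that signed it
    payload: bytes      # the signed manifest (signature check elided)

# Hypothetical revocation list: serials of vendors whose keys leaked.
crl = {"VENDOR-0042"}

def verify(claim: Claim) -> bool:
    """A claim from a revoked vendor certificate is rejected outright,
    even if its signature would otherwise validate."""
    return claim.vendor_serial not in crl

assert verify(Claim("VENDOR-0007", b"...")) is True
assert verify(Claim("VENDOR-0042", b"...")) is False
```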

I'm sure Apple would love that too. More seriously, would that also mean all editing tools would need to re-sign a photo that was previously signed by the original sensor? How do we distinguish an edit that's misleading from one that just changes levels? It's an interesting area for sure, but this inverse approach seems much trickier.

CAI’s Content Credential standard accommodates what you suggest, as far as re-signing/provenance, with a chain kind of approach. It supports embedding “ingredient thumbnails” in an image’s manifest, and/or the image’s manifest can embed or link back to source images that are in turn also signed [2].
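A sketch of that chain-of-ingredients idea (the field names below are invented for illustration, not the actual C2PA manifest schema):

```python
import hashlib, json

def manifest(image: bytes, ingredients: list) -> dict:
    """Each derived image's manifest carries hashes of its source
    manifests, so a viewer can walk the chain back to the captures."""
    return {
        "image_sha256": hashlib.sha256(image).hexdigest(),
        "ingredients": [
            hashlib.sha256(json.dumps(m, sort_keys=True).encode()).hexdigest()
            for m in ingredients
        ],
    }

raw = manifest(b"raw-capture", [])          # camera original, no parents
edit = manifest(b"levels-adjusted", [raw])  # links back to the capture

assert edit["ingredients"][0] == hashlib.sha256(
    json.dumps(raw, sort_keys=True).encode()).hexdigest()
```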

It feels like the approach assumes a media environment where a professional wants to provably “show their work,” where authenticity adds value to a skeptical audience.

In that spirit, then, I understand CAI’s intention [0] to be to vest that judgment with the creator, and ultimately the viewer: if my purpose is to prove myself, I’d want to show enough links in the chain that the viewer checking my work can say “oh I see how A relates to B, to C,” and so on. If I don’t want to prove myself, well… then I won’t.

I don’t know Adobe’s implementation well enough to know how often they save a CC manifest, and their beta is vague in just referring to “editing history.” [1] I get the impression that they’re still dialing in the right level of detail to capture by default. Maybe even just “came from Firefly” and “Photoshop wuz here.”

But if I want to prove this Nikon Z9 recorded these pixels at this time and place, or “I am the BBC and yes I published this,” or “only the flying monkey was GenAI, the rest was real” I could conceivably put together a toolchain (independently of Adobe) to prove it in more detail.

[0] https://spec.c2pa.org/specifications/specifications/2.2/spec...

[1] https://opensource.contentauthenticity.org/docs/manifest/und...

[2] https://opensource.contentauthenticity.org/docs/c2patool/doc...


You'd have to provide both images, and let the end user determine whether they think it's misleading.

Some cameras support this, but usually only for raw.

Note that your cell phone camera is using gen AI techniques to counteract sensor noise.

Was that famous person in the background really there, or a hallucination filling in static?

Who knows at this point? So, the signatures you proposed need to have some nuance around what they’re asserting.


To be fair, I think just signing details about the way an image was assembled makes sense. Deciding on fake vs real doesn't have to be done at capture time. We already store things like aperture, sensitivity, and camera make/model in the EXIF data; including details about the image-processing pipeline seems like a logical next step. (With a signature-verification scheme... and I guess also trying to embed that in the actual bitmap data.)

There is no original image to recover, since we can't capture and describe every photon, so it's not a "fake vs real" image signature... that would be a UI choice the image viewer client would make based on the pipeline data in the image.
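One way to picture binding the pixels and pipeline metadata together: a toy sketch using an HMAC as a stand-in. A real camera would use an asymmetric key pair in secure hardware, and the field names here are assumptions.

```python
import hmac, hashlib, json

# Hypothetical per-device secret; in practice an asymmetric key
# fused into the sensor's secure element.
DEVICE_KEY = b"device-secret"

def sign_capture(pixels: bytes, pipeline: dict) -> dict:
    """Bind the pixel data and the processing-pipeline metadata under
    one signature, so neither can be swapped out independently."""
    manifest = {
        "pixels_sha256": hashlib.sha256(pixels).hexdigest(),
        "pipeline": pipeline,  # e.g. denoise model, HDR merge, crop
    }
    body = json.dumps(manifest, sort_keys=True).encode()
    manifest["sig"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return manifest

def verify_capture(pixels: bytes, manifest: dict) -> bool:
    body = json.dumps(
        {k: v for k, v in manifest.items() if k != "sig"},
        sort_keys=True).encode()
    ok = hmac.compare_digest(
        manifest["sig"],
        hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest())
    return ok and manifest["pixels_sha256"] == hashlib.sha256(pixels).hexdigest()

m = sign_capture(b"raw-bytes", {"denoise": "ml-v2", "hdr": True})
assert verify_capture(b"raw-bytes", m)
assert not verify_capture(b"tampered", m)
```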


Years ago, I worked at Apple at the same time as Ian Goodfellow. This was before ChatGPT (I'd say around 2019).

I had the chance to chat with him, and what I remember most was his concern that GANs would eventually be able to generate images indistinguishable from reality, and that this would create a misinformation problem. He argued for exactly what you’re mentioning: chips that embed cryptographic proof that a photo was captured by a camera and hasn't been modified.


... and the classic Penisland: https://www.penisland.net/


There was another classic from that era, powergenitalia.com, which purported to be the Italian division of Powergen, has now disappeared, and was apparently a prank. Interestingly, some of the early captures of it seem to show another company, so maybe it was just an early attempt at SEO, and they might have removed it because of the risk of being sued for trademark infringement: https://web.archive.org/web/20040830080331/http://powergenit...


I was once linked fagasstraps.com on an IRC channel and I expected something else.


I still chuckle at the expertsexchange.com -> experts-exchange.com migration.



"Q: Can I provide my own wood? A: In most cases we can handle your wood. We do require all shipments to be clean, free of parasites and pass all standard customs inspections."


This is cool to see. I'm from the area (although a different country), but when I was little, my father often said, 'fetch me the digitron,' referring to a pocket calculator. It was only after many years that I realized it's an actual company.


You might be from Slovenia then. I occasionally pass by the Slovenian subreddit and sometimes see stories about interesting computers and designs from Yugoslav times.

I hope to spend a month on a vacation in Slovenia this summer and visit the Računalniški muzej in Ljubljana. There's a retro computer museum in Rijeka, Croatia as well, which I unfortunately missed last year.


Monthly recurring revenue.


It allows you to select a region of the screen, then copies it to your clipboard so you can easily paste it into other programs. It’s the best workflow there is because it requires no other running tools, and the binding is even simpler than ctrl+cmd+shift+4 on macOS.


I imagine that cracking the protocol, with a 39-hour round-trip time... it would be a challenge.


At a mammoth 16bps...

And then the 20kW+ transmitter you'd need...


I think Voyager data rate is 160bps (when I last saw it at https://eyes.nasa.gov/dsn/dsn.html ).


It wouldn't surprise me if they've been reducing the data rate as the signal strength weakens.


Yep. But looks like Madrid is receiving data right now from VGR1 and the page says "159.00 b/sec".
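For a feel of those numbers, using the ~160 b/s rate from the DSN page (the 1 MB payload below is just an illustrative figure, not a real Voyager telemetry frame):

```python
RATE_BPS = 160                 # downlink rate reported by DSN Now
PAYLOAD_BITS = 1_000_000 * 8   # an illustrative 1 MB of data

seconds = PAYLOAD_BITS / RATE_BPS
print(f"{seconds / 3600:.1f} hours")  # ≈ 13.9 hours for 1 MB
```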


cliché


Yes, they state it on the website as well: "See how well it does with your drawings and help teach it, just by playing."


This is interesting, because once it recognizes my drawing, it cuts me off before I finish, which would seem like it's adding incomplete drawings to the training set. Maybe it just wants examples of versions that it can't already recognize.


Yea, apparently two disconnected spoked-wheels are a bicycle now, based on the drawing it accepted from me.


On Android 6.0 I got the correct permissions dialog and I was able to select what the app sees and what it can do. Is this just me?


But isn't that what the app needs from your phone? I always thought that was different from what you are giving an app permission to do when using OAuth.


On Android they're able to use the OS APIs to access that information because the Google account is closely tied to the OS, so the permissions there are basically permissions to the phone's data; with Android they're pretty much one and the same.


I also see this. It asked for I think 4 separate permissions that you could allow or deny.


Sounds like they could optimise costs by encoding two versions of the video: one without sound that plays by default, and another with sound for when you click the actual video.

The CDN invoice probably isn't the smallest of sums you can imagine :).


They wouldn't want a few second pause while the audio is buffered.


A tidbit that I found interesting recently is that their video stream is different from the audio stream source.

says a comment (by yelnatz) above


The size of the audio track is probably around 2% of the size of the whole video. I don't think they'll bother :).


2% of bazillions of dollars is still like nearly a bazillion dollars.


They'd have to either strip audio out of the video stream on the fly, which would probably force them to use less "dumb" CDNs and incur a processing-time cost (which, like most of today's online businesses, they'd happily offload to their users), or they'd have to keep two copies of the same video on their servers, which would make it cost 2x "a bazillion dollars".


You are very probably right; the simple solution makes sense. It is an interesting optimization to be able to make, though.

