Hacker News

You haven't been following the story, then, because Samsung phone cameras have been accused of doing exactly that: using a training set of publicly available images to "enhance" people's images by filling in details with a ML model.


Those people don't understand how anything works, though, so I don't care what they said. And how would they know if it used "publicly available images"? It's the moon, of course it's publicly available. It's in the sky.


You're focusing on the "publicly available" part, but it's irrelevant to my point. Samsung's camera software is literally doctoring people's photos with fake data from its ML models. From the perspective of photographic integrity, it does not matter whether they scraped their training set off public photography sites or hired a team of photographers to build a dataset from scratch. To the user the effect is the same: what gets created is not really an authentic photograph; it's a hybrid derived from (to the user) unknown sources.


All photos work like this: cameras don't know what color things are; the color you see is made up by the camera.

You've never taken an authentic photo on a digital camera.

But… they don't use ML models made from publicly available images.
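The "color is made up" point above refers to demosaicing: each photosite on the sensor sits behind a Bayer color filter and records only one channel, so the camera must interpolate the other two for every pixel. A minimal NumPy sketch of the idea (assuming an RGGB pattern and crude nearest-neighbour interpolation; real pipelines are far more sophisticated, but the principle is the same — most of the color data is estimated, not measured):

```python
import numpy as np

def demosaic_nearest(raw):
    """Reconstruct RGB from a Bayer-pattern mosaic (RGGB assumed).

    Each photosite records one channel; the other two are filled in
    by nearest-neighbour copying within each 2x2 Bayer cell. Two
    thirds of the output colour data is interpolated, not captured.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=raw.dtype)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]          # top-left of the cell: red
            g = raw[y, x + 1]      # one of the two green sites
            b = raw[y + 1, x + 1]  # bottom-right: blue
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb
```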


There's a difference between ICC colour profiling (which is embedded in the RAW file and can be changed by the photographer without any degradation in image quality) and what Samsung is doing. The former can only affect the image colours globally (for example, allowing adjustment of white balance) whereas Samsung's changes are local, affecting image data in a profound way that can add hallucinated details which were not present in the original scene. One rather disturbing example is that the tool added teeth to baby photos!
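The global-vs-local distinction above is easy to show concretely. A hypothetical NumPy sketch (function names are illustrative, not any vendor's API): white balance is one gain per channel applied identically everywhere, while a local edit replaces pixels under a mask with data that may come from anywhere, including a model.

```python
import numpy as np

def white_balance(img, gains):
    """Global adjustment: one gain per channel, applied identically to
    every pixel. Reversible (divide by the gains again); it cannot
    introduce detail the sensor never recorded."""
    return img * np.asarray(gains, dtype=img.dtype)

def local_edit(img, mask, patch):
    """Local adjustment: pixel data is replaced only where `mask` is
    set. `patch` can come from anywhere -- including an ML model --
    so the output may contain detail absent from the original scene."""
    out = img.copy()
    out[mask] = patch[mask]
    return out
```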


White balance adjustments to camera raws don't generally use ICC profiles; they do whatever the combination of camera format and raw processor wants.

Which can involve local adjustments, since you may want to process people differently from skies. So object recognition/segmentation models can definitely be involved, or what's essentially an upscaling model used for better demosaicing. (That wouldn't be trained on public images, though, unless they were the right format of camera raw.)
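Segmentation-driven processing like that can be sketched in a few lines. This is a toy NumPy illustration (the mask would normally come from a segmentation model, and the smoothing stands in for whatever per-region denoising a real pipeline applies; all names here are hypothetical):

```python
import numpy as np

def box_smooth(img):
    """4-neighbour average with wrap-around edges -- a stand-in for
    the denoising a pipeline might apply to sky regions."""
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def process_by_segment(img, sky_mask):
    """Per-region processing driven by a segmentation mask: smooth
    only where the mask marks sky, leave everything else untouched.
    This is exactly the kind of local adjustment a global white
    balance cannot express."""
    out = img.copy()
    out[sky_mask] = box_smooth(img)[sky_mask]
    return out
```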



