cedricd's comments

I've scanned a few hundred images using an iPad as the light source. I've tried both a white screen and a bluish screen designed to basically invert the orange cast from the negative.

Both seem to work well. The bluish screen works quite nicely, but it turns out that different rolls need slightly different light colors to compensate, so it wasn't worth the trouble. In the end the best result came from buying a license for Negative Lab Pro[0] to post-process everything.

[0]: https://www.negativelabpro.com/
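For the curious, the basic operation tools like NLP automate can be sketched in a few lines of NumPy: invert the scan, then stretch each channel to cancel the orange mask. This is a minimal illustration, not NLP's actual algorithm -- the real per-film-stock corrections are far more involved.

```python
import numpy as np

def invert_negative(img: np.ndarray) -> np.ndarray:
    """Invert a color-negative scan and crudely neutralize the orange mask.

    img: float RGB array in [0, 1], e.g. from a camera 'scan'.
    The per-channel stretch below is a stand-in for the film-stock-specific
    corrections a tool like NLP applies.
    """
    inverted = 1.0 - img
    # The orange mask lifts the red/green channels; stretching each
    # channel to full range independently re-balances them.
    lo = inverted.min(axis=(0, 1), keepdims=True)
    hi = inverted.max(axis=(0, 1), keepdims=True)
    return (inverted - lo) / np.maximum(hi - lo, 1e-6)
```

This also hints at why a per-roll light color wasn't worth chasing: the same normalization can be done per-channel in post.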


That's a good recommendation, thank you. It's amazing how complicated film photography has become in the digital era! My next task is to ditch Lightroom/Photoshop subscription. I'm going to give Darktable a go. Edit: aaah, I see NegativeLabPro needs Lightroom, hurrumph...


Seconded. NLP is well worth the money - not just in results, quality, and time saved, but also in finding joy in shooting and scanning color.


I held out for some time myself, but NLP is excellent. It works particularly well with an old Pakon I still use, as well as with a camera to “scan”.


I think a statement like that merits a source. You mentioned tires being ablative: 30k miles of tire wear ~= a few pounds of material vs 600 gallons of fuel (assuming a 50 mpg ULEV).


Exactly. My driving created ~400 kg CO2 a month, no way tires are contributing anything close to that. How would they even.
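The back-of-the-envelope math bears this out. Using the EPA's ~8.9 kg CO2 per gallon of gasoline and the rough mileage and tire figures from the comments above (the tire-wear mass is an assumed ballpark):

```python
# Rough comparison of fuel CO2 vs tire wear over 30k miles.
MILES = 30_000
MPG = 50                      # assumed ULEV fuel economy
CO2_PER_GALLON_KG = 8.887     # EPA estimate for gasoline

gallons = MILES / MPG                       # 600 gallons
fuel_co2_kg = gallons * CO2_PER_GALLON_KG   # ~5,300 kg CO2

# Assume ~1.5 kg of tread worn per tire over its life (illustrative).
tire_wear_kg = 4 * 1.5

print(f"fuel CO2: {fuel_co2_kg:,.0f} kg, tire wear: {tire_wear_kg:.0f} kg")
```

Even with generous assumptions about tread loss, fuel CO2 outweighs tire wear by roughly three orders of magnitude.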


Netflix did VMAF for this: https://github.com/Netflix/vmaf

It checks a reference video against an encoded video and returns a score representing how close the encoded video appears to the original from a human perspective.
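If you have an ffmpeg build compiled with libvmaf, you can run it without touching the VMAF library directly. The filenames below are placeholders; note the filter expects the distorted video as the first input and the reference as the second:

```shell
# Score an encode against its reference and write JSON results.
ffmpeg -i encoded.mp4 -i reference.mp4 \
  -lavfi libvmaf=log_path=vmaf.json:log_fmt=json \
  -f null -
```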


that said, iiuc, SSIMULACRA 2.1 is generally considered a strictly better quality measurement.


Citation needed.


For one, https://github.com/cloudinary/ssimulacra2?tab=readme-ov-file... shows a higher correlation with human responses than VMAF across 4 different datasets and several correlation metrics.

also see https://jon-cld.s3.amazonaws.com/test/ahall_of_fshame_SSIMUL... which is an A/B comparison across a lot of images: for each it shows 2 versions, one preferred by SSIMULACRA 2, the other preferred by VMAF


The authors of the metric finding some cases where it works better is not the same thing as it being widely considered better. When it comes to typical video compression and scaling artifacts, VMAF does really well. To prove something is better than VMAF on video compression, it should be compared on datasets like MCL-V, BVI-HD, CC-HD, CC-HDDO, SHVC, IVP, VQEGHD3 and so on (and of course Netflix Public).

TID2013 for example is an image dataset with many artifacts completely unrelated to compression and scaling.

- Additive Gaussian noise
- Additive noise in color components is more intensive than additive noise in the luminance component
- Spatially correlated noise
- Masked noise
- High frequency noise
- Impulse noise
- Quantization noise
- Gaussian blur
- Image denoising
- JPEG compression
- JPEG2000 compression
- JPEG transmission errors
- JPEG2000 transmission errors
- Non eccentricity pattern noise
- Local block-wise distortions of different intensity
- Mean shift (intensity shift)
- Contrast change
- Change of color saturation
- Multiplicative Gaussian noise
- Comfort noise
- Lossy compression of noisy images
- Image color quantization with dither
- Chromatic aberrations
- Sparse sampling and reconstruction

Doing better on TID2013 is not really an indication of doing better on a video compression and scaling dataset (or being more useful for making decisions for video compression and streaming).


Only a guess, but drag would probably be a concern and could very likely undo the effects of the solar arrays.


It asked specifically if they couldn't make payroll for the next 30 days.

Also -- it's 30% of those who responded to the survey. Lots of startups could be banking with SVB and simply not have bothered to fill it out, and maybe those less likely to make payroll would be more likely to respond.


thank you very much


I'm fairly sure that Bitcoin just isn't as profitable to mine with a GPU -- it's more cost-effective to use an ASIC designed for that purpose.
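The gap comes down to hashes per joule: for SHA-256, purpose-built silicon is orders of magnitude more energy-efficient than a GPU, so electricity costs alone make GPUs uncompetitive. A quick sketch with illustrative order-of-magnitude figures (not current hardware specs):

```python
# Why SHA-256 mining moved to ASICs: joules per hash dominate profitability.
def hashes_per_joule(hashrate_hs: float, power_w: float) -> float:
    return hashrate_hs / power_w

gpu  = hashes_per_joule(1e9, 200)      # ~1 GH/s SHA-256 at ~200 W (illustrative)
asic = hashes_per_joule(100e12, 3000)  # ~100 TH/s at ~3 kW (illustrative)

print(f"ASIC is ~{asic / gpu:,.0f}x more energy-efficient")
```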


You're definitely right, but one mitigating factor is that Starlink satellites are in a pretty low orbit. They'll naturally decay relatively quickly and drop out of orbit at end of life.

Edit: a link that goes over some of this https://www.spacex.com/updates/#sustainability


I've been liking https://fathom.video a lot. Integrates well with Zoom. I think you can choose which recordings are shared with teams -- and their transcription is pretty good.



I'm glad the author also points out how customer (mis)use can blow up data warehouse costs too. No matter how efficient Snowflake gets, overusing the warehouse or running unnecessary queries will ultimately have a larger impact.

The trend in the data space currently is for usage to increase -- as more companies adopt dbt, they're running more and more prebuilt queries (materialized views) on a scheduled basis rather than on demand. This is overall a good thing in that data is becoming easier to manage and use, but it comes with an increase in warehousing costs.

I think eventually the pendulum will swing back to tools that help optimize warehouse usage, as long as they allow for the same increase in productivity as dbt (disclosure: I work for one such company).

