I don't mean this as a dig at the developers of this particular site; my annoyance is cumulative rather than aimed at this website specifically. But IMO this is yet another website that does one simple thing that should be a library (or at the very least a documented web API), not a website.
What? The designers who will be using this to convert PNGs won't have the slightest idea how to access an API. I, as a programmer, don't want to write code to do this either. I think creating a site for this is awesome.
It would be cool if they created an API as well, and a library (but for what language? so many compatibility issues...), but to say this should "not be a website" is entirely backwards. A website is the most user-friendly and accessible platform possible, and it makes perfect sense as the first step.
I have to agree with the OP here. Making this a webpage is silly.
If it were a library, the program that created the PNG could have saved it in this compressed form in the first place.
Choosing the language isn't an issue, either. The obvious choice is C because that's what regular libpng is written in. And every language in existence provides a way to interface with C.
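To illustrate the "every language can interface with C" point: here's what the binding side looks like in Python via `ctypes`. The example calls `strlen` from the standard C runtime just to keep it self-contained; loading a hypothetical compression library would use exactly the same pattern with a different library name.

```python
import ctypes
import ctypes.util

# Locate and load the C runtime. A hypothetical libpng-based
# compression library would be loaded the same way, by name.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes marshals arguments correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"transparent"))  # → 11
```

Ruby has FFI, Node has node-ffi/N-API, Java has JNI, and so on, which is why C is the lowest-friction choice for a library like this.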
"What? The designers who will be using this to convert PNG's won't have the slightest idea how to access an API."
For designers it should be a Photoshop plugin (built on the theoretical library) or just a desktop app that can bulk convert entire directories.
While dealing with 20 images at a time is certainly nicer than one at a time, a drag-and-drop website UI like this one is an absolute workflow killer on the design side.
The problem is that websites are the most accessible way to deliver an interface for humans. But nobody else can do anything with the tech, and you're limited to the one workflow the creator thought of.
If this were a library or a command line app like pngcrush, web services could be spawned from it in days, easily. Or desktop apps. Or editor plugins. But in the current format, it's inextensible.
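As a concrete sketch of how thin those wrappers would be: if the tool were a CLI like pngcrush or pngquant, a web service or desktop app just shells out to it. This assumes the pngquant binary and its real `--force`/`--output` flags; the function name `compress_png` is mine.

```python
import subprocess
from pathlib import Path

def compress_png(src: Path, dest: Path, colors: int = 256) -> list[str]:
    """Build a pngquant invocation for one file.

    Any web service, bulk-convert desktop app, or editor plugin
    could reuse this exact call; that's the extensibility argument.
    """
    cmd = ["pngquant", str(colors), "--force", "--output", str(dest), str(src)]
    # Uncomment where pngquant is installed:
    # subprocess.run(cmd, check=True)
    return cmd

print(compress_png(Path("icon.png"), Path("icon.min.png"), 64))
```

Dropping that into a tiny HTTP handler or a directory-walking loop is the "days, easily" part.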
(disclaimer: this isn't always true, but it's true in this case)
You're right, proper transparency support is step 1 -- but that isn't to say that GIMP couldn't have some sort of automatic lossy compression tool (akin to this) on top of that tech.
EDIT: I suppose this software is equivalent to GIMP's "Automatically select palette" option when converting images to indexed. Perhaps that option could be replaced with a list of algorithms to choose from, like how the Size dialog lets you choose your own scaling algorithms.
Yes, exactly. This tool exists simply because "Automatically select palette" doesn't support transparency. If it did, there would be no need for this.
I'm not sure you really need multiple algorithms; the algorithm itself isn't anything special as far as I know. It just supports transparency, that's all.
You can get somewhat similar results by selecting all the transparent and partially transparent parts and saving the selection. Then flatten the image (i.e. mix the transparency into the background color), reduce the color depth, and use the Color to Alpha option only on the selection (not the whole image) to subtract the color and bring back the transparency.
Then count how many colors you have; if there are too many, undo everything, choose a lower number for the color depth, and try again. It helps to choose a background mixing color that does not otherwise exist in the image.
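The flatten-then-subtract trick works because "over" compositing against a known background is invertible wherever you remembered the original alpha (which is what saving the selection approximates). A toy sketch, not GIMP's actual code, with channels as 0–1 floats:

```python
def flatten(color, alpha, bg):
    # GIMP's flatten step: standard "over" compositing
    # against an opaque background color.
    return tuple(alpha * c + (1 - alpha) * b for c, b in zip(color, bg))

def unflatten(flat, alpha, bg):
    # Invert the mix for a pixel whose alpha we remembered;
    # undefined for alpha == 0 (the pixel is pure background).
    return tuple((f - (1 - alpha) * b) / alpha for f, b in zip(flat, bg))

orig = (0.8, 0.2, 0.4)
bg = (1.0, 1.0, 1.0)          # pick a color not otherwise in the image
mixed = flatten(orig, 0.5, bg)
restored = unflatten(mixed, 0.5, bg)
```

The real Color to Alpha tool has to *infer* alpha from the colors instead of storing it, which is why the background color must not occur elsewhere in the image; otherwise opaque pixels get eaten too.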
I believe this website uses pngquant to perform the optimization. You can get the tool from http://pngquant.org (it's open-source and pretty easy to use as a library too).
Websites of 2012, I am disappoint.