Thanks for this, I was not aware that a good portion of GBA songs can be exported as MIDI. But I'm guessing that with good soundfonts you can get pretty reasonable quality for many of them!
They don't use General MIDI standard instruments. You need the extracted soundfont because the instrument numbers are unique to each game. To "improve" the sound you have to edit that soundfont to use higher-quality instruments; you can't just swap the whole soundfont out for a different one.
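If you want to hear the result, here's a minimal sketch of rendering an extracted MIDI with its matching extracted soundfont via the FluidSynth CLI. The file names are placeholders for whatever your extraction tool produced; a generic General MIDI soundfont would map to the wrong instruments:

    # Render the game-specific MIDI with the game-specific soundfont.
    # game.sf2 and song.mid are assumed to come from an extraction tool.
    import subprocess

    subprocess.run(
        ["fluidsynth", "-ni",      # -n: no MIDI input, -i: no interactive shell
         "game.sf2", "song.mid",
         "-F", "out.wav",          # fast-render to a wav file
         "-r", "44100"],           # sample rate
        check=True,
    )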
Thanks, but just like with WebP I'll try to stick to regular JPEGs whenever possible. Not all programs I use accept these formats, and for a common user JPEG + PNG should cover most needs. Maybe add GIF to the list for simple animations, while more complex ones can be videos instead of images.
Unfortunately, being universal implies way more than just having good browser support. There are quite a few image processing programs without webp or jpeg-xl support. I'm using Windows 11, and the default image viewer can't even open webp... Also, keep in mind that because of subscription models there are many people stuck on older Photoshop versions too.
Thanks, I know about this and other workarounds. My point is, if it were truly universal you shouldn't need anything! I bet most regular users will never even know this exists.
I never knew about this either, and it's been very frustrating as I've been converting my manga library over to webp (the savings are insane) and any spot check opens the files in Edge.
Edit: After reading the comments, this doesn't seem to open in Photos App.
Webp can be really annoying once you hit certain encoding edge cases.
One customer of mine (fashion) has over 700k images in their DAM, and about 0.5% of them cannot be converted to webp at all using libwebp. They convert to jpeg, png, and avif without any problem.
Just out of curiosity, what's the problem libwebp has with them? I wasn't aware of cases where any image format would just cross its arms and refuse point blank like that.
We have never been able to resolve it better than knowing this:
Certain pixel colour combinations in the source image appear to trip the algorithm to such a degree that the encoder will only produce a black image.
We know this because, in pure frustration, we brute-forced it: we manually moved a black square to different locations in the source image and tried to encode again. Suddenly it will work.
The images are pretty much always exported from Adobe software, often smaller than 3000x3000 pixels. Images from the same camera, same size, same photo session, same export batch will work, and then suddenly one out of a few hundred comes out black. Only the webp version, not the other formats; the rest of the photos work for all formats.
A more mathematically inclined colleague tried to have a look at the implementation once, but was unable to figure it out because they apparently could not find a good written spec for how the encoder is supposed to work.
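For what it's worth, the brute-force workaround can be automated. A rough sketch using Pillow's libwebp bindings; file names, quality, square size, and step are all made up for illustration:

    from PIL import Image
    import io

    def encodes_to_black(img):
        # Round-trip through libwebp (via Pillow) and check whether
        # every pixel of the decoded result is black.
        buf = io.BytesIO()
        img.save(buf, format="WEBP", quality=85)
        buf.seek(0)
        decoded = Image.open(buf).convert("RGB")
        return decoded.getbbox() is None  # None means all-black

    def encode_with_workaround(src_path, dst_path, size=32, step=64):
        # dst_path should end in .webp so Pillow picks the format.
        src = Image.open(src_path).convert("RGB")
        if not encodes_to_black(src):
            src.save(dst_path, quality=85)
            return
        # Slide a black square across the image until encoding works.
        for y in range(0, src.height - size, step):
            for x in range(0, src.width - size, step):
                candidate = src.copy()
                candidate.paste((0, 0, 0), (x, y, x + size, y + size))
                if not encodes_to_black(candidate):
                    candidate.save(dst_path, quality=85)
                    return
        raise RuntimeError("no square position unblocked the encoder")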
"JPEG XL" is a little bit of a misnomer as it's not just "JPEG with more bits". It supports lossless encoding of existing content at a smaller file size than PNG and allows you to transcode existing JPEGs recoverably for a 20% space savings, the lossy encoding doesn't look nearly as ugly and artifacted as JPEG, it supports wide gamut and HDR, and delivers images progressively so you get a decent preview with as little as 15% of the image loaded with no additional client-side effort (from https://jpegxl.info/).
It is at least a very good transcoding target for the web, but it genuinely replaces many other formats in a way where the original source file can more or less be regenerated.
Honestly, I don't like how webp and now jpegxl support both a lossless and lossy mode.
Let's say you want to store images losslessly. That means you won't tolerate loss of data, which means you don't want to risk it by using a codec that will compress the image lossily if you forget to enable a setting.
With PNG there is no way to accidentally make it lossy, which feels a lot safer for cases you want lossless compression.
You can use too few bits of color depth and get lossiness in PNG. More generally, I can't find myself very sympathetic to "I don't want a format that can do X and Y, because I might accidentally select X when I want Y in my software". You might accidentally choose JPG when you want PNG too. Or accidentally resample the image. Or delete your files.
If you want a robust lossless workflow, PNG isn't the answer. Automating the fiddly parts and validating that the automation does what you want is the answer.
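For example, here's a minimal "automate and validate" sketch, assuming the reference cjxl/djxl tools are on PATH and using placeholder file names: encode losslessly, decode back, and verify the pixels match before trusting the conversion:

    import subprocess
    from PIL import Image

    # Encode losslessly (-d 0 = distance 0), then decode back to PNG.
    subprocess.run(["cjxl", "-d", "0", "input.png", "out.jxl"], check=True)
    subprocess.run(["djxl", "out.jxl", "roundtrip.png"], check=True)

    # Validate: the round-tripped pixels must be identical.
    # (Assumes 8-bit images; compare raw data for higher bit depths.)
    a = Image.open("input.png").convert("RGBA").tobytes()
    b = Image.open("roundtrip.png").convert("RGBA").tobytes()
    assert a == b, "round trip was not lossless, keep the original!"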
PNG can be, and often is, used in a lossy way. Reducing the number of colors so PNG8 can be used instead of PNG24/PNG32 is the most common way to do that. Tools like pngquant exist, and Photoshop, for example, when exporting to PNG also has options to reduce the colors, to flatten the image (remove alpha), or to change the colorspace.
16-bit PNG files can easily accidentally be reduced to 8-bit, which is of course a lossy operation. Animated PNG files can easily get converted into a still image (keeping only the first frame). CMYK images will have to be converted to RGB when saving them as PNG, which is also a lossy operation. It can happen that an image gets created as or converted to JPEG and then gets saved as PNG - which of course is a bad and lossy workflow, but it does happen.
So I don't agree that with PNG there is no way to accidentally make it lossy.
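To make the PNG8 case concrete, a small sketch (using Pillow, with placeholder file names) showing that a palette reduction produces a perfectly valid PNG while silently discarding data:

    from PIL import Image, ImageChops

    original = Image.open("photo.png").convert("RGB")

    # Quantize to a 256-color adaptive palette, as pngquant or an
    # export dialog might, then expand back to RGB for comparison.
    quantized = original.quantize(colors=256).convert("RGB")

    # getbbox() on the difference is None only if no pixel changed.
    diff = ImageChops.difference(original, quantized)
    print("data was lost:", diff.getbbox() is not None)

    quantized.save("photo_png8.png")  # still a valid "lossless" PNG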
In any case: lossless or lossy is not a property of a format, but of a workflow. For keeping track of provenance information and workflow history, I would recommend looking into JPEG Trust / C2PA, which is a way to embed as metadata what happened to an image since it was captured/generated. Relying on the choice of image format for this is fragile and doesn't allow expressing the nuances, since reality is more complicated than just a binary "lossless or lossy".
> Specifically for JPEG files, the default cjxl behavior is to apply lossless
> recompression and the default djxl behavior is to reconstruct the original
> JPEG file (when the extension of the output file is .jpg).
You're right, however, that you do need to be careful and use the reference codec package for this, as tools like ImageMagick create loss during the decoding of the JPEG into pixels (https://github.com/ImageMagick/ImageMagick/discussions/6046) and ImageMagick sets quality to 92 by default. But perhaps that's something we can change.
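You can check the quoted round trip yourself with the reference tools (placeholder file names); if it worked losslessly, the restored JPEG should be byte-identical to the original:

    import filecmp
    import subprocess

    # Default cjxl behavior for .jpg input is lossless recompression;
    # default djxl behavior for a .jpg output is exact reconstruction.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
    subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)

    print("bit-exact:", filecmp.cmp("photo.jpg", "restored.jpg", shallow=False))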
There are odd cases where it still has uses. When I was a teacher, some of the gamifying tools didn't allow video embeds without a subscription, but I wanted to make some "what 3D operation is shown here" questions with various tools in Blender. GIF sizes were pretty comparable to video for largely static, less-than-a-second loops, and likely had slightly higher quality with some care taken to reduce color palette usage.
But I fully realize, there are vanishingly few cases with similar constraints.
If you need animated images in emails or text messages, GIF is the only supported format that will play the animation. Because of the size restrictions for these messaging systems the inefficient compression of GIFs is a major issue.
Videos and images are treated very differently by browsers and OSes. I'm guessing the better suggestion would be to use apng or animated avif if you are looking for a proper gif alternative.
Yes, by using the <picture> element with <source> elements declaring the individual formats with the last one being a regular <img> with the gif.
Or you could use content-negotiation to only send avif when it's supported, but IMO the HTML way with <picture> is perhaps clearer for the client and end user.
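For the content-negotiation route, a minimal sketch (plain WSGI, hypothetical file names): serve avif only to clients that advertise it in the Accept header, and fall back to gif otherwise:

    def app(environ, start_response):
        accept = environ.get("HTTP_ACCEPT", "")
        if "image/avif" in accept:
            path, ctype = "anim.avif", "image/avif"
        else:
            path, ctype = "anim.gif", "image/gif"
        with open(path, "rb") as f:
            body = f.read()
        # Vary tells caches the response depends on the Accept header.
        start_response("200 OK", [("Content-Type", ctype), ("Vary", "Accept")])
        return [body]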
I think the webp problem was due to browsers supporting webp but not supporting animation, transparency or other features, so content negotiation based on mime types (either via <picture> or HTTP content-negotiation) did not work properly. Safari 16.1-16.3 has the same problem with AVIF, but that is a smaller problem than it was with webp.
So, 2 people submitted the same thing earlier today and got ignored, but this time it makes the front page? And this seems to be happening often here. Not the most encouraging system to submit something...
Are you submitting for Internet points or because you think something is interesting, important, or cool? Discussion is more interesting, even if it is limited, when you aren't trying to just get points.
My point here is that the submissions were the exact same thing. Is something deemed "interesting" or not depending solely on who submits it? The logic doesn't check out here.
I find this quite worrying: with this much decline SO might end up disappearing. This would be a very bad thing because in some answers there are important details and nuances that you only see by looking at secondary answers and comments. Also, this seems to imply that most people will just accept the solutions proposed by LLMs without checking them, or ever talking about the subject with other humans.
Pretty different, actually. You don't have to worry about possible malware, and you get to support the developers of games you like (aka "vote with your wallet"). Also, even if you get your license revoked it's not as big a deal as in other stores, where in some cases they may even delete the game from your devices remotely, without warning. The offline installer is a guarantee for you as a consumer.
Malware is easy to avoid if you know where to download from and if you engage in the herculean task of uploading the .exe to something like virustotal.com in case of any doubt. Not that it matters much, seeing how there are examples of GOG games using cracks from the internet anyway.
Supporting developers is a weak argument considering that GOG's claim to fame is that they're selling old games where the development studio no longer exists or has been bought out by a corporate entity like EA.
Revoking my license isn't a big deal? I paid real money for the game.
The offline installer is about as much of a guarantee of anything as a pirated ISO is.
Am I the only one thinking that 1.7x is a very weird way of saying "70% more"? It's even wrong since, as other comments point out, 1.7x MORE would in fact be 2.7 times as much, which is not what the bug numbers say.
We may be facing a grim situation in a few years because of this. Right now most consumer-grade storage is flash memory, and all of it suffers from this problem: SSDs, pendrives, SD cards, Compact Flash... Apparently games for the Nintendo 3DS and PS Vita are already suffering from this, and people losing photos on faulty SD cards is hardly news.
I think what the author means here is that doing prep work is useless if you don't follow through on it. No amount of investigation, planning, or organization is going to be worth anything until you finally apply it to do the actual core of the thing.
A long interview with ZSKnight, creator of the awesome ZSNES (one of the first SNES emulators). The interviewer is Zophar, host of the well-known Zophar's Domain, a classic site for emulation and ROM hacks. So 2 retro legends here!