iOS Safari seems to render it with no performance issues. Interestingly, it can’t seem to decide whether the background is black or green—it’s different each time I load the gif.
Browsers and other players will generally be fine, as they render the image progressively. However, editors/processing tools which attempt to load it as a series of frames or a video will usually break, as memory consumption explodes, unless they have a smart disk buffer system to handle it.
I wouldn't be surprised if some poorly implemented players were built on top of abstraction layers that end up flattening the whole thing too and also break, but browsers at least generally do it right.
Why not cap animated gifs at a maximum duration or file size? GIF's original use cases are already served by formats better suited to the modern era.
Their continued popularity seems due mostly to their historical autoplay behavior, one that apps are so reluctant to disrupt that we're filling landfills with electronic waste to keep up with ever larger and longer meme animations.
Actually, software which edits animated GIFs doesn't have to crash per se; it's all about how smartly it's implemented. And because GIF editing isn't exactly a sprawling industry, most apps tend to be, well, not that smart, so edge cases can get them.
Formats don't have to be fully implemented to their original spec for all time. If it's not serving us well today then we can change how our software uses it.
If the goal is to stop obscene memory and bandwidth bloat, then changing the default to click-to-play/load would be better. My thinking is to change the incentives so producers aren't exploiting an old format to force autoplay at the expense of wasted resources and user control.
i'm pretty surprised apng hasn't really displaced gif, at least not completely. a lot of people don't seem to even be aware pngs are animatable these days
Lack of awareness is the main issue by far, but momentum is another. Gifs are doing the job and people already have tools they are used to, so there needs to be some compelling reason to switch. For many small animated icons the file-size difference isn't going to be massive and an 8-bit palette is usually sufficient; once you start wanting larger animations and full colour, people have already moved to video codecs instead, or for non-video-like animations maybe even manipulating SVG. There are no doubt sweet spots where APNG is ideal, or even the only really good option, though I can't think of any that would be common (wanting an animation with an alpha channel rather than gif's all-or-nothing, maybe).
Another matter is compatibility. IE11 is a no-go, which even after the recent announcement will kill APNG for some. And while Edge now supports it, this has only been the case since the switch to being Chromium-based (so the beginning of last year).
I think this is because video codecs superseded everything else. An MP4 file is going to be roughly the same size as an apng, while enjoying hardware acceleration on a lot of devices out there.
There's still the weird niche of lossless compression where apng or webp would be preferred.
20 years ago the case was stronger, but video codecs have since soaked up the performance wins without adding new security exposure.
Based on APNG and JPEG 2000, the only question I have for new formats is how they plan to get browser support. It’s a hard path to relevance unless you have a good answer for that (even if it’s, say, a WASM fallback).
A WASM fallback for a format is a perverse yet interesting idea. Seems like the main hurdle for new formats (at least patent-unencumbered ones) is Safari. WebP finally made it in; for AV1 there's a pull request, but we'll see what happens.
We just need an actual replacement that has the same behaviours and actually works everywhere, as gif does. Videos are not a replacement, and other image formats don't work everywhere.
H.264 level 4.0 seems to have similar behaviors and works on a very wide variety of platforms in my experience.
Chrome, Firefox, Edge, Android, and iOS. That covers most of what we want, right? It fails on say... a 2009 era netbook or Android Gingerbread, but we gotta draw the line somewhere.
Even if you do care about Android Gingerbread: H.264 3.0 Baseline profile IIRC worked on that (though it's been a decade, so maybe I'm getting version numbers mixed up...). Going back to H.264 3.0 Baseline would reduce your compression efficiency (more distortion/noise at the same file size, or a larger file size for the same level of distortion), but greatly improve your compatibility with decade-old devices if you cared.
Even H.264 3.0 Baseline is a far superior format compared to GIF though.
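For anyone who wants to try that trade-off, here's a rough sketch of a GIF-to-MP4 conversion targeting H.264 Baseline 3.0 (filenames are placeholders, and the exact settings are just one reasonable choice, not gospel):

```python
import subprocess

# Sketch: turn an animated GIF into a silent H.264 Baseline 3.0 MP4 via ffmpeg.
# yuv420p + even dimensions are what most players expect; Baseline/level 3.0
# trades compression efficiency for compatibility with very old devices.
subprocess.run([
    "ffmpeg", "-i", "input.gif",
    "-c:v", "libx264",
    "-profile:v", "baseline", "-level", "3.0",
    "-pix_fmt", "yuv420p",
    "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2",  # round width/height to even numbers
    "-movflags", "+faststart",                   # put the header up front for streaming
    "-an",                                       # no audio track, just like a GIF
    "output.mp4",
], check=True)
```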
Lol audio is a mess though. But video-only is actually way better than most people expect.
Most of gfycat's traffic these days is .mp4 files that pretend to be gifs. Even if you upload a gif, it's converted into .mp4 because it's a far more efficient transmission codec.
I'm sure there's some javascript / backend logic that handles some corner cases. But... yeah. A lot of self-looping .mp4 stuff seems to be solved. The <video> tag has been getting more and more consistent these days.
I just do some hobby stuff though. I only test on the stuff close to me (chrome, edge, firefox, my phone). So I can't say too much about reliability on older / more obscure platforms.
Twitter does the same, and I (along with other people) hate it. I mean... nice that it saves data, but I can't save it. Downloading an image is SO simple, but I have to rely on 3rd party services to convert the video back to gif if I want to post it as a gif on twitter later, or send it as a gif on whatsapp.
I know enough about the debugging terminals in Chrome and Firefox to just "save-as" the .mp4 file itself. So I personally haven't had any problems with saving or sharing .mp4s. (Most commonly: grabbing some animated .mp4 meme and copy/pasting it into Discord)
But yes: it's weird that Chrome / Firefox don't have easy-to-use "save as" buttons on .mp4s. But just grab the .mp4 and share the .mp4 on whatever services you use.
Increasingly, it seems like .mp4 is becoming the new gif. It's not quite as user friendly yet, but there are all sorts of advantages compared to .gif.
> I know enough about the debugging terminals in Chrome and Firefox to just "save-as" the .mp4 file itself.
Right, so this just doesn't work for like 90% of the population, or when you are on mobile, right?
Edit: what I mentioned about saving the mp4 and having problems later is: if I save the video and then try to re-share it on Twitter, it will be shared as a video, and not as a gif - or at least that was the case last time I tried.
Can I just upload a H.264 level 4.0 video anywhere where an image is allowed and it will be displayed as an image? Will it be displayed as an image in any forum, chat/messenger platform? Can I use it as my avatar in places that allows for gif avatars, like Mastodon?
Hmm, I'm thinking about the Web-browser level (Chrome / Edge / Firefox) instead of say, web-application layer (ie: PHPbb vs XenForo).
The web browsers seem to have significantly improved compatibility of <video> in recent years, and even had decent compatibility 10 years ago (if you use Baseline profile H.264 3.0 videos and Javascript to smooth over some edges).
I've seen this debate happening for so many years, and people will not stop using gifs until another solution works exactly like a gif for the end user. APNG or WEBP would be a better solution, as they are images in the end.
You say they had decent compatibility 10 years ago, but no. At least a couple of years ago you still needed many fallbacks, and it was a hassle to guarantee the video would show properly. Not to mention that you couldn't share it as an image to tumblr/pinterest, for example.
One interesting aspect of WebP support (or lack thereof) is that it doesn't work in Slack or Discord, even when those are running in a supported browser. I think things like this are inhibiting adoption of newer formats.
That is a way better solution, although APNG didn't help much for my test in terms of data: a 1.8 MB gif was converted to a 1.5 MB apng, not even worth my time to google a gif-to-apng converter.
WebP worked well though, resulting in a 300 KB webp. Might try to start using it.
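If you'd rather script it than google a converter, Pillow can write animated WebP directly; a minimal sketch (filenames and quality settings are just placeholders):

```python
from PIL import Image

# Sketch: re-encode an animated GIF as an animated WebP with Pillow.
im = Image.open("input.gif")
im.save(
    "output.webp",
    save_all=True,   # write every frame, not just the first
    loop=0,          # loop forever, like a typical GIF
    quality=80,      # lossy WebP quality (0-100)
    method=6,        # slowest / best-compression encoder effort
)
# Frame timing can be set explicitly with duration= (milliseconds) if the
# defaults picked up from the GIF aren't right.
```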
It sometimes feels like a bonus today if apps bother to clean up their memory at runtime, so maybe that's why the parent poster thought it was a good and special thing that the OS frees the memory of ended processes.
Btw, many people don't seem to know that you can create awesome memory leaks even in languages with a garbage collector, like Javascript. And I would bet most websites actually do: things only work the way they do because websites are closed regularly, and because RAM keeps increasing. But browse with an older smartphone and you hit the RAM limit very quickly.
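The classic pattern is state that stays reachable forever - an ever-growing cache, or a listener registry that's only ever appended to. The same thing happens in any garbage-collected language; here's a minimal sketch in Python (names and sizes are made up for illustration):

```python
# Nothing here is leaked in the C sense -- it's all still reachable --
# so the garbage collector can never reclaim it and memory grows forever.
_listeners = []   # global registry that only ever grows
_cache = {}       # unbounded cache with no eviction policy

def on_event(callback):
    _listeners.append(callback)        # registered on every "page view", never removed

def fetch(url):
    if url not in _cache:
        _cache[url] = b"x" * 100_000   # pretend this is a downloaded resource
    return _cache[url]

# Simulate a long-running page that keeps registering handlers and fetching:
for i in range(1_000):
    on_event(lambda: None)
    fetch(f"https://example.com/resource/{i}")   # ~100 MB retained by the end
```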
I have 32GB of RAM and it sits unused most of the time. Right now I am at 4GB/32GB. It simply isn't a significant source of memory consumption. Open Atom and you can easily get to 500MB for a single application, which is completely wasteful. That browser can run dozens of apps in 4GB.
On the other hand, my browser (Firefox) keeps overflowing 8GB of RAM a few times a day. Sometimes I wish people programmed like we had 512 MB in a luxury machine.
I also have 32 GB RAM and right now am at 25 GB + 2.4 GB in swap. I'm at around 20 GB most of the time but always have at least 3 Firefox tabs open. Sometimes a buggy process (looking at you, Apple…) decides to go haywire and use 30-60 GB of virtual memory. I don't even notice that until I have a look into the activity monitor. Handling RAM spikes seems to be no issue at least on macOS.
In FF and Chrome, tabs are processes. So aside from resources allocated on behalf of that process by other processes not being cleaned up, the OS will clean up all the memory that tab told the OS to allocate when it's closed.
Firefox user here, plenty of tabs, Win10 as OS. 3.7 GB before opening the GIF in new tab, 3.8 after opening, 3.7 after closing. Reopening and closing it several times in a row yields the same results, consistently. At least for my setup (Win10 heavily crippled to my own liking) closing tab == closing app in terms of memory gained back.
It would be fair for a browser to assume that if you’ve just visited one page that you might return soon, and so keep assets in cache for a little while.
A process which doesn't exist cannot hold memory. But the OS can certainly choose to defer the erasure as long as there's no better use for that memory. This is often done to speed up the performance of processes which are frequently quit/stopped and reopened/started.
> A process which doesn't exist cannot hold memory
Not quite. Some leaks are across processes. If your process talks to a local daemon and causes it to hold memory, then quitting the client process won't necessarily free it. In a similar way, some applications are multi-process and keep some background process active even when you quit, to "start faster next time" (or act as spyware). This includes some infamous things like the Apple updater that came with iTunes on Windows. It's also possible to cause SHM-enabled caches to leak quite easily. Finally, the kernel caches as much as it can (file system content, libraries, etc.) in case it is reused. That caching can push "real process memory" into the swap.
So quitting a process does not always restore the total amount of available memory.
This sort of thing is a known historical attack surface. At one point not too long ago, people were attacking Discord's browser and desktop clients by embedding massive carefully-authored GIF files to exploit the fact that Chromium (thus, Chrome and Electron) decodes GIFs partially or wholly in advance, so the GIF would quickly consume all memory available to the tab/app and either crash it or bog down the system.
I think Discord implemented some measures to guard against those files and Chromium was patched to mitigate this (which is why Edge and Vivaldi are fine), so it's not surprising that something like Safari might struggle with Evil GIFs under certain circumstances as well.
My GIMP process reached 278MB (93MB of which was shared). According to the Dashboard tab, 60MB of that was cache, including layer thumbnails and mipmaps. It's definitely not inflating the memory footprint more than expected. It's hard to count the total pixels, since every frame has different dimensions.
Thanks for the link. The issue seems to be the IOAccelerator framework:
> I’ve narrowed it down quite a bit (and submitted it to Apple as FB9112835).
> What’s going on is that the IOAccelerator framework has some sort of massive leak in it, where it’s using up 35GB of ram, 25 of which is going to swap (which is why you’re seeing kernel_task flake out).
> On intel, the same image only uses 1480K from IOAccelerator.
People have reported their SSDs filling up (in terms of total writes) much faster on Apple Silicon machines. If IOAccelerator can leak like this then it would definitely explain it. 25GB swap for one image is absurd. Multiply that by a few months of usage. It may not be a smoking gun but it is a fingerprint in the pool of blood.
Apple said that the kernel interface used by smartctl is emitting invalid data, which invalidates all conclusions drawn from it, such as “there is/isn’t a problem with SSD wear”.
> "While we're looking into the reports, know that the SMART data being reported to the third-party utility is incorrect, as it pertains to wear on our SSDs" said an AppleInsider source within Apple corporate not authorized to speak on behalf of the company. The source refused to elaborate any further on the matter when pressed for specifics.
We'll likely never hear anything else about this again from Apple officially or unofficially, so I don't expect anyone who believes there's an SSD wear issue to stop believing that there is. Either the combination of "smartmontools is emulating SMART access, but doesn't actually have it" and "a source at Apple said that smartmontools is incorrect" is enough to make this a non-issue, or it's not — and since most people who think that there is a wear issue don't realize the part about smartmontools faking that it has access to SMART data in this scenario (hint: nope!), I don't expect to find common ground.
So as far as I'm concerned, this is all irrelevant until someone's SSD wears out, and no one's reported that, so everyone is all tempest-in-a-teapot over some numbers that an open source tool is handcrafting from a macOS kernel API based on assumptions about Apple's proprietary hardware that are probably wrong. Wake me up when someone's SSD wears out.
That Apple comment is bullshit. We've confirmed that the TBW numbers from smartctl match the actual quantity of data written. You can also see the excess I/O in Apple's own Activity Monitor. The lifetime usage numbers smartctl reports are in line with what a high-end SSD would report, and there is no way for smartctl to "make up" the data. It's real data coming from the NVMe controller. There's no possible way to fake anything like that. That person is not authorized to speak for the company and is probably making stuff up.
Apple are aware of it, the bug is fixed in 11.4, and once the corresponding XNU source code drops I'll be happy to diff it and show you exactly what they changed in the swapper to fix it and debunk your "debunking".
Perhaps, but if you read the Twitter thread others are suggesting the same thing and people seem excited/happy that there is possibly a potential fix that may come from Gus discovering this.
So, maybe premature to get too hopeful but certainly not too soon to look in that direction?
People are suggesting the same thing because it sounds nice to be able to correlate them, but the evidence just does not exist yet. At the moment it seems somewhat unlikely that they are correlated at all, really.
Note that IOAccelerator memory usage doesn’t necessarily mean a bug in IOAccelerator. If my memory is correct, when an app allocates buffers for hardware-accelerated graphics, that memory is attributed to IOAccelerator. So it’s still likely to be a bug in some system framework that’s allocating all these buffers (especially since we only see the issue on one platform) — but an application bug is still a possibility.
Who else is excited that we might revive the 80s Cambrian explosion of different systems and architectures? Back then there were so many options.
Hmmm... the chances for that are pretty slim I'm afraid. "Apple Silicon" is not a new system, it's just one of the large incumbents switching to another architecture (which is also not a first, this now being their fourth architecture, after 680x0, PowerPC and x86). In the desktop/notebook market, Wintel and Apple are firmly entrenched, with only ChromeOS and Linux challenging them - plus a few less significant OSes (FreeBSD, ReactOS anyone?). For mobile devices, we had a bit of a "Cambrian explosion", unfortunately followed by a very quick extinction, which left us with another duopoly. Here also there are free alternatives which however have very marginal market share.
As for actual CPU architectures, there are only two that really matter at the moment: x86/AMD64 and ARM. It's of course very cool that ARM has proved itself flexible enough to be used from (almost) the smallest embedded devices to supercomputers (not to mention Apple M1), but there's not that much diversity as there was in the 80s either...
Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.
Where the category "fish" isn't a clade - it's possible to evolve to no longer be a fish - it's more comparable to a specific generation of ARM chips, like perhaps ARM32, than it is to the ARM line in general. It would be weird to say "64-bit ARMv5" in the same way that it would be weird to say "lactating fish". But it is not weird to say "64-bit ARM" for the same reason it isn't weird to say "lactating euteleostome."
I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?
They weren't horrible either, AArch64 is incompatible with AArch32 but you can still implement both on the same chip with shared internals.
AMD didn't have to extend x86 the way they did, but without buy in from intel there was no way forward unless they went the route they did. Because unless both had agreed to shift to UEFI at the same time and agreed on an ISA it wasn't going to happen. This is why even a modern x86-64 processor has to boot up in real mode... because there was no guarantee that the x64 extensions were going to take off, so AMD had to maintain that strict compatibility to be competitive.
AArch64 had no prohibition, because there is no universal boot protocol for ARM. Insofar as the UEFI or loader sets the CPU in a state the OS can use then it's fine. The fact that there is one IP holder helped as well.
That said could AMD make a x86-64 processor without real mode or compatibility mode support? Yes they can. In fact I would hope that the processors they ship to console manufacturers fit that bill. There is a lot they could strip out if they only intend to support x86-64.
Short answer is yes. Just one significant example: all instructions are 32 bits long and there's no Thumb.
If you read Patterson and Hennessy (Arm edition) there is a slightly wistful throwaway comment I think that Aarch64 has more in common with their vision of MIPS than with the original Arm approach.
Elsewhere you've commented that it's more similar to x86 -> x64 than x86 -> Itanium - which may be true, but Itanium was a huge change. However, Aarch64 is philosophically different to 32-bit Arm, so it's not really like x86 -> x64 at all, which was basically about extending a 32-bit architecture to be 64-bit.
There's a sort of category problem underlying what you're saying though, perhaps fueled by the fact that ARM has more of a mix-and-match thing going on than Intel chips do.
aarch64 isn't really an equivalent category to x64, because it describes only one portion of the whole ARMv8 spec. ARMv8 still includes the 32-bit instructions and the Thumb. I realize you did mention Thumb, but you incorrectly indicated that it doesn't appear at all in ARMv8. As a counterexample, Apple's first 64-bit chip, the A7, supports all three instruction sets. This was how the iPhone 5S, which had an ARMv8 CPU, was able to natively run software that had been compiled for the ARMv7-based iPhone 5.
A better analogue to aarch64 would be just the long mode portion of x64. The tricky thing is that ARM chips are allowed to drop support for the 32-bit portions of ISA, as Apple did a few years later with A11. Like leeter said in the sibling post, though, x64 chip manufacturers don't necessarily have the option to drop support for legacy mode or real mode.
I think that's a fairly important distinction to make for the purposes of this discussion. I wasn't ever really talking about just aarch64; I was talking about all of ARM.
> Not only is it an incumbent switching to another architecture; it's an incumbent switching to another incumbent architecture. ARM is older than PowerPC and almost as old as the Macintosh itself; it came out in 1985.
> I gather that it's true that ARM hasn't been as good about backwards compatibility as some of its competitors, but was ARMv8 really so much of a jump from ARMv7 that one can't count it as part of the same line of processors anymore?
> I wasn't ever really talking about just aarch64; I was talking about all of ARM.
M1 is AArch64 only. You incorrectly brought ARMv8 into the discussion. AArch32 is irrelevant in the context of the M1.
Fair to highlight worse backwards compatibility but then you can't bring back AArch32 which Apple dropped years ago to try to claim that the M1 somehow uses an old architecture.
Is it? It's not like Apple moving MacBooks to M1 happened in a vacuum. M1 is only the latest in a whole series of Apple ARM chips, about half of which were non-aarch64.
That context actually seems extremely relevant to me; it demonstrates that Apple is not just jumping wholesale to a brand new architecture. They migrated the way large companies usually do: slowly, incrementally, testing the waters as they go. And aarch64 was absolutely not involved in the formative stages (which are arguably the most important bits) of that process. It hadn't even come into existence yet when Apple released their first product based on Apple Silicon. Heck, you can make a case that the process's roots go way back before Apple Silicon, all the way back to ~1990, when Apple first shipped the Newton.
Note, too, that the person I was originally replying to didn't say "M1", they said "Apple Silicon." In the interest of leaving the goalpost in one place, I followed that precedent.
I'd regard the fact no one seemed to notice that Arm has switched to a more modern 64 bit architecture (Aarch64) that has very little in common with its predecessors as being quite impressive.
We'll see. ARM architecture is now about 36 years old. I believe RISC V originated about 10 years ago. I think MIPS started about 40 years ago, but I believe it has finally ground to a stop.
Not sure why you'd say that - especially if you look at Arm v9 and the fact that the architecture is starting to make inroads into the server market.
RISC-V is open source which is great in some respects but also not helpful in others.
It's arguably a proto-RISC architecture (e.g. ADD has to be coded explicitly from CLC and one or more ADC, the register file is memory locations 00-FF, etc.), but it has little to do with ARM.
Edit: Granted, Sophie Wilson, one of the designers of ARM, is on record stating that 6502 didn't inspire anything in particular, beside being one of the few inputs to her pool of ideas (16032 and Berkeley RISC being the others): https://people.cs.clemson.edu/~mark/admired_designs.html#wil... So... arguably :)
PowerPC/IBM is still a big player in the server/HPC market. They do many cool things with their architectures since cost is less of a factor (dynamic SMP, switchable endianness, OMI), but they suck to build code for from an out-of-box experience standpoint.
This is the first I have heard of Apple doing this, and I feel like, in my position, I would have heard of this... I have just spent some time searching around myself trying to find any such reference, and the closest I could find was the opposite: an article from Electrical Engineering Journal that said that Apple could have, but stated they didn't need to and pretty strongly implied they didn't, even going so far as to claim that they couldn't in any drastic way due to restrictions that "even Apple" faces as an ARM licensee.
Can you provide some more information on this? I would love to be able to hit them on this, as this would actually be really upsetting to a lot of people I know who work on toolchains.
The rumor I've heard is that Apple is keeping their custom extensions to the ISA undocumented in deference to ARM's desire not to have the instruction set just completely fragment into a bunch of mutually incompatible company-specific dialects.
It's worth noting that the article you link predates the public release of the M1 by a good 10 months. Given how secretive Apple tends to be about these sorts of things, one can only assume that it was based almost entirely on rumor and conjecture.
Undocumented or not, they would be hard to hide: I would think you could scan through MacOS binaries and find them, if they exist. (I guess it's still possible they exist even if you don't find them, maybe unused or only produced by JITs, but that doesn't sound very useful.)
Yup. If you follow the links from that article, you'll get to the site of the person who found and documented them. It doesn't look like it took too much effort.
But it's not really about trying to prevent anyone from discovering that these opcodes exist. It's about trying to discourage their widespread use. If it's undocumented, then they don't have to support it, and anyone who's expecting support knows to steer clear. That gives them more freedom to change the behavior of this coprocessor in future iterations of the chip. And people can still get at them, because Apple uses them in system libraries such as the OS X implementation of BLAS.
Every ARM licensee does this though; they license the core designs from ARM and add features (including additional instructions) around it to package into an SOC. It’s just that Apple has the scale to design their own SOCs instead of buying one from Qualcomm or Samsung.
Which "most"? There is "most" as in number of cores shipped, and "most" as in number of organizations who have a license.
On the second I have no doubt you are correct - I know of several organizations that have licensed ARM just to ensure they have a long-term plan to get more chips without the CPU going obsolete again (one company has spent billions porting software that was working perfectly on a 16-bit CPU that went obsolete - there was plenty of CPU for any foreseeable feature, but no ability to get more). These want something standard - they are kind of hoping that they can combine a production run with someone else in 10 years when they need more supply and thus save money on setup fees.
The first is a lot harder. The big players ship a lot of CPUs, and they have the volumes to make some customization for their use case worth it. However, I don't know how to get real numbers.
Back then code was usually closely tied to the hardware with very little abstraction. Nowadays even if you write in a low level language it's not difficult to target a wide array of devices if you go through standard interfaces.
Proprietary software is probably the main reason we haven't had a whole lot of diversity in ISAs over the past couple of decades (see: Itanium). It's no coincidence that ARM's mainstream explosion is tied to Linux (be it GNU/ or Android/).
ARM's first explosion was in PDAs, not running Linux. SA110 and XScale.
A ton of ARM hardware is embedded cores running VxWorks or EmBed. M0 through M4. Yes, Phones are the dominant core consumer here, but there is a whole bunch of embedded/IoT stuff shipping ARM cores every day that will never see Linux installed.
Back then C was a high level language. Programmers regularly dropped down to assembly (or even raw machine bytes) when they needed the best performance. Now C is considered low level and compilers can optimize much better than you can in almost all cases so more programmers are only vaguely aware of assembly.
Though you are correct, a lot of abstraction today makes things portable in ways that in the past they were not. The abstraction has a small performance and memory cost which wouldn't have been acceptable then, but today it is in the noise (cache misses are much more important, and good abstractions avoid them).
> Now C is considered low level and compilers can optimize much better than you can in almost all cases so more programmers are only vaguely aware of assembly.
This is not true; compilers don't generate super-optimized asm output from C. It's actually not that optimizable because, e.g., the memory access is too low level, so there are aliasing issues.
But optimizing doesn’t actually help most programs on modern CPUs so it’s not worth improving this.
Look in the microcontroller space if you want more "diversity". There are 4-bit MCUs, 8 and 16-bit ones with banked/paged memory, Harvard architectures, non-byte instruction sizes, etc.
I would love to see a CPU Renaissance like this. Back then we had tons of variety, 680x0, x86, Rx000, various lisp machines, Vector computers, VLIW and Multiflow, Sparc, VAX, early ARM, message passing machines, 1-bit multiprocessors, Hypercubes, WD CPUs, and later an explosion of interesting RISC architectures... It was really interesting and enjoyable era.
As someone who programmed at that time, it was also very hard to write even small production programs.
Today I do things in a half-an-hour with Python that would have taken me days - maybe weeks! - to accomplish in 1978.
Each little vendor had their own janky tooling. Compilers cost hundreds of 1970s dollars (until Borland's $49 Turbo Pascal, over $150 in today's money).
Don't get me wrong. I was very unhappy when Intel dominated everything. The fact that ARM, an open-source architecture, is now eating Intel's lunch makes me happy.
But I'd honestly be glad if everyone just settled on ARM and were done with it. It was fun messing with all these weird processors (my first team leader job was writing an operating system for a pocket computer running the 65816 processor!) but it meant that actually generating work was very slow.
I mostly agree with your overall argument, but the "mostly" qualification goes along with a small but important correction:
>The fact that ARM, an open-source architecture
ARM is in no way open, it's fully proprietary. Unlike x86 it is not vertically integrated and is available for anyone to license all the way to the architectural level, and that's huge. But said licenses certainly are not free either, nor Free.
There are promising actual open architectures, in particular OpenPOWER and RISC-V come to mind as interesting with a lot of solid work behind them. So that's one small remaining opening IMO, even if it's more work on the dev side I wouldn't mind having those stick around and get more competitive.
Picking a CPU is not just about the CPU architecture. It is mainly about the ecosystem around that processor. ARM has a huge amount of IP, bus fabrics, compilers, operating systems, boot loaders, and people you can hire with knowledge of all of that. There are far more people out there with ARM experience than SPARC. I don't really see anybody interested in POWER outside of IBM and the chips they sell.
My bet is we will have a small explosion of cheap consumer laptops running ARM, but more as a marketing ploy to ride the hype train around Apple computers with ARM being much better than Intel (even though those ARM chips won't compare to Apple Silicon, but like I said, sales).
CPU hasn't been the limiting hardware in a decade. I think Intel stagnated because people have prioritized spending money on GPUs, memory, and SSDs.
Even when I'm writing an intensive program, I'm using multiple cores, so a single threaded benefit is useless to me.
I have half a mind to think the M1 is a marketing gimmick, because making a better processor was low-hanging fruit that CPU companies aren't trying to compete on (outside of price).
Maybe, but this seems to be a bug with Apple's IO toolkit on x86, so it's unrelated (other than x86 support on macOS already falling apart, which is completely unexpected, considering the quality of the rest of the OS after recent releases).
My first thought was that each "frame" of the GIF is being expanded/rendered to its own backingstore.
GIFs are pretty optimized files — where each "frame" can be a diff from the previous. "De-diffing" and converting palette-based pixels to full 24- or 32-bit RGB could really blow up fast.
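For a feel of how fast it blows up, a back-of-the-envelope calculation (the dimensions, frame count and on-disk size below are made-up illustrative numbers, not the GIF from the article):

```python
# What "de-diffing" an animated GIF can cost once every frame is expanded
# to full 32-bit RGBA in memory. All numbers are illustrative.
width, height = 800, 600
frames = 500                 # roughly a 20 s clip at 25 fps
bytes_per_pixel = 4          # 8-bit palette index -> 32-bit RGBA

decoded_mb = width * height * bytes_per_pixel * frames / 1024**2
print(f"decoded frames: ~{decoded_mb:,.0f} MB")   # ~916 MB, from a file that
                                                  # might be only a few MB on disk
```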
From a codec perspective, gifs would be the first thing that came to mind for a poorly optimized format. The only thing it has going for it is simplicity. This is a bit beside the point you're making, but I did a double take at seeing the words "gif" and "pretty optimized" together. I'm also curious as to what this has to do with the M1 in particular. It looks like a memory allocation issue in the decoder library, but it's hard to tell given the amount of information given. Memory leaks tend to grow without bound, so the OP's statement of a fixed memory usage kinda sounds more like poorly optimized allocations rather than allocations that aren't properly cleaned up. Though it could also be a mix of everything.
"Optimized" in this sense meant that animated gifs can have a frame reference only three pixels of the original image. So an image of 300K with only small movement (think cinemagraphs) wouldn't be much larger.
This is a given for movie formats, but at the time the animated GIF came up it was revolutionary. I think the proper phrase should be "animated GIFs can be pretty optimized, taking into account how inefficient the algorithm is, when compared with other animation algorithms of the time".
I also think there's an interpretation that applies here: When you see an animated gif, even if it's a frame that changes three pixels and nothing else, internally the renderer may be expanding it into a full movie (that is, uncompressing each resulting "frame"). This usually makes GIFs (regardless of how large or small the GIF actually is) take much more memory than common sense would tell you.
Looking this up led me to some interesting unrelated facts about gif animation:
1. animation isn't technically intended by the gif format [0]:
> Although GIF was not designed as an animation medium, its ability to store multiple images in one file naturally suggested using the format to store the frames of an animation sequence. To facilitate displaying animations, the GIF89a spec added the Graphic Control Extension (GCE), which allows the images (frames) in the file to be painted with time delays, forming a video clip
>
> To enable an animation to loop, Netscape in the 1990s used the Application Extension block (intended to allow vendors to add application-specific information to the GIF file) to implement the Netscape Application Block (NAB).........Most browsers now recognize and support NAB, though it is not strictly part of the GIF89a specification
2. SMIL[1] is an animation alternative I'd never heard of for the browser.
The Pillow library in Python detects these sorts of attacks and just throws an error if an image tries to decode into something that's about 10x the size of a UHD image. You run into these safeguards if you start dealing with scientific data and have to override the protections.
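Roughly how that guard looks in practice (the filename is a placeholder; Image.MAX_IMAGE_PIXELS and DecompressionBombError are Pillow's actual knobs, though the exact thresholds may differ by version):

```python
from PIL import Image

# Pillow warns once an image exceeds Image.MAX_IMAGE_PIXELS and raises
# DecompressionBombError when it's roughly double that limit.
print(Image.MAX_IMAGE_PIXELS)   # on the order of 179 million pixels by default

try:
    with Image.open("suspicious.gif") as im:
        im.load()
except Image.DecompressionBombError as err:
    print("refused to decode:", err)

# For trusted scientific data you can raise the limit, or disable it entirely:
Image.MAX_IMAGE_PIXELS = None   # only for inputs you actually trust
```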
No. There is no way this has anything to do with that. That's an issue with the OS swapper algorithm, and it should be fixed in macOS 11.4 according to reports.
I think the grandparent meant that if there is some leak in IOAccelerator.framework that causes excessive memory use, perhaps for some inputs this leads to swapping, leading to more SSD writes.
The people with SSD thrashing didn't experience excessive application memory usage, so that doesn't add up. By all indications it was an issue where the kernel aggressively swaps in and out under some conditions, even when real memory pressure isn't that high.
Memory leaks don't cause swap thrashing most of the time; the leaked memory gets swapped out and then just sits there, as it is unused (hence leaked).
> "That's an issue with the OS swapper algorithm, and it should be fixed in macOS 11.4 according to reports."
I hope so! My MacBook Air has been running over 10 TB of writes per month. Considerably more than my old Intel MacBook Pro, which averaged 2.8 TB per month. Both 8GB machines.
It's enough that I'm worried it could start to see degraded performance after a couple of years or so. That already seemed to be happening on my Intel MacBook after only ~120 TB writes (256GB SSD).
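As a rough sanity check on those numbers (the endurance rating below is an assumption - Apple doesn't publish TBW figures, so a few hundred TBW is just a typical ballpark for a 256GB consumer drive):

```python
# Rough SSD-lifetime arithmetic using the write rates mentioned above.
assumed_endurance_tbw = 150       # assumed rating for a 256 GB consumer-class SSD
m1_writes_per_month = 10          # TB/month observed on the M1 Air
intel_writes_per_month = 2.8      # TB/month on the older Intel MacBook Pro

for label, rate in [("M1 Air", m1_writes_per_month), ("Intel MBP", intel_writes_per_month)]:
    years = assumed_endurance_tbw / rate / 12
    print(f"{label}: ~{years:.1f} years to reach the assumed endurance rating")
# M1 Air: ~1.2 years, Intel MBP: ~4.5 years -- which is why the write rate matters.
```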
Until now most laptops sold everywhere, including high-end models, were 16GB and below. 32/64 is a tiny niche (and a special built-to-order option in most cases), even for video and music editing people.
Sure, 32GB would be nice, but let's not pretend this is some huge issue for anyone but a small minority that runs several VMs simultaneously or such.
Not to mention the M1 machines released thus far (Air, Mini, 13" Pro, and 24" iMac) are the lower end of the line - the kind of machines that people wouldn't tend to update to 32 even when it was an option under Intel (which itself, is not that long ago).
Oh, the app happens to use 5 instead of 1 background processes and nobody notices (using 50+ MB of RAM each, it's not nothing) and the bug just languishes.. :)
Could it be that not all tabs are loaded into memory at all times? I use Firefox with Sidebery and a boatload of tabs, most of which aren't loaded at all.
They do, you just notice it less. MacOS is notorious for having one of the most asinine memory management schemes in the history of software, and so causing a memory issue can be a bit of a finicky task (but certainly not impossible). As a matter of fact, most times you don't even need to fill swap before MacOS runs out of memory: you just need to fool the OS into thinking the memory pressure is high enough to warrant GC.
>As a matter of fact, most times you don't even need to fill swap before MacOS runs out of memory: you just need to fool the OS into thinking the memory pressure is high enough to warrant GC.
I do DAWs (with tons of VSTs and sample libraries), VMs (vagrant, docker) and NLEs (up to 4K), but usually not at the same time, and I've never run out of memory in macOS ever in ~20 years. 16GB is the largest amount of RAM I ever had in them.
How often does this mythical "macOS runs out of memory" thing happen?
>MacOS is notorious for having one of the most asinine memory management schemes in the history of software
I’m not saying a byte isn’t a byte. I’m saying that if I don’t notice it, then I don’t need as much. I routinely swap 6-8gb and I can’t imagine this thing being faster. Most interactions are near instantaneous.
So my ram needs are lower precisely because I don’t notice it.
So the question is what share of the market this is true for? When it comes to laptops, I'd say not much larger (relatively; in absolute numbers it might have doubled, e.g. from 1% to 2%) than what it was in 2019 or 2020.
The iPad Pro has 4GB with almost the same CPU and is absolutely snappy editing 4K videos, instantly switching in and out of apps etc. It’s a matter of software architecture and not raw space available.
Editing 4K videos doesn't strike me as particularly RAM intensive if you optimize the common behaviors (seeking, etc) properly in software and have a fast SSD.
Just purchased my first Mac after using Linux for most of my adult life. How big of a concern is this? I am seeing mixed reports when looking around - should I be questioning my purchase if I was expecting my M1 Air to last a few years?
As you said, mixed reports (https://news.ycombinator.com/item?id=26244093) and not much followup since then. The most common worst-case numbers from that thread, if sustained, would indicate a 4-5 year maximum lifetime.
If this actually turns out to be a widespread problem and Apple doesn't address it in a timely update you may see a class-action lawsuit and/or Apple warranty repair service in a few years.
Yes, it sucked that it took so long to redesign it. The bad keyboards started appearing around 2016, and it wasn’t until 2019 that a redesign first appeared, on the new 16” MacBook Pro.
But Apple have always been good about free, no-questions-asked, out of warranty replacements for faulty/sticky butterfly keyboards. Not really sure what more you could reasonably want them to do.
Because they extended the warranty replacement to four years for the keyboard? Once that's done, they are going to push people towards buying entirely new models, since an out-of-warranty keyboard repair on these things is something absurd like $600-700 IIRC.
Trust was eroded because every year they came out with some improvement on it that was supposed to fix the problem, then at the end of the year models with those keyboards were added to the expanded warranty program.
I think it would have been better if they had a real fix for this issue, it probably would have been expensive for them with a redesigned top and bottom case to fit a decent keyboard but right now the butterfly keyboards seem like ticking time bombs. Eventually they will die and Apple will either ask several hundred to repair it or say parts no longer exist and buy a new one.
>Not really sure what more you could reasonably want them to do.
1. Admit the issue before a class action lawsuit is running.
2. Immediately put out a statement that there "could" be an issue and that it is being investigated. Apple kept their mouth shut, and the fanboys attacked the people who reported the issues, saying they were using the keyboard wrong, or even claiming it was an anti-Apple conspiracy.
"it's okay that Apple made this mistake, because they were always willing to replace your useless trash when it broke for you!"
Maybe Apple should have just swallowed their pride and addressed it in a single product cycle. I was one of the people waiting for the keyboard to be fixed before buying a Mac, and I just ended up switching to Linux before it happened. I ended up buying an M1 Macbook Air, but it doesn't get much use these days besides multiplatform testing.
For Apple Laptops, more like 7-8, at least in the old days.
Typically you would only replace them when they (gradually) became annoyingly slow for daily use.
Agreed, I had to upgrade all of my PowerBooks/MacBooks at some point with RAM and larger HDDs/SSDs.
And my last Apple laptop is from 2012, and I learned that Apple now drops OS support considerably earlier than in the old days (i.e. unnecessarily early; most 8-10 year old MacBooks would be perfectly fine for daily use, but are now unsafe) - at least that was my impression.
And installing Linux on them is always a mixed bag (fans, trackpad, etc.), even though it sure is better with Intel Macs.
I bought a macbook air in ~2012 and it was unusably slow after a year. bought a thinkpad to put linux on after that that I'm still using today (although admittedly it's a bit of a wreck now - still, it lasted for 8 years)
The MacBook Air was not really a "high performance" machine to start with though - you can't compare that with a ThinkPad :)
And it was seldom updated, so you could get very aged specs.
If you bought a MacBook 2011/2012 you would typically get an HDD and between 4 and 8GB RAM. Software requirements/demands sky-rocketed shortly after that. On a non-Air you could at least upgrade this yourself, which gave the machine new life ("just like new")
it did have a 128gb ssd. The thinkpad was also an x1 carbon (also 4gb of ram), so it had a similar form factor, not sure about cpu specs, I probably should have mentioned that.
Interesting, I wonder why mine slowed down so much then. From what I remember it was after an OS upgrade, it was my first time trying a mac, and I ended up being disappointed and going back to linux, but I also didn't do a deep dive into figuring out the reasons for it.
I believe the 2011 era MacBook Air (EveryMac.com confirms only the 11”) had an entry level 2GB RAM variant. That one probably got pretty painful after just a couple OS updates.
My HP laptop is 7 years old and still works well. 4xxx i7 8 threads, 32 GB RAM, 2 TB SSD (upgrades, of course.) A new laptop with a newer CPU and NVMe would be faster but it's still subjectively fast enough for my work. I'll upgrade when it breaks down and I won't be able to repair it. I keep an eye on candidates.
Not anymore. Now Apple will drop MacOS support for their computers way earlier. I tried to update my barely used MacBook from 2012/2013 somewhere last year and learned that it was dropped quite some time ago - so if I had kept it on the newest release (there were serious bugs, I was hesitant), I think it would have gotten around 8 years max, if memory serves me right. No security updates at all anymore, and it's not possible to download an older release than the newest. I did not expect that.
And I had to upgrade the SSD and RAM 2 and 4 years after buying; it became unusable for work (granted, it was not spec'ed out originally).
MacBook Pros and Airs from 2012 are supported by Catalina, and that is still getting security releases. It does require 4GB of RAM though if you didn’t have that to start with.
If you bought your MacBook until MID 2012, you're out of luck though.
And 10 years old would be early 2011, and those are also not supported.
Mojave has the same requirements basically, that leaves High Sierra, and High Sierra had a few months of support left (ended Dec 2020), so I just aborted.
Obviously that depends on a number of factors, including whether you purchase early or late in the generational cycle, whether you spring for the extra memory, total hours of runtime, exposure to rough handling / mechanical stress, and of course blind luck.
Not anymore? It's not like battery replacement is very viable on modern Macs, and the SSD wear issue means that most of these "daily driver" machines will end up dead in more like 3 or 4 years.
But yes, older Macs were notorious for being great machines.
> It's not like battery replacement is very viable on modern Macs
Why? Apple still offers battery replacements for modern Macs, and some more adventurous types of people still do it themselves. They're glued to the chassis, not spot welded.
Apple SSDs have historically had a lower TBW than the rest of the drives on the market, and combined with their swap abuse issue right now, I think it's fair for people to be alarmed.
Not if the computer becomes a paperweight after those 5 years. If Apple wants me to consider a Mac, they need to make the NVMe user-serviceable, no exceptions. If the current chassis leaks are true, Apple has no excuse not to use the extra space inside their Professional(!!!) machine to give it a relatively standard feature found in laptops half its price. There's no excuse anymore.
Still I think a lot of Thinkpads need some parts (often the motherboard!) replaced before they run out of extended on-site repair warranty. The small sample from my co-workers seems to indicate that the rate for that is over 50%. Maybe I just happen to know all the people who use their laptops as shovels.
That said, my 2015 HP Zbook (previously in contracting work, now personal use) still works perfectly, only now with third keyboard, third battery, and a bit of superglue.
From what I could understand, it's related to how much of your workflow runs over the RAM capacity, at which point the memory gets shuffled off to the SSD, and loaded back when it's needed.
I got the 8GB RAM version, and been mainly using it for Unity + JetBrains Rider, both demanding around 4-6GB by themselves. Doesn't help neither of them are ARM native, I'd imagine. So I'm in big trouble :)
A suggestion until ARM native Rider is available. We have a few 8GB M1s that are used for .net development using Rider and we're using the DataGrip swap hack proposed here and it's working great (was unusable without it):
Huh that's a funny hack, will give it a try, cheers!
I wonder if the GoLand/ PyCharm JBR folders will work, since I already have those and they are both ARM native. It's definitely the UI performance that kills me, have no issues with the other editors on the laptop otherwise. Running Unity in parallel likely doesn't help either, though.
A YouTuber investigated (but with no shell commands AFAIK, just observing through Activity Monitor), and it seems like Rosetta 2 apps might also contribute to the high disk usage.
Activity Monitor is not a reliable way to inspect memory usage, it's optimized for speed of calculation. Use 'footprint' and 'zprint' or Xcode/Instruments instead.
Activity Monitor is almost just as useless as Task Manager in terms of reporting memory footprint. Not only will Apple's API constantly hide resources from native apps (like how Safari conveniently hides its rendering processes), but MacOS's memory model is completely at odds with their measurement techniques. Either way, you're better off using top to measure your system's footprint, if anything.
Apple's APIs don't hide anything, programs on macOS just spawn subprocesses for reasons that mostly have to do with security. top(1) is not a very good way to measure footprint, footprint(1) is (and Activity Monitor uses the same APIs internally for its "Memory" column).
As far as I can tell, Activity Monitor and footprint(1) both grab the phys_footprint field from proc_pid_rusage. I have not seen them diverge yet, so if there is more to this that I am missing I (as the author of a system monitoring tool of this sort) would be interested in hearing about it.
See "there’s more columns than that". The single process "memory" column does match but AM doesn't show coalition (multiple process) memory totals, the compressed column is an estimate, footprint --vmObjectDirty also exists and is a valid way to look at things, etc.
Ah, I see what you mean now; yes, it's not reliable in the sense that there are ways to measure it that look a little harder. That being said, I'm not sure I'd classify it as "inaccurate"–it has, in my experience at least, been a fairly good first approximation of memory usage.
Only anecdotal at this time, and it's been a while now. Specifically, a video editor posted a blog about how their laptop failed, which required an out-of-pocket board replacement. Someone speculated in comments here that the drive may have reached its lifecycle, but it wasn't confirmed or examined by an expert in a write-up or anything. Not sure specifically what hardware, but no reason to believe it would be a proprietary drive any different from other laptops'. IMO skeptical of PEBKAC or clickbait; nothing in the post was informative or illuminating.

> last a few years

It will, and to be insured instead of just confident, one can opt for that $200 warranty extension.
11.4 is in beta so not many people are using it, but at least one of the folks with the issue is running it and said it improved things.
Apple were definitely made aware of it, hence why it being fixed in 11.4 makes sense, though there is no official statement that I'm aware of.
I was never able to reproduce it myself; we never found a specific trigger, but some people have the issue consistently and others (most) don't. I only managed to trigger thrashing with very blatant memory pressure (i.e. allocating most of the system capacity and continuously reading it to keep it hot), which obviously isn't what these users were doing.
People have looked at the OSX swapper code, and there were some hints that the algorithm it uses to decide to swap may have had some bugs; if 11.4 fixes it then I'm sure we'll find out once the XNU source drops and we diff it. Nobody has actually tried to root cause this outside of apple (i.e. using debug XNU builds on an affected workload/user).
Also, we never confirmed that this was an M1 exclusive issue. There's some evidence that this is a Big Sur regression that affected all Macs, it's just that the effects aren't obvious on Intel ones because the age of the SSD makes it hard to draw conclusions unless you're actively watching lifetime writes over the course of weeks. On M1s, since the machines are young, the problem is obvious with a single data point.
I still don't see how this can be classed as a bug by anyone other than apple.
You have a single case of 10% lifetime usage (plus a 20% one you mention), along with thousands of reports of people with 2-5% - which you also stated was too high - based on your insistence on using TBW (which can vary by up to 10000x depending on the tech) instead of percentage used (supplied by the manufacturer).
I had an out-of-memory alert on my machine earlier because I opened a typescript file in VLC. It was using 26GB of memory (and climbing) when I noticed it and killed it. I have an 8GB RAM machine. The machine remained fully responsive throughout. That simply wasn't possible before.
It's definitely swapping a lot, for sure, but don't you think that there is a possibility that this is by design, sacrificing disk writes (I am at 50 TBW and still ONLY 2% "used" on a 256GB drive since launch) to make app switching more responsive?
I guess we will see when you are able to diff the source, and you can shut me up once and for all :)
It is by design, but not that much. That's the point. The machines are designed to use swap and memory compression to greatly enhance responsiveness even with less physical RAM than competitors. And that works well for most users. But there's a bug in the heuristic, and for some users, it starts pathologically swapping.
We've seen the numbers go up in the activity monitor. Even while doing ~nothing. Fast. That is obviously a bug. Even with some Electron apps open and such, I guarantee the working set of active apps was nowhere near the physical RAM size. And so, that's a bug.
Terabytes per day of swap activity is not normal, no matter how much these machines are designed to swap on purpose.
As I said, there's one user with 20% usage as reported by the drive. That's not TBW, that's real (they're at >500 TBW, for what it's worth), and it means that machine is going to have a dead SSD within 2 years if the issue isn't fixed.
> Terabytes per day of swap activity is not normal, no matter how much these machines are designed to swap on purpose.
1TB is only 62.5 * 16GB. If it's paging out 8GB+ apps (quite easy for Chrome with a number of tabs), it only takes one memory hog to increase the TBW in a few hours of typical app switching for a mobile app developer.
This edge case is pretty extreme, sure, but it's still a MINIMUM lifetime of 2 years. It doesn't mean it's suddenly going to die when it hits 100%, and even if it did it should be covered by warranty. And this usage is an order of magnitude more than the vast majority of other reports that were made.
I'm inclined to think it's a non-issue, but totally respect your position.
As an aside, I use tab suspenders on my browsers - a habit from my Intel Mac, where Chrome frequently caused memory congestion. It's probably why I get away with running 2 iOS simulators, an Android emulator, Xcode, IntelliJ, 3 VSCode instances, Safari, Firefox and Chrome, and a bunch of utilities and services on an 8GB machine - but I'll still be first in line for a 32GB+ 16+ core machine, because then I'll be able to run VMs :D
A user having high drive usage doesn't make it an issue, let alone the same issue.
That user you linked to is using Catalina (as they mention in their twitter thread, where they demonstrate a 3% usage increase over 2 weeks), so it will be completely unrelated to the support for Apple Silicon, which wasn't added until Big Sur.
Swapping isn't CPU-intensive, and Apple also implemented the memory compression as custom CPU instructions. Swapping is I/O intensive, and these machines have stupid fast SSDs which is why they can get away with it.
When you think of swapping as slow it's not because it eats CPU, it's because it blocks on I/O.
It's all very interesting. I wish Apple would be less tight-lipped about how it all works together. There's so much guesswork because we don't fully understand how the new architecture is being utilized.
With a maximum memory size of 8G it's a bit anemic, so you likely wouldn't get that much life out of it anyway; hope you got the 16G version. Apple is pretty good at the planned obsolescence game, so getting the larger memory would at least help stave that off for a bit.
I'm skipping these for now: I run Linux on all my machines, it typically takes a while for the wrinkles to be ironed out, and x86 has much better support than the M1. I do think it's time we became less fixated on x86; more CPU architectures would be better. Another reason for the skip is that the last two Apple products I've owned (both MacBook Airs) haven't lived up to expectations: one had a keyboard that went bad after only two years of perfectly normal use, and the other has a battery that didn't even go through 50 full charge/discharge cycles and now only holds five minutes' worth of charge. Both of these issues developed out of warranty.
> Apple is pretty good at the planned obsolescence game
Nah, they suck at it.
My GF is just finishing her bachelor's thesis in the living room on my 2013 MBP with 4 GB RAM. First battery, updated all the way from Mavericks to Big Sur. Still supported, still useful, and prettier than 80% of the machines out there.
Just out of interest, is she still using the same HDD? I find that the performance of older MacBooks is pretty awful under recent versions of macOS if the storage is an older non-SSD drive. My old $work MacBook with an HDD was thankfully swapped out for an SSD version -- the difference was night and day. Though this SSD machine is starting to slow down noticeably with Big Sur...
Since 2012 the MacBook Pros have been all Retina and all SSD, so yeah, it's got an SSD. Anecdotally, my 2012 Retina is still doing great, also on Big Sur. The battery is pretty shot, though.
You're right. Macs have excellent build quality and are durable as hell. Planned obsolescence with Apple is seen more with iPhones; there was the recent story about Apple slowing down older iPhones via software updates [1].
This isn't a case of planned obsolescence. If anything, it's exactly the opposite of that: trying to prolong the useful life of a phone as its battery degrades. This did of course lead to them offering a $29 battery replacement for affected phones (after they were caught doing this).
The lack of communication was a problem but "planned obsolescence" in this case is just tin-foil hat nonsense.
It’s 16G and I just retired my 2012 MacBook Pro because I really wanted to try the M1.
It was still going strong after almost a decade of use and got €300 as a trade-in. If that's Apple's idea of planned obsolescence, I support their plan.
Same here. The home iMac Core Duo from early 2008 was finally decommissioned last year, after 12 years (with only an SSD change). Up until that moment it had been the "go-to" computer in the home, running a Plex server, Sonarr, and Transmission 24/7.
My 2012 MBA is used daily and heavily by my mother and graphic-designer sister. My 2015 MBA replaced that iMac, and now I'm trying to find an excuse to replace my current 2019 MBA because the M1s are reaaally attractive. Essentially I've convinced myself Apple is announcing laptops later in the year that look like the newest iMacs, just so I'll wait.
>Apple is pretty good at the planned obsolescence game
If that's their game, they are terrible at it.
Seeing that Macs hold their resale value for longer, that on mobile Apple releases iOS updates for longer than any Android vendor supports its phones, and so on.
What Apple is good at is making the product an "only Apple can fix/change it" affair. But that's not the same as the planned obsolescence game.
>With a maximum memory size of 8G it's a bit anemic
Is it? The majority of non-Apple laptops in 2021 are sold with 8G and below. And for the kind of use (web surfing, email, Slack, Zoom, regular apps, etc.) that the Air and smaller MBPs are aimed at, that has always been plenty.
(In fact, outside of video, audio, 3D, VM, and number-crunching work, it's crazy that people would need more than that for the same stuff, computationally speaking, that we did 10-15 years ago with much less RAM - blame Electron.)
And 16GB or below covers something like the 95th percentile of users, or more.
Agreed. I'm going to stay on Android until Apple puts a USB-C port in the iPhone. Most of my colleagues use iPhones, and they're supported way longer than our Android counterparts (Samsung exclusively).
That said, was this upgrade to Android 7.1 officially supported by Samsung?
In 2015, Apple released the iPhone 6S. It has not stopped receiving day-one software updates. For anyone who still has one (two people in my circle of family and friends still have their 6S), they're still working great on iOS 14.5.
All the M1 Macs can be upgraded to 16 GB, including the Air, and macOS uses memory compression by default, so there can be a bit more life to these machines than you'd otherwise expect. (A quick way to see the compressor and swap at work is sketched below.)
Edit: Since posting this comment, the parent has been edited to include "hope you got the 16G version" -- at the time I replied, jacquesm's comment mistakenly asserted an absolute "maximum memory size of 8G."
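Regarding the memory compression and swap mentioned above: a quick way to watch both is to read the output of the stock vm_stat and sysctl tools. A rough Python sketch; the only assumptions are that these two standard macOS commands are available:

    # Show how much data the compressor holds and how much has been swapped.
    # vm_stat reports page counts; the page size is in its header line.
    import re
    import subprocess

    vm = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout
    page_size = int(re.search(r"page size of (\d+) bytes", vm).group(1))

    for label in ("Pages stored in compressor", "Pages occupied by compressor",
                  "Swapins", "Swapouts"):
        pages = int(re.search(rf"{label}:\s+(\d+)\.", vm).group(1))
        print(f"{label}: {pages * page_size / 2**30:.2f} GiB")

    # Current swap file usage (total/used/free).
    print(subprocess.run(["sysctl", "vm.swapusage"],
                         capture_output=True, text=True).stdout.strip())

"Swapouts" times the page size is roughly the amount of data paged out to disk since boot, which is the number the wear discussion upthread hinges on.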
It's a custom-build option; I'm guessing you're on the page with the couple of preset options? If you click "select" on a model you get a second page that lets you customise things like RAM and SSD.
Back when the DTK was available, I found that a CIImage allocated with imageWithBitmapData was leaking, so I moved to CGImageCreate and CGImage instead.
Looks like Apple is using NVMe as infinite swap. This helps explain how 8GB feels like enough, and it will probably be the cause of future pain when those flash chips begin to wear out.
My macOS knowledge is dated (I haven't used it seriously since 10.5), but swap used to be a dynamically growing collection of swap files, usually covering all of the RAM (it probably can't do that anymore).
macOS Big Sur: I recently implemented an algorithm that leaked and left it running for quite some time. I got a notification that there wasn't enough RAM, with a suggestion to close some apps, once the swap had grown to 64GB (or maybe it was RAM + swap, I don't remember).
This makes me want to make good on a joke/threat to make a Linux distribution that is incapable of displaying animated GIFs. It will also block emojis.
It seems to just be a memory leak in a framework the editor is using, one that only affects the M1. Which is mildly interesting, but a bit of an anticlimax...
The framework is part of Apple’s UI stack and indicates a window system/graphics driver bug somewhere in macOS. So it’s probably more than just “an application has a bug”, but still not very interesting until we learn more.
> My point was nowhere nearly as interesting as if a specially crafted gif somehow triggered a memory leak in a driver when rendered.
There's absolutely no mention of a "specially crafted" GIF or "when rendered" in the title. You're projecting expectations that have nothing to do with the story and aren't even hinted at by the title, and choosing to be offended when they get betrayed. There are many things I'd like to retort here, but the most important is that it's against the HN guidelines, so it would be good if you stopped.
The default thought, if someone says they opened a GIF, isn't "they splayed open the hundreds of frames for individual manipulation", any more than someone saying "the car moved from point A to point B" would mean "the car was disassembled and moved piece by piece from point A to point B".
And I don't recall the rules asking you to backseat-mod :) There's nothing against the rules in explaining simple nuances of the written word.
I wish I was a mod! I could easily check just how many downvotes this whole thread gave you. Now the only thing I can do is to soothe my eyes by looking at all the wonderful shades of grey here.
Where do you see anything about individually opening 730 frames, and why would it be unsurprising for that tiny load to cause a bug? Have you heard of video editors? The same machine can edit four 4K@60fps streams.
Thanks for making my point: the program is indeed opening individual frames.
And what are you on about after that? No one said it's unsurprising for that to cause a bug. The point is that the GIF is not the cause here; the editor is coming across a framework call that's blowing up.
Call it an editor bug or a framework bug; it's not the image using the memory, it's the editor.
Oh my god. I get it. You're reading the headline as "this GIF uses xx memory". That's not what it says. It's "GIF uses X memory on x86, X*10 memory on M1". You're picking a fight based on your own misreading.
730 frames of a 400x240 pixel image at 8 bits per pixel (which is all GIF allows) is 70MB, plus 768 bytes for the palette (a quick check of that arithmetic follows this comment). We should be able to load each frame of the GIF many, many times before we reach even 1GB of RAM, let alone 35GB.
The x86 version of the same editor can open this GIF in the same way with reasonable memory consumption. This is clearly an issue with either the image editor or, as seems to be the case (based on investigation in the Twitter thread), a macOS system framework the editor uses.
Not sure why you feel the need to be so belligerent on this. All the information we need to identify this as a problem specific to the M1 version of macOS is in the first tweet, with more details that give us exact numbers in a follow-up.
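As a quick sanity check of the frame arithmetic above (nothing assumed beyond the numbers already quoted: 730 frames, 400x240 pixels, 8-bit indexed colour):

    # Decoded size of the GIF's frames versus the reported memory use.
    frames, width, height = 730, 400, 240
    bytes_per_pixel = 1                      # 8-bit indexed colour
    decoded = frames * width * height * bytes_per_pixel
    print(f"{decoded / 1e6:.1f} MB of decoded frames")       # ~70.1 MB
    print(f"{35e9 / decoded:.0f} copies would fit in 35 GB")  # ~500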
Uh-oh, this is sooo important, some title on the Internet is wrong!
If I read it correctly, the same app, compiled from the same code, for the same OS, but for different CPU architectures, exhibits wildly different memory consumption for the same task. That's what's interesting here, at least to me, but you chose to nitpick the title instead, just because - from what you write - it offended you somehow by not including the word "editing" at the beginning.
I mean, sure, go on, have fun, but at least don't expect your posts not to be downvoted.
If you're actually following along, it proves exactly what I'm saying: the GIF has nothing to do with it. Firefox isn't blowing up because it's opening that GIF; it's hitting whatever framework leak exists.
This is like saying "DOCX bluescreens Windows" (which implies something that could possibly be exploited in some pretty scary ways) when in reality Word just happens to make some syscall that bluescreens Windows no matter who calls it.
I don’t know why you made up your mind on this without having any details.
The person reporting this is the author of a graphics editor; he damn well knows the difference between a bug in his own editor and one in the system frameworks. The issue seems to be with opening the GIF via default system frameworks, not anything special the editor is doing. Chances are it will affect any other program using the same frameworks, maybe even GIF viewers.
"DOCX bluescreens Windows" is exactly how your hypothetical scenario would be described.
"Bush hid the facts" is described as a Notepad bug, even though it's a bug in a specific WinAPI function used by Notepad and thus would also affect other programs that used that function. It was discovered on Notepad, it became popular as a bug of Notepad, and so it's considered a Notepad bug.
It'd be kind of crazy if there were something inherent to the TXT file that, when opened in any program, made that bug occur! But the issue was with Notepad! Much more believable!
Windows bug would be acceptable too, or WinAPI bug :)
This isn't that complicated; I don't mind teaching y'all how to write non-clickbait headlines.
AFAIK GIFs for actual video content (rather than small animated (emot)icons) went obsolete in about 1995, when the first RealPlayer video plugin was released.
Can someone explain to me why people are still using them for that 26 years later, especially now that we have much better formats like WebM, MP4, or SVG, and will hopefully have AV1 hardware support in a few years?
From personal experience, it's because not everywhere I want to share a video inline supports proper video files, but they more often than not support GIFs. GitHub was one such example, but now that they support rendering videos in PRs, I've stopped uploading GIFs.
It's the only form of video that 1) browsers universally treat as an image file and that automatically, reliably, and seamlessly loops, and 2) also has wide support outside browsers and user familiarity.
The auto-start and auto-loop might or might not be wanted.
The main issue is just how freaking inefficiently huge GIFs are for this use case: this whole thread is one example, but I also saw an order-of-magnitude difference compared to MP4 in a recent use case of mine, which would otherwise have made sending the document by e-mail not viable.
(And imagine still living somewhere where a MB of data doesn't have a negligible transfer cost...)
I know that GIFs are very inefficient for what they're commonly used for, but your question was why people still use them. The fact they are “images” that are animated and loop is the whole point.
Ristretto: 48.8 MB
EOG: 51.3 MB
ImageMagick: 169 MB
Firefox: 305.1 MB
Vivaldi: 185.2 MB
Edge: 196.5 MB
GIMP: 244.8 MB
OS memory usage remained relatively flat once the app exited. Xubuntu 20.04.
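One way to collect per-application numbers like the ones above is to sample each process's resident set size while it has the GIF open. Here is a sketch using plain ps; this is an assumption about method, not necessarily how the figures above were measured:

    # Sample a process's resident memory every second for ten seconds.
    # Run with the PID of the viewer/browser as the first argument.
    import subprocess
    import sys
    import time

    pid = sys.argv[1]
    for _ in range(10):
        rss_kb = int(subprocess.run(["ps", "-o", "rss=", "-p", pid],
                                    capture_output=True, text=True).stdout)
        print(f"{rss_kb / 1024:.1f} MB resident")
        time.sleep(1)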