I think there are two main avenues for hardware acceleration: pointer provenance and garbage collection. The first dovetails with things like CHERI [1], but the second doesn't seem to be getting much hardware attention lately. It has been decades since Lisp Machines were made, and I'm not aware of many other architectures with hardware-level GC support. There are more efficient ways to use existing hardware for GC, though, as e.g. Go has experimented with recently [2].
There are algorithms to align allocations and use metadata in unused pointer bits to encode object start addresses. That would allow Fil-C's shadow memory to be reduced to a tag bit per 8-byte word (like 32-bit CHERI), at the expense of more bit shuffling. But that shuffling could certainly be a candidate for hardware acceleration.
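To make that concrete, here's a minimal C sketch of the kind of bit shuffling involved (the layout, the 48-bit address assumption, and the helper names are mine, not Fil-C's actual scheme): allocations come from power-of-two size classes, the log2 of the class rides in the unused upper pointer bits, and the object's start address falls out of a mask and a round-down.

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define ADDR_BITS 48                     /* assumed usable virtual-address bits */
    #define ADDR_MASK (((uintptr_t)1 << ADDR_BITS) - 1)

    /* Tag an interior pointer with its allocation's log2 size class. */
    static inline uintptr_t tag_ptr(void *interior, unsigned log2_class) {
        return ((uintptr_t)interior & ADDR_MASK) | ((uintptr_t)log2_class << ADDR_BITS);
    }

    /* Recover the object's start: drop the metadata, round down to the size-class boundary. */
    static inline void *object_start(uintptr_t tagged) {
        unsigned log2_class = (unsigned)(tagged >> ADDR_BITS);
        uintptr_t addr = tagged & ADDR_MASK;
        return (void *)(addr & ~(((uintptr_t)1 << log2_class) - 1));
    }

    int main(void) {
        void *obj = aligned_alloc(64, 64);              /* 64-byte size class, 64-byte aligned */
        uintptr_t p = tag_ptr((char *)obj + 24, 6);     /* interior pointer, log2(64) = 6      */
        assert(object_start(p) == obj);
        free(obj);
        return 0;
    }

Replacing those shift/mask/round sequences with a single instruction is the obvious target for acceleration.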
There is a startup working on "Object Memory Addressing" (OMA) with tracing GC in hardware [1], and its model seems to map quite well to Fil-C's.
I have also seen a discussion on RISC-V's "sig-j" mailing list about possible hardware support for ZGC's pointer colours in upper pointer bits, so that ZGC wouldn't have to spend virtual-address bits (and the address space that goes with them) on those.
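Roughly the idea (bit positions and names below are illustrative, not ZGC's actual layout): a few colour bits live in the upper, otherwise-unused bits of each reference, and a load barrier checks them against the collector's current "good" colour and strips them before the pointer is used. That check-and-strip is what hardware support would make free.

    #include <stdint.h>

    #define COLOUR_SHIFT 60
    #define COLOUR_MASK  ((uint64_t)0xF << COLOUR_SHIFT)

    static uint64_t good_colour = (uint64_t)0x1 << COLOUR_SHIFT;

    /* Slow-path placeholder: a real collector would remap or relocate here. */
    static uint64_t heal_ref(uint64_t ref) {
        return (ref & ~COLOUR_MASK) | good_colour;
    }

    /* Load barrier: fast path strips the colour, slow path heals the stale reference. */
    static inline void *load_ref(uint64_t *slot) {
        uint64_t ref = *slot;
        if ((ref & COLOUR_MASK) != good_colour)
            *slot = ref = heal_ref(ref);
        return (void *)(ref & ~COLOUR_MASK);
    }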
However, I think that tagged pointers with reference counting GC could be a better choice for hardware acceleration than tracing GC.
The biggest performance bottleneck with RC in software is the many atomic counter updates, and I think those could instead be done transparently, in parallel, by a dedicated hardware unit.
Cycles would still have to be reclaimed by tracing, but modern RC algorithms typically need to trace only small subsets of the object graph.
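For a sense of what such a unit would be offloading, here is a tiny C11 sketch (names and object layout are illustrative, not from any particular runtime): every pointer copy and drop becomes an atomic read-modify-write on the object's header, and those contended operations are what dominate software RC cost.

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
        atomic_size_t refcount;
        /* ... payload ... */
    } obj_t;

    static inline obj_t *retain(obj_t *o) {
        /* Relaxed suffices for an increment: the object is already reachable. */
        atomic_fetch_add_explicit(&o->refcount, 1, memory_order_relaxed);
        return o;
    }

    static inline void release(obj_t *o) {
        /* acq_rel so the freeing thread sees all prior writes to the object. */
        if (atomic_fetch_sub_explicit(&o->refcount, 1, memory_order_acq_rel) == 1)
            free(o);   /* cycles still need a (small) backup tracing pass */
    }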
The TI PRUs demonstrate that what you need for hard real time is two processors and the ability to completely transfer your register file in a single clock cycle with one instruction.
This lets your first processor be deterministic hard real time while your second processor is soft real time.
I really wish somebody who makes M-series microcontrollers would get the message already.
I'm missing a comparison of FL2 on Gen13 vs. Gen12, since that would show the real win (or loss?) from the hardware upgrade. How can they justify the upgrade without this data?
Shouldn't it rather be: manufacturing/assembly tolerances that are no longer negligible? I mean, when I turn the PC on, the temperature of all components is 20 °C, and the training is done at roughly that temperature. But then the PC can run for months with much higher memory-controller and DRAM-chip temperatures.
Samsung switches from monthly updates to updating less and less often as your device ages. Your device will sit vulnerable to known security issues, but Samsung will stick to its once-every-3-months (and sometimes once-every-6-months) update schedule. I found this out after my premium Samsung tablet sat vulnerable for months.
That's true, but for the price, and compared to non-Samsung devices, they are doing really well. Our daughter's A54, which was a bargain at 300 Euro, is still getting monthly updates after three years, and it looks like it will keep getting them for at least another year (since the A53 is also still supported).
Though for price vs. updates it's hard to beat the Pixel 9a. It's currently often ~349 Euro and gets updates until April 1, 2032.
That's nuts. My iPhone 13 actually feels quicker after the iOS 26 update (and I think this is the first time I've said that about an iOS update since the iPhoneOS / single-digit days).
Part of the issue with the 16E is that it is using a binned A18 chip. When I heard it was using the A18 chip I decided to buy it, but it seems the GPU sucks, and Glass is so GPU-intensive, so...
Interesting; I wonder how costly hardware acceleration support for Fil-C code would be.