Hacker News

This reminds me of the Amiga, which had Fast RAM and Chip RAM. It was all main memory, but Chip RAM could be directly addressed by all the co-processor hardware in the Amiga and Fast RAM could not.

It would be sort of interesting for Intel/AMD to do something like this, where they have 16GB on the CPU package and the OS knows to treat it differently from external RAM.

Apple is going to have to do this for its real "Pro" devices, as getting 1TB onto the Mx will be a non-starter. I would expect to see the M2 (or whatever it's called) with a large amount of basic on-chip RAM and an external RAM bus as well.



Dunno, rumors claim 8 fast cores and 4 slow cores for the follow-up. With some package tweaks I think they could double the RAM to 32GB inside the package and leave the motherboard interface untouched.

I do wonder how many use cases actually need more than 32GB when you have NVMe flash with 5+ GB/sec of bandwidth and relatively low latency. Especially with the special magic sauce I've seen mentioned relating to hardware acceleration for compressing memory, or maybe it was compressing swap.
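The compressed-swap idea is easy to see in miniature. This is just a toy sketch using plain zlib in Python, nothing like Apple's actual hardware path, but it shows why compressing cold pages before evicting them stretches effective RAM:

```python
import zlib

PAGE_SIZE = 16384  # Apple silicon uses 16 KB pages

# A "cold" page full of typical in-memory data: repetitive structures
# and text compress very well.
page = (b"log entry: status=ok user=42\n" * 600)[:PAGE_SIZE]

# Fast compression level, standing in for a cheap hardware path.
compressed = zlib.compress(page, level=1)
ratio = len(page) / len(compressed)

print(f"{len(page)} -> {len(compressed)} bytes ({ratio:.1f}x)")
# A few-x ratio means several evicted pages fit in one resident page,
# so "swapping" can often stay in RAM instead of hitting the SSD.
```

If the ratio is good, the kernel only touches flash for pages that compress poorly, which is part of why 16GB machines feel bigger than they are.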

In any case, I'm not expecting the top of the line in the next releases. Step 1 is the low end (MBA, MBP 13", and mini). Step 2 is the mid range, rumored to be MBP 14.1" and 16" models in the first half of 2021. After that, presumably the Mac Pro desktop and iMacs "within 2 years". Maybe step 3 will be a dual-socket version of step 2 with, of course, double the RAM.


So I am not one of those who screamed about the 16GB limit, which drew a huge number of comments here on HN. That said, I do know people in the creative industry who have Mac Pros with 1.5TB of RAM, use all of it, and still hit swap. For a higher-end Pro laptop I would be happy in the 32GB range. However, in something like an 8K-display iMac (which will be coming!) I would like to see 128GB, which will not fit on chip. They are going to have to go to a two-level memory design at some point.


Maybe, or just move the memory chips from the CPU package to the motherboard.


Oh, that is very much something they could do, but given that they control the OS completely, it would be very interesting to keep both on-chip and off-chip RAM and enable the software to understand which RAM is where, so application developers can tune for it. For example, let's say you are editing a very large 8K stream and you tell the app, "hey, load this video into RAM." You could put the part that is in the current edit window in the on-chip RAM and feed the rest of the video in from the second-level RAM as the editor moves forward. There are some interesting possibilities here.
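The edit-window placement policy described above can be sketched as a toy model. Everything here is made up for illustration (no such API exists): frames near the playhead are assigned to the scarce "fast" on-chip tier, the rest to the "slow" off-chip tier:

```python
# Toy two-tier placement for a video editor. The tier names, capacity,
# and frame counts are hypothetical illustration values only.

FAST_CAPACITY = 8      # frames that fit in the fast on-chip tier
TOTAL_FRAMES = 100     # frames in the clip

def plan_placement(playhead: int) -> dict[int, str]:
    """Assign each frame to a tier: the window starting at the
    playhead goes to fast RAM, everything else to slow RAM."""
    fast_window = range(playhead, min(playhead + FAST_CAPACITY, TOTAL_FRAMES))
    return {f: ("fast" if f in fast_window else "slow")
            for f in range(TOTAL_FRAMES)}

placement = plan_placement(playhead=10)
print(placement[10], placement[17], placement[18])  # fast fast slow
```

As the playhead advances, re-running the planner migrates the window forward, which is the "feed the rest of the video into that RAM as the editor moves" idea in miniature.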

Also, from the ASIC yield view it allows for some interesting binning of chips. Let's say the M2 has 32GB on chip plus an off-chip memory controller. They could use the chips that fully pass in the high end, then sell the ones where part of the memory fails test as 16GB parts in a laptop, etc. Part of keeping ASIC cost down is building the exact same thing and binning the chips into devices based on yield.


Unless you are doing some crazy synthesis or simulation, 32GB is plenty.

Maybe editing 4K (or in the future, 8K) video might need more?

My brother does a lot of complex thermal airflow simulation, and his workstation has 192GB of RAM, but that is an extreme use case.


An 8GB MacBook Air can easily handle 4K.

And it can handle 1-2 streams of 8K, but starts to lag at 4+.


The Amiga was never multi-core. It has Vampire accelerators to replace the 68K chips, and there were PowerPC upgrade cards.

Apple, in making the M1 chip, is using some of the Amiga's ideas circa 1985 that sped up the system by having the CPU, GPU, etc. share memory. Amiga is shattered across different companies, but if they hadn't gone out of business they would have made an M1-type chip for their Amiga brand.



