Hacker News

N3E still has a +9% logic transistor density increase over N3 despite relaxed design rules, thanks in part to the introduction of FinFlex.[1] Critically, though, SRAM cell sizes remain the same as on N5 (reversing the ~5% reduction in N3), and it looks like the situation with SRAM cell sizes won't improve soon.[2][3] It appears more likely that designers, particularly of AI chips, will just stick with N5, since their designs are increasingly constrained by SRAM.

[1] https://semiwiki.com/semiconductor-manufacturers/tsmc/322688...

[2] https://semiengineering.com/sram-scaling-issues-and-what-com...

[3] https://semiengineering.com/sram-in-ai-the-future-of-memory/



SRAM scaling has really stalled. I don’t think 5nm was much better than 7nm. On ever-smaller nodes, SRAM will take up a larger and larger share of the entire chip, but the cost is much higher on the smaller nodes even if the performance is no better.
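The growing-share effect follows from simple arithmetic: if logic keeps shrinking each node while the SRAM bit cell stays the same size, SRAM's fraction of the die can only climb. A quick sketch with made-up numbers (the ~1.6x logic density gain per node and the starting areas are illustrative assumptions, not TSMC figures):

```python
# Hypothetical illustration: logic shrinks each node, SRAM does not,
# so SRAM's share of total die area grows generation over generation.
logic_area = 100.0   # arbitrary units of logic area at the starting node
sram_area = 50.0     # arbitrary units of SRAM area (assumed fixed)

for node in ["N7", "N5", "N3E"]:
    total = logic_area + sram_area
    share = 100.0 * sram_area / total
    print(f"{node}: SRAM is {share:.0f}% of die area")
    logic_area /= 1.6  # assume ~1.6x logic density gain per node

# SRAM's share climbs from ~33% toward ~56% even though the SRAM
# itself never grew -- the logic around it just kept shrinking.
```

With these toy numbers the SRAM share goes from about a third of the die to over half in two node jumps, which is the pressure behind moving cache off the leading-edge die entirely.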

I can see why AMD started putting the SRAM on top.


It wasn't immediately clear to me why SRAM wouldn't scale like logic. This article[1] and this paper[2] shed some light.

From what I can gather, the key aspects are that decreased feature sizes lead to more variability between transistors, but also to less margin between the on-state and the off-state: a kind of double whammy. Logic circuits are constantly overwritten with new values regardless of what was already there, so they're not as sensitive to this, while the entire point of a memory circuit is to reliably keep values around.

Alternative transistor designs such as FinFET and gate-all-around can mitigate some of this, say by reducing transistor-to-transistor variability by some factor, but they can't get around the root issue.
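The double whammy can be made concrete with a toy yield model (the margin and sigma values below are invented for illustration): treat each cell's effective noise margin as Gaussian and call a cell failed when variation exceeds the margin. Shrinking the margin while variability grows moves you from a ~6-sigma margin to a ~4-sigma one, and across a 100-million-cell array that is the difference between essentially zero bad cells and thousands.

```python
import math

def cell_fail_prob(margin_mv, sigma_mv):
    # One-sided Gaussian tail probability of variation exceeding the margin.
    return 0.5 * math.erfc(margin_mv / (sigma_mv * math.sqrt(2.0)))

cells = 100_000_000  # a 100 Mbit array, roughly 12 MB of cache

# Before/after a hypothetical shrink: less margin, more variability.
for margin, sigma in [(120, 20), (100, 25)]:
    p = cell_fail_prob(margin, sigma)
    print(f"margin={margin}mV sigma={sigma}mV -> "
          f"per-cell fail ~{p:.1e}, ~{cells * p:.0f} expected bad cells")
```

A logic gate that misbehaves for one cycle is overwritten on the next; an SRAM cell that flips has lost data, so the array has to hit these astronomically small per-cell failure rates, which is why the margin/sigma ratio (not just raw density) limits how far the bit cell can shrink.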

[1]: https://semiengineering.com/sram-scaling-issues-and-what-com...

[2]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9416021/




