Genuinely curious: do they not teach historic designs in EE curricula? The point would be to learn from them and improve on them, rather than reinvent something halfway, or worse. Also, older designs are much easier to study in full compared to the enormously complex logic inside a present-day x86 chip.
Personally, I would call it hardware-assisted GC, if we consider hardware acceleration to mean things like GPUs, crypto accelerators, etc.
An undergrad computer architecture course would typically gloss over the history of CPUs and focus either on MIPS or x86 (or both). For example, I did two courses on x86, starting from 8086 (and 8088), through Pentium, and some bits of Itanium.
You're right that starting with a simple architecture makes things much easier. The 8086, for example, operated in 16-bit real mode (segmented memory), so the memory layout was trivial compared to the 32-bit protected mode of later x86.
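To make the "trivial memory layout" point concrete, here's a minimal C sketch of real-mode address translation (physical = segment * 16 + offset), assuming the usual 20-bit wrap-around of the original 8086. Protected mode, by contrast, adds descriptor tables, privilege checks, and eventually paging on top of this.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* 8086 real-mode address translation: physical = segment * 16 + offset.
       The result is masked to 20 bits, since the 8086 had a 20-bit address bus
       (addresses simply wrapped; there was no A20 gate yet). */
    static uint32_t real_mode_phys(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + (uint32_t)offset) & 0xFFFFF;
    }

    int main(void)
    {
        /* Classic example: 0xB800:0x0000 is the start of text-mode video memory. */
        printf("0x%05" PRIX32 "\n", real_mode_phys(0xB800, 0x0000)); /* prints 0xB8000 */
        return 0;
    }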
I haven't taken any graduate architecture courses yet, but my assumption is that they would go into more detail on the historical development of CPUs.