That said, RISC-V is good for embedded applications where raw performance isn't a factor. I don't think other markets will be accessible to RISC-V chips until their performance massively improves.
There is a chip out there that contains both an ARM and a RISC-V core, the RP2350. It's reasonable to assume that the ARM part and RISC-V part are manufactured in the same process. There are some benchmarks pitting the two against each other on e.g. this page: https://forums.raspberrypi.com/viewtopic.php?t=375268
For a generic logic workload like Fibo(24), the performance is essentially the same (quote from above page):
Average Runtime = 0.020015 Pico 2
Average Runtime = 0.019015 Pico 2 RiscV
Note that neither core on the RP2350 comes with advanced features like SIMD.
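For context, "Fibo(24)" in benchmarks like this is usually the naive doubly recursive Fibonacci: a branch- and call-heavy integer workload that exercises neither SIMD nor memory bandwidth. A minimal sketch (the exact source used in the forum benchmark isn't shown, so this is an assumption about what was run):

```python
# Naive doubly recursive Fibonacci -- assumed shape of the "Fibo(24)"
# workload. Compiled at -O2 this reduces to pure call/branch/ALU work,
# which makes it a reasonable apples-to-apples scalar integer test.
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    print(fib(24))  # 46368
```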
I wager that statement would be turned on its head if we restricted the comparison to chips of similar transistor density. Fast ARM chips do exist: ARMv8 designs fabbed on TSMC 5nm, some with noncompliant SIMD implementations. If there were RISC-V chips in the same vein as Ampere's or Nvidia's Grace CPU, I don't see any reason why they couldn't be more competitive than an ARM chip that's forced to adhere strictly to the ISA spec.
RISC-V hedged its bets by creating different design specs for smaller edge applications and larger multicore configurations. Right now ARM is going through a rift where the last vestiges of ARMv6/7 support are holding out for the few remaining 32-bit customers, while all the progress is happening on the bloating ARMv9 spec that benefits nobody but Apple and Nvidia. For all of ARM's success in the embedded world, it seems they're at an impasse between low-power and high-power solutions. RISC-V could do both better at the same time.
Yes? Because nobody has released a RISC-V MPU comparable to what you perceive as "modern" arm64 MPUs.
RISC-V is simply an ISA, not a core. The ISA constrains some of the core architecture, but the rest is implementor-specific. High-end cores will take time to reach market. Companies with big guns like Qualcomm could most likely pump one out if they wanted to, and will most likely do so in the future, since they're pumping over $1 billion into the effort.
How you design a core is very different based on if you're targeting ultra-low-power tiny microcontroller designs vs high performance and high power laptop/desktop-tier designs.
And it's not been proven that RISC-V is a good match for the second group (yet).
Remember that it's sometimes very non-obvious which quirks of an ISA will be difficult until you actually try to implement it. One of the reasons ARMv8 was pretty much a "clean sheet" rewrite is that things like the condition codes turned out to be difficult to manage in wide superscalar designs with speculative execution -- which is exactly the sort of thing required to meet "laptop-tier" performance requirements.
It may be they've avoided all those pitfalls, but we don't really know until it's been done.
We're not quite there yet. A bunch of mission-critical stuff like SIMD was only added in the last 2-3 years. As it takes 4-5 years to design and ship a high-performance chip, we still have a ways to go.
Qualcomm pitched making a bunch of changes to RISC-V that would move it closer to ARM64 and make porting easier, so I think it's an understatement to say that they are considering the idea. If ISA doesn't matter, why pay tons of money for the ISA?
There were two competing SIMD specs, and personally I'm glad that RVV won out over PSIMD. It's an easier programmer's view and fewer instructions to implement.
RVV was not going to lose this one. RVV's roots run deeper than RISC-V's.
RISC-V was created because UC Berkeley's vector processor needed a scalar ISA, and the incumbents were not suitable. Then, it uncovered pre-existing interest in open ISAs, as companies started showing up with the desire for a frozen spec.
Legend has it that MIPS quoted UC Berkeley some silly amount. We can thank them for RISC-V. Ironically, they ended up embracing RISC-V as well.
I think RISC-V chips in the wild do not do things like pipelining, out-of-order execution, register renaming, multiple int/float/logic units, speculation, branch prediction, or smart caching.
I think all existing RISC-V chips in the wild right now are just simplistic in-order processors.
Back in 2016, BOOMv1 (the Berkeley Out-of-Order Machine) had pipelining, register renaming, 3-wide dispatch, a branch predictor, caches, etc. A quick Google search suggests it was started in 2011 and had taped out 11 times by 2016 (with actual production apparently done on IBM's 45nm process).
Almost all in-order processors will do pipelining, so that's there. Many are even multi-issue. Andes has an out-of-order core [1] and so does SiFive (though I don't know of many actual chips using these).
You're confusing the ISA with the chip. Current RISC-V chips are slower than high performance ARM ones, but that's because you don't start by designing high performance chips! You start with small embedded cores and work your way up.
Exactly the same thing happened with ARM. It started in embedded, then phones, and finally laptops and servers. ARM was never slow, they just hadn't worked up to highly complex high performance designs yet.
you don't start by designing high performance chips! You start with small embedded cores and work your way up.
I disagree. For example, the first PowerPC was pretty fast and went into flagship products immediately. Itanium also went directly for the high-end market (it failed for unrelated reasons). RISC-V would be much better off if some beastly chips, like the ones Rivos is building, had been released early on.
The high end requires specifications that were not available until RVA22 and Vector 1.0 were ratified. The first chips implementing these are starting to show up, as seen in e.g. the MILK-V Jupiter, one of the newest development boards on the market.
With the ISA developed in the open, the base specs that microcontrollers can target would naturally tend to be ratified first, and thus microcontrollers showed up first. RVA22+V were ratified in November 2021.
With the ISA developed inside working groups involving several parties, some slowness is unavoidable, as they all need to agree on how to move forward. Hence the years-long gap between the ratification of the privileged and unprivileged specs (2019) and RVA22+V.
RVA23 has just been ratified. This spec is on par with x86-64 and ARMv9, feature-wise. Yet hardware using it will of course in turn take years to appear as well.
Isn't the problem the lack of advanced features for executing the current ISA with speed? I thought RISC-V chips seen in the wild do not do pipelining, out-of-order execution, register renaming, multiple int/float/logic units, speculation, branch prediction, multi-tier caching, etc. The lack of speed isn't really related to a few missing vector instructions.
The question is more whether your customers will agree to go along with this major architectural shift, one that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.
Also we should not pretend that ARM is just going to sit there waiting for RISC-V to catch up.
> The question is more whether your customers will agree to go along with this major architectural shift, one that sets you back on price-performance-power curves by at least five years and moves you out of the mainstream of board support packages, drivers, and everything else software-wise for phones.
Embedded is moving to RISC-V where they have low performance needs.
One example is the Espressif line of CPUs - which have shipped over 1B units. They have moved most of their offerings to RISC-V over the last few years and they are very well supported by dev tools: https://www.espressif.com/en/products/socs
It certainly is easy to casually spread fear and doubt.
But it is really far-fetched to think that the people at Tenstorrent, who have successfully delivered very high performance microarchitectures in other companies before, are lying about Ascalon, and that LG is helping them do that.
It would be even more far-fetched to claim that Ventana's Veyron V2, SiFive's P870, and Akeana's 5000 series, all of them available high-performance IP, are lying about their performance.
Well, you need several years to catch up, and those doing ARM are not standing still. It's the same problem big software rewrites have: some are successful, but it takes a large investment while everyone is still using the old stuff, which is better for now.
Just something I, as a random person, have been thinking: how likely is it that the next version of Windows is _not_ going to be something Linux-based with WINE+Bochs preinstalled?
Windows branding is now forever tied to x86/x64 Win32 legacy compatibility, while WSL has captured back a lot of webdevs from the Mac. Google continues to push Chrome, but Electron continues to grow side by side. Lots of stuff is happening with AI on Linux too, with Windows and Mac remaining consumer deployment targets. Phone CPUs are fast enough to run some games under WINE+Bochs.
At this point, would it not make sense for MS to make its own ChromeOS and bolt-on an "LSW"?
Whether Microsoft has windows running on an architecture is a very different level from whether it’s feasible to use it as a daily driver on windows. The ecosystem is what matters for most people.
Windows for ARM dates back to 2011. They're only just now getting native ARM ports of several major packages. That's ~13 years for a well-established architecture that's used much more universally than RISC-V. They don't even have ARM ports of lots of software that has ARM ports on macOS.
RISC-V will take an aeon longer to get a respectable amount of the windows ecosystem ported over.
Arm on windows may date to 2011, but it was mostly a side project with 1-2 maintainers. With sufficient investment, it shouldn’t take 13 years to build up RISC-V support.
Like everything else, it doesn’t matter much. Windows ran on Itanium, Alpha, and as pointed out ARM for over a decade.
Without the ISVs, it’s a flop for consumers.
MS has had an abysmal time getting them to join in on ARM, only starting to have a little success now. Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster. That’s the kind of rug pull that helped kill Windows Mobile.
Emulators aren’t good enough. They’re a stop gap. Unless the new chip is so much better than the old it’s faster with emulation then the old one was native no one will accept it long. Apple’s been there, but that’s not where MS sits today.
And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.
MS can’t drop their ARM push unless they want to drop all non-x86 initiatives for a long time.
>And if your emulator is too good, what stops ISVs from saying “you did it for us, we don’t have to care”? So once again they don’t have to do it at all and you have no native software.
x86 emulation enables adoption.
Adoption means having a user base.
Having a user base means developers will consider making the platform a target.
>Saying “Ha ha, just kidding, it’s RISC-V now” would be a disaster.
Would it now? If anything, offering RISC-V support as well would further reinforce the idea that Windows is ISA-independent, and not tied to x86 anymore.
Switching CPU architecture is not about changing a compilation option; it's about eliminating decades-old assembly code, binaries, and third-party components, and re-engineering everything to be self-hosted on-prem at the company. Commercial software companies are reckless, lazy, and unbelievably inept, so lots of them won't be able to do this, especially for a second time.
In case this translation was needed at all: the point is not a "-riscv" compilation option.
You sure? Microsoft dropped Alpha, MIPS, and PowerPC by the time Windows 2000 rolled around. Beyond that point, only the Xbox 360 and Itanium versions had anything different from the usual x86/x64 offering.
Fair, though I don’t think translation is a good long term strategy. You need native apps otherwise you’re always dealing with a ~20-30% disadvantage.
The competition isn’t sitting still either and QC already hit this with Intel stealing their thunder with Lunar Lake. They’re efficient enough that the difference in efficiency is far overshadowed by their compatibility story.
Ecosystem support will always go to the incumbent and this would place RISC-V third behind x86 and ARM. macOS did this right by saying there’s only one true way forward. It forces adoption.
I think you just ignored the rest of my comment though which specifically addresses why I don’t think just relying on translation is an effective strategy. Users aren’t going to switch to a platform that has lower compatibility when the incumbent has almost as good efficiency and performance.
>when the incumbent has almost as good efficiency and performance.
The incumbent is the duopoly of Intel and AMD, the only two companies that can make x86 hardware.
The alternative is the rest of the industry.
Thus having a migration path should be plenty on its own.
Intel and AMD can both join by making RISC-V or ARM hardware themselves. My take is that they too will eventually come around. Or they'll just fade from relevance.
The incumbent is not just x86 but now ARM as well.
You have to think in network effects. You mention "the rest of the industry" yet ignore that it's mostly ARM, which would make ARM the incumbent.
x86 is the king for windows. But ARM has massive inroads with mobile, and now desktop with macOS, and servers with Amazon/Nvidia etc
There’s a lot better incentive for software developers to support ARM than RISC-V. It isn’t one or the other, but it is a question of resources.
Intel and AMD seem fine turning x86 around when threatened, as can be seen with Lunar Lake and Strix Point. Both have been good enough to steal QC’s thunder. You don’t think ARM manufacturers will do the same to RISC-V?
TBH most of your arguments for RISC-V adoption seem to start from the position that it’s inevitable AND that competing platforms won’t also improve.
I think it's already a great thing for RISC-V. Even if things somehow go well for Qualcomm, do you really think they wouldn't prepare a plan B, given that ARM tried to push them out of the market?
I don’t think they have a plan B. Architectures take half a decade of work. Porting from ARM to RISC-V is not a matter of a backup plan; it’s a very costly pivot.
This time last year they were all over the RISC-V mailing lists, trying to convince everyone to drop the "C" extension from RVA23 because (basically confirmed by their employees) it was not easy to retrofit mildly variable length RISC-V instructions (2 bytes and 4 bytes) to the Aarch64 core they acquired from Nuvia.
At the same time, Qualcomm proposed a new RISC-V extension that was pretty much ARMv8-lite.
The proposed extension was actually not bad, and could very reasonably be adopted.
Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question. RISC-V will eventually need a deprecation policy and procedure -- and the "C" extension could potentially be replaced by something else -- but you wouldn't find anyone who thinks the deprecated-but-supported period should be less than 10 years.
So they'd have to support both "C" and its replacement anyway.
Qualcomm tried to make a case that decoding two instruction widths is too hard to do in a very wide (e.g. 8) instruction decoder. Everyone else working on designs in that space ... SiFive, Rivos, Ventana, Tenstorrent ... said "nah, it didn't cause us any problems". Qualcomm jumped on a "we're listening, tell us more" from Rivos as being support for dropping "C" .. and were very firmly corrected on that.
> Dropping "C" overnight and thus making all existing Linux software incompatible is completely out of the question.
For general purpose Linux, I agree. But if someone makes Android devices and maintains that for RISC-V… that's basically a closed, malleable ecosystem where you can just say "f it, set this compiler option everywhere".
But also, yes, another commenter pointed out C brings some power savings, which you'd presumably want on your Android device…
Qualcomm can do whatever they want with CPUs for Android. I don't care. They only have to convince Google.
But what they wanted to do was strip the "C" extension out of the RVA23 profile, which is (will be) used for Linux too, as a compatible successor to RVA22 and RVA20, both of which include the "C" extension.
If Qualcomm wants to sponsor a different, new, profile series ... RVQ23, say ... for Android then I don't have a problem with that. Or they can just go ahead and do it themselves, without RISC-V International involvement.
This is officially too much quibbling. Even if we settled philosophical questions like "Is Android Linux?" and "If not, would dropping C make RISC-V nonviable?", there isn't actually an Android version that'll run on RISC-V anywhere near the horizon. Support for it went in _reverse_: it got pulled 5 months ago.
If you trust PR (I don't, and I worked on Android for 7 years until a year ago) -- and this is a nitpick 5 levels down -- regardless of how you weigh it, there is no Android RISC-V.
There is no Android RISC-V. There isn't an Android available to run on RISC-V chips. There is no code to run on RISC-V in the Android source tree, it was all recently actively removed.[1]
Despite your personal feelings about their motivation, these sites were factually correct in relaying what happened to the code; they went out of their way to quote exactly what Google said and respected Google's claim that it remains committed, with zero qualms.
I find it extremely discomfiting that you are so focused on how the news makes you feel that you're casting aspersions on the people you heard the news from, and ignoring what I'm saying on a completely different matter because you're keyword-matching.
I'm even more discomfited that you're being this obstinate about the completely off-topic need for us all to respect Google's strong off-topic statement of support[2] over the fact they removed all the code for it
[1] "Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches."
[2] "Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI)."
I don't know why people keep replying as if I'm saying Android isn't going to do RISC-V.
I especially don't understand offering code that predates the removal from tree and hasn't been touched since. Or, a mailing list, where we click on the second link and see a Google employee saying on October 10th "there isn't an Android riscv64 ABI yet either, so it would be hard to have [verify Android runs properly on RISC-V] before an ABI :-)"
That's straight from the horse's mouth. There's no ABI for RISC-V. Unless you've discovered something truly novel that you left out, you're not compiling C that'll run on RISC-V if it makes any system calls.
I assume there's some psychology thing going on where my 110% correct claim that it doesn't run on RISC-V today gets transmuted into "lol, RISC-V doesn't matter and Android has zero plans".
I thoroughly believe Android will fully support RISC-V sooner rather than later.
It's certainly more than just disabling a build type - it's actually removing a decent bit of configuration options and even TODO comments. Then again, it's not actually removing anything particularly significant, and even has a comment of "BTW, this has nothing to do with kernel build, but only related to CC rules. Do we still want to delete this?". Presumably easy to revert later, and might even just be a revert itself.
They can roll their sleeves up and do the small amount of work that they tried to persuade everyone else was not necessary. And I'm sure they will have done so.
It's not that hard to design a wide decoder that can decode mixed 2-byte and 4-byte instructions from a buffer of 32 or 64 bytes in a clock cycle. I've come up with the basic schema for it and written about it here and on Reddit a number of times. Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.
Not that anyone is going that wide at the moment. SiFive's P870 fetched 36 bytes/cycle from L1 icache, but decodes a maximum of 6 instructions from it. Ventana's Veyron v2 decodes 16 bytes per clock cycle into 4-8 instructions (average about 6 on random code).
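For readers curious what such a scheme looks like, here is a serial reference model (my own illustration of the general idea, not any shipping design). In hardware, the per-offset length check would run at every 2-byte offset in parallel, with a fast selection network picking out the true instruction starts; the loop below models that selection step serially:

```python
def rvc_len(parcel: int) -> int:
    # RISC-V length rule: low two bits != 0b11 -> 2-byte (compressed)
    # instruction; == 0b11 -> 4-byte instruction.
    return 2 if parcel & 0b11 != 0b11 else 4

def decode_boundaries(buf: bytes) -> list[int]:
    # Speculatively compute a length at every 2-byte offset (parallel
    # in hardware), then walk the chain to find real instruction starts.
    lengths = [rvc_len(int.from_bytes(buf[i:i + 2], "little"))
               for i in range(0, len(buf), 2)]
    starts, i = [], 0
    while i < len(lengths):
        starts.append(i * 2)
        i += lengths[i] // 2
    return starts

# c.nop (0x0001, compressed) followed by nop (0x00000013, full width):
print(decode_boundaries(bytes([0x01, 0x00, 0x13, 0x00, 0x00, 0x00])))  # [0, 2]
```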
> Yeah, it's a little harder than for pure fixed-width Arm64, but it is massively massively easier than for amd64.
For those who haven't read the details of the RISC-V ISA: the first two bits of every instruction tell the decoder whether it's a 16-bit or a 32-bit instruction. It's always in that same fixed place, there's no need to look at any other bit in the instruction. Decoding the length of a x86-64 instruction is much more complicated.
The upshot is that there are 48k combinations available for 2-byte instructions and 1 billion for 4-byte (or longer) instructions. Using just 1 bit to choose would instead mean 32k 2-byte instructions and 2 billion 4-byte instructions.
Note that ARMv7 uses a similar scheme with two instruction lengths, but uses the first 4 bits of each 2-byte parcel to determine the instruction length. It's quite complex, but the end result is that 7/8 (56k) of the 2-byte encodings are possible and 1/8 (512 million) of the 4-byte encodings.
The IBM 360 in 1964 through today's Z-System also uses a 2-bit scheme: 00 means 2 bytes (16k instructions available), 01 or 10 means 4 bytes (2 billion available), and 11 means 6 bytes (64 tera available).
To increase the number of 16-bit instructions. Of the four possible combinations of these two bits, one indicates a 32-bit or longer instruction, while the other three are used for 16-bit instructions.
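The encoding-space arithmetic behind those numbers, as a quick check:

```python
# With a 2-bit length prefix, 3 of the 4 prefix values mark 2-byte
# instructions and 1 marks 4-byte (or longer) ones:
two_byte_encodings  = 3 * 2**14   # 49152, i.e. ~48k
four_byte_encodings = 1 * 2**30   # 1073741824, i.e. ~1 billion

# A 1-bit prefix would split the space evenly instead:
alt_two_byte  = 2**15             # 32768, i.e. 32k
alt_four_byte = 2**31             # ~2 billion

print(two_byte_encodings, four_byte_encodings)
```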
> Do they plan to support other instruction lengths in the future?
They do. Of the eight possible combinations for the next three bits after these two, one of them indicates that the instruction is longer than 32 bits. But processors which do not know any instruction longer than 32 bits do not need to care about that; these longer instructions can be naturally treated as if they were an unknown 32-bit instruction.
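Putting the two rules together, a minimal length decoder (a sketch based on the encoding scheme described above) only needs the low bits of the first 16-bit parcel:

```python
from typing import Optional

def insn_length(first_parcel: int) -> Optional[int]:
    """Instruction length in bytes from the first 16-bit parcel, per
    the RISC-V base encoding scheme: bits [1:0] != 0b11 means a 16-bit
    instruction; otherwise bits [4:2] == 0b111 marks an instruction
    longer than 32 bits."""
    if first_parcel & 0b11 != 0b11:
        return 2
    if (first_parcel >> 2) & 0b111 != 0b111:
        return 4
    # Longer than 32 bits. A 32-bit-only decoder can simply treat this
    # as an unknown 32-bit instruction, as noted above.
    return None
```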
Qualcomm has been working on RISC-V for a while, at outwardly-small scale. It's probably intended as a long-term alternative rather than a ready-to-go plan B. From a year ago: "The most exciting part for us at Qualcomm Technologies is the ability to start with an open instruction set. We have the internal capabilities to create our own cores — we have a best-in-class custom central processing unit (CPU) team, and with RISC-V, we can develop, customize and scale easily." -- https://www.qualcomm.com/news/onq/2023/09/what-is-risc-v-and..., more: https://duckduckgo.com/?q=qualcomm+risc-v&t=fpas&ia=web
Qualcomm pitched a Znew extension for RISC-V that basically removes compressed (16-bit) instructions and adds more ARM64-like stuff. It felt very much like trying to make an easier plan B for if/when they need/want to transition from ARM to RISC-V.
It’s a bit much to say their primary product that they’ve done for decades is a plan B. By definition it cannot be a plan B if it’s executed first and is successful.
I think a lot of RISC-V advocates are perhaps a little overeager in their perception of the landscape.
No kidding. And while RISC-V is a massive improvement, I hate to be the wet blanket, but RISC-V will not change signed boot, bootloader restrictions, messy or closed-source drivers, carrier requirements, DRM implementations, or other painful day-to-day paper cuts. An open-source architecture != open-source software, and certainly != open-source hardware, no matter what the YouTubers think.