I'm not completely sure, but I suspect Fedora will stick to the current baseline for quite some time.
But the baseline is quite minimal. It's biased towards efficient emulation of the instructions in portable C code. I'm not sure why anyone would target an enterprise distribution at that.
On the other hand, even RVA23 is quite poor at signed overflow checking. Like MIPS before it, RISC-V is a bet that we're going to write software in C-like languages for a long time.
> On the other hand, even RVA23 is quite poor at signed overflow checking.
On the other hand it avoids integer flags, which is nice. I doubt it makes a measurable performance impact either way on modern OoO CPUs. There's no data dependence on the extra instructions needed to calculate overflow except for the branch, which will be predicted not-taken, so the instructions after it will basically always run speculatively in parallel with the overflow-checking instructions.
It's nice for a C simulator to avoid condition codes. It's not so nice if you want consistent overflow checks (e.g., for automatically overflowing from fixnums to bignums).
Even with XNOR (which isn't even part of RVA23, if I recall correctly), the sequence for doing an overflow check is quite messy. On AArch64 and x86-64, it's just the operation followed by a conditional jump: https://godbolt.org/z/968Eb1dh1
Non-flag based overflow checks are still pretty cheap. The overflow check is only 1 extra instruction for unsigned (both add and multiply), and 3/4 extra for signed overflow (see https://godbolt.org/z/nq1nb5Whr for details). It's also worth noting that in many cases, the overflow checks will be removable or simplify-able by the compiler entirely (e.g. if you're adding 1 or know the sign of one of the operands etc). As such, the extra couple instructions are likely worthwhile if it makes designing a wider core easier. Signed overflow instructions would be reasonable to add, but it's not like modern high performance cores are bottlenecked by scalar instructions that don't touch memory anyway.
I'm not sure whether it's intentional. AWS doesn't list CPU features in their EC2 product documentation, either. That doesn't necessarily mean they can disable CPU features for instances covered by existing customer contracts.
This is the sort of comment that makes people lose faith in HN.
There totally are cases where it's intentional, and no they are not discussed on the internet for obvious reasons. People in the industry will absolutely know what I'm on about.
I didn't intend to dismiss your experience. From the opposite (software) side, these things are hard to document, and unclear hardware requirement documentation often results from the complexity and (perhaps) unresolved internal conflicts.
Is there an actual U.S. RISC-V CPU that achieves competitive performance? I think the performance leaders are currently based in China.
There's a difference between announcement, offering IP for licensing (so you still have to make your own CPUs), shipping CPUs, and having those CPUs in systems that can actually boot something.
For instance, SiFive is in the US, but the last time I checked, the RVA23 CPUs on their workstation boards did not have cache-line-sized vector registers (only 128 bits, i.e., SSE grade, I think). RVA23 mandates the same "sweet spot" cache line size as x86-64: 64 bytes / 512 bits.
There should be plenty of existing programming models that can be reused because HPC used single-system-image, multi-hop NUMA systems a lot before Beowulf clusters took over.
Even today, I think very large enterprise systems (where a single kernel runs on a single system that spans multiple racks) are built like this, too.
I think it's an interesting model. Somehow, the maintenance needs to be funded, and that is an ongoing effort. Charging for security updates is not ideal, but I'm not sure what the alternative would be.
It seems like it would be cheaper and more effective to just keep in sync with GrapheneOS rather than maintaining a custom fork.
I understand that maintenance still isn't free in that case, but it seems like they went out of their way to make more maintenance work for themselves, and then they asked their customers to pay for it. As a potential customer, I would've rather it just come with standard GOS rather than paying yearly for a fork that probably isn't as secure.
Also, what if it's mandatory? I would say it's desirable to prevent the situation in which users simply choose to run zombie devices because security is more expensive, but making updates free or making them mandatory and paid would both work for that.
How would they make it mandatory, though? The only way I can think of making it mandatory would be if the phone bricks itself when the subscription ends. Or if you just lease the phone and the lease includes updates.
It seems like the best approach would be to just include the cost of updates in the price of the phone, which I guess is what every other phone maker does.
It's not dynamic linking, despite excellent support for very late binding in historic Java versions. (Newer versions require specific launcher configurations to use certain platform features, which breaks late loading of classes that use those features.)
Bundling the JRE typically results in something that is not redistributable under the default OpenJDK license: the Java ecosystem is heavily tilted towards the Apache license, but Hotspot is licensed under the GPL v2 only (no Classpath exception). The Apache license and older GPL versions (before 3) are generally assumed to be incompatible.
Every modern OpenJDK build is licensed as GPLv2 + Classpath exception. That exception includes Hotspot, since it's part of the JVM, and it allows shipping the JVM with your app or linking to it. Otherwise a bunch of enterprise software couldn't exist.
> I am also deeply concerned about the “speculative” data center market. The “build it and they will come” strategy is a trap. If you are a hyperscaler, you will own your own data centers.
Is this actually true? I thought that hyperscalers keep datacenters at arm's length, using subsidiaries and outsourcing a lot of things.
Hyperscalers use various subsidiaries and shell companies to dodge taxes and keep the debt off their balance sheets so they can keep their AAA ratings, but ultimately the resulting datacenters are still entirely owned and operated in-house. Hyperscalers do not and will not use any of these “DC as a service” startups, meaning those startups have to find customers elsewhere; the big question mark is whether enough of those customers exist.