I disagree; consistency is the most important part of this. With Python, you know where you stand - performance comes from elsewhere. With Julia it might be internal, a C/Fortran library, or apparently other things now too.
I never called it a strength; I simply don't care how fast Python is, because I don't need it to be. Reasoning about what is going on under the hood is just easier in Python than in Julia - if something needs to be fast, it's a fast external library doing the work.
This is really a minor issue stemming from the relative maturity of the two languages - if Julia becomes more established, I would hope that usage of external libraries which don't offer a performance advantage (i.e. everything besides C and Fortran) eventually gets replaced with native packages, to preserve the sanity of the users.
One problem with relying on fast external libraries is that it makes compiler tricks like automatic differentiation (AD) basically impossible. It also restricts the kinds of APIs that make sense. A simple example is to compare scikit-learn to Julia. In scikit-learn, most clustering methods don't allow the user to specify a distance function, because doing so would require running Python inside a tight C loop, tanking performance. In MLJ, on the other hand, basically anything that requires a distance function will let you pass one in rather than assuming Euclidean distance. This is possible because a distance function written in Julia can still be fast, so supporting it doesn't slow down the whole program for people who only want Euclidean distances.
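To make the API tradeoff concrete, here is a toy Python sketch (the function names are hypothetical, not scikit-learn's actual API): accepting any callable is the flexible design, but in CPython every call back into the interpreter per pair is exactly the cost that forces libraries to hard-code the metric in C. In Julia, the user-supplied callable compiles to native code, so the flexible API and the fast path are the same path.

```python
import math

def pairwise_distances(points, metric=None):
    """Toy pairwise distance matrix (illustrative, not a real library API).

    With metric=None we use math.dist, a C-implemented Euclidean fast
    path. A custom callable works too, but is invoked once per pair -
    the "Python inside a tight loop" cost described above.
    """
    if metric is None:
        metric = math.dist  # C-level Euclidean fast path
    n = len(points)
    return [[metric(points[i], points[j]) for j in range(n)] for i in range(n)]

points = [(0.0, 0.0), (3.0, 4.0), (0.0, 1.0)]

# Default Euclidean distances:
d = pairwise_distances(points)

# Same API with a custom (Manhattan) metric - flexible, but every entry
# now pays for a Python-level function call:
manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
d_m = pairwise_distances(points, metric=manhattan)
```

In CPython both paths produce the right answer; the difference is that the custom-metric path cannot be made fast without rewriting the metric in C, which is why libraries like scikit-learn often simply don't expose it.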
So then that 'weakness' in python is not an issue to you personally, because you fluently drop down to a fast language anytime you need to, with no particular loss in productivity?
Keep in mind though, that this could be a hurdle to those who are less multilingual. So even though it's not a weakness to everyone, it is to many.
Lustre is not aligned at all with your requirements, so forget that one.
Ceph is much more complex than Gluster, but also more capable.
Honestly, unless you are dealing with hundreds of TB of storage (and therefore need multiple servers anyway), I expect the complexity any distributed filesystem adds will hurt uptime and stability more often than it provides extra resilience. Use a single box with ZFS if you can, and add Gluster on top only if it can't be avoided.
And even if you do need hundreds of TB of storage, in many cases it might be better to just shard over normal storage and fall back on your backups in case of failures. Current disks are big, so you don't need too many for, e.g., 200 TB.
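A minimal sketch of what "shard over normal storage" can mean (the server names are hypothetical): place each file on one plain box by hashing its path, so placement is a pure function any client can compute, with no metadata service and no distributed filesystem in the picture.

```python
import hashlib

# Hypothetical shard targets: three ordinary single-box servers.
SHARDS = ["store-a", "store-b", "store-c"]

def shard_for(path: str) -> str:
    """Deterministically map a file path to one storage box.

    Placement depends only on the path, so there is nothing stateful
    to keep consistent across servers. If one box dies, you restore
    just that box's files from backup instead of relying on live
    replication.
    """
    h = hashlib.sha256(path.encode()).digest()
    return SHARDS[int.from_bytes(h[:8], "big") % len(SHARDS)]
```

The obvious caveat: naive modulo placement remaps most files if you change the shard list, so if you expect to add boxes over time you'd want consistent hashing instead. But for a fixed handful of servers it is hard to beat for simplicity.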
That's true, but then it is surely better to cut out the middle-man and just not use fossil fuels for static generation in the first place.
The energy needs which are hard to meet with renewables (aviation, other large-scale transport) are the same places where CCS is non-viable due to the efficiency hit.
The best we can do is decarbonise as quickly as possible and live with the fallout of our failure to act thus far - unless a significant use for captured CO2 is identified, atmospheric capture technology will always struggle with commercial viability.
The best we can do is to do everything we can.
It might also be interesting to start burning biomass and capturing the CO2, which would be net carbon-negative.
Sandboxing is fine, but it sounds like Snaps/Flatpaks don't actually do it because that would be too hard - so what's the point?
I get that some packages do actually have sandboxing, but unless it is mandatory and enforced I feel like I'm better off avoiding the ecosystem entirely and dealing with app isolation myself, using containers or VMs.
Snap/Flatpak are not doing it because that's not the layer which "does it". They provide the framework which allows some sandboxing today and will allow better sandboxing tomorrow. It's up to the app distributors to support it or not, and we won't get full support immediately either.
It may be too hard today. But that's less "Flatpak is a security nightmare" and more "we're not using the features we have very well yet". I feel like some people expected a perfectly targeted profile for each app from day one, or else they'll declare sandboxing a failure. This stuff will take years.
On average, I've found Snaps to be better sandboxed. But there are plenty of things to dislike about Snap, e.g. not respecting the XDG base directory spec, a persistent daemon running as root, requiring sudo, no control over when it decides to autoupdate, the coarse-grained "connection" system, and more. A lot of obvious design mistakes that don't get fixed for one reason or another.
Well, they are also marketed as a way of isolating dependencies, which they do actually manage to some degree. So they could be weak security-wise without being completely pointless?
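For what it's worth, Flatpak already exposes knobs for both inspecting and tightening an app's sandbox from the CLI today (the app ID below is hypothetical), even if tight defaults are not yet mandatory:

```shell
# Show the permissions an app ships with (filesystem, D-Bus, devices, ...)
flatpak info --show-permissions org.example.App

# Tighten the sandbox for just this app: drop home-directory access and
# network access, without modifying the package itself.
flatpak override --user --nofilesystem=home --unshare=network org.example.App

# Undo any local overrides for the app.
flatpak override --user --reset org.example.App
```

Of course this is opt-in per user, which is arguably the point above: until restrictive defaults are enforced, the burden stays on whoever runs the app.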
UV and Superdome are custom hardware for huge NUMA boxes, so not a great comparison. ScaleMP is definitely valid though - a real shame it is stuck behind a license fee; it would be interesting to experiment with.
The repeated references to Bretton Woods/the petrodollar remind me of when Nigel Farage kept banging on about Habeas Corpus during the Brexit debates. To put it another way, this sounds like the lunatic ramblings of a homeless man hanging around a bus station.
What happens if a month from now, some major governments decide "the environmental impacts of bitcoin have become ludicrous, have yourselves an 80% transaction tax to cover the CO2 cost"?