I've been hearing about this for a while, and many of the entities holding the bag are publicly traded. Given there are plenty of mercenary firms out there able and willing to make a buck off a catastrophe, is there any reason to believe this isn't already priced in?
Wading out of my depth here, so forgive any stupidity that follows.
And there's a certain amount of sense to that: it has to get "under" the layer that viruses can typically reach. But I still think there should be another layer at which the OS is protected from misbehaving anti-virus software (which has been known to happen).
You're talking about how things are; the comment you're replying to is talking about how things could be. There's no contradiction there.
Originally, x86 processors had 4 levels of hardware protection, from ring 0 up to ring 3 (if I remember right). The idea was indeed that non-OS drivers could operate at the intermediate levels. But no one used them and they're effectively abandoned now. (There's "level -1" now for hypervisors and maybe other stuff, but that's beside the point.)
Whether those x86 rings were really suitable or not is not exactly important. The point is, it's possible to imagine a world where device drivers could have less than 100% permissions.
The problem I have with this is that anti-virus software has never felt like the most reliable, well-written, trustworthy software that's deserving of its place in Ring 0.
I understand I'm yelling into the storm here, because anti-virus also requires that level of system access due to the nature of what it's trying to detect. But then again, does it only need Ring 0 access for the worst of the worst? Can it run 99% of the time in Ring 1, or user space, and only instantiate its Ring 0 privileges for regular but infrequent scans, or if it detects something else may be 'off'?
Default Ring 0? Earn it.
This turns into a "what's your threat level" discussion.
I would assume the Fed thinks it's not actually an ideal time to raise rates.
Real growth is in the dirt, they have a dual mandate, and the funds rate is an indiscriminate weapon. Given the circumstances the ideal intervention would be fiscal policy aimed at reducing the supply of money in less CoL-sensitive areas but the people in charge of _that_ are uh, somewhat more susceptible to political pressure.
> The problem with Flutter, and this extends even further with KMP is that the scope of your skills is limited to making mobile apps whereas with React Native you are learning and using skills that can apply to a complete stack.
I mean, Kotlin on the backend is quite good in my experience, I would say you can definitely run a full stack off of it.
The businesses who care about taking money from Europeans care. I worked at an American healthtech company and we weren’t GDPR-compliant because 1) we weren’t targeting Europeans, and 2) GDPR and HIPAA are incompatible so we picked the relevant one.
Since my server doesn’t do business in EU, I couldn’t care less about GDPR or other local laws, even the ones I think are good ideas.
American law doesn’t apply to someone running a server in Brussels. The converse is also true.
Which rules it out almost entirely for HIPAA covered entities. Quick example: right to be forgotten vs record retention laws. A European who receives healthcare in the US can’t demand that the provider delete their medical record afterward because HIPAA says they must retain it.
> Quick example: right to be forgotten vs record retention laws.
Record retention laws win, as explicitly stated in the GDPR.
Same reason a murderer can't (successfully) issue a right-to-be-forgotten request to the cops investigating them.
(There's also "processing is necessary for the purposes of the legitimate interests pursued by the controller" as another exception, which allows, for example, your bank to retain the fact that you owe them $100k on your house still, even if you don't want them to.)
Record retention laws are not the only exception. E.g. you can exercise your Hausverbot (banning someone from your premises) only if the person you refuse to serve cannot demand that you forget them. This position has already been confirmed by a German regulator at least once.
And they couldn't demand that the provider deletes it in EU either, because maintaining medical records is a legal requirement, which overrides the right to be forgotten.
But it does require you to document that requirement and make sure that the data isn't shared beyond that requirement without consent.
HIPAA and GDPR aren't conflicting, they're orthogonal and cover different things.
The right to be forgotten has an explicit exception for circumstances where there's a legal obligation on retention, although it references Union and Member State law and not other international entities.
https://gdpr-info.eu/art-17-gdpr/
A European who receives healthcare in the EU can't demand that the provider delete their medical record if the provider has a legally allowed reason to keep the record.
This is a fundamental aspect of the GDPR and part of the central message of the regulation. Companies and organizations are only allowed to keep personal information if they have a legally allowed reason to do so, and must honor requests for deletion unless they have a legal reason not to.
What is and isn't a legitimate reason depends on circumstance. What companies generally object to with the GDPR is that generating revenue through personalized advertising is not a legitimate reason to keep personal data.
Reading about Erlang always feels like getting messages from an alternate dimension where we as an industry made much better choices in the 90s about how we write distributed software.
this. Erlang's concurrency support is one of those things you can't unsee. Going back to sequential-by-design languages (which is pretty much every other industrial quality language bar go[1]) just feels cumbersome:
C/C++/C#/Python/...: "You want concurrency? Sure. We have OS processes, and threads, and this cool new async doohickey. Pick whatever you fancy! Oh, but by the way: you can't use very many processes cos they're _really_ heavyweight. You can have lots more threads, but not too many, and beware corrupting the shared state. Async though, you can have _loads_ of things going on at once. Just, y'know, don't mix the colours up".
With Erlang/Elixir it's just:
"You want concurrency? Sure, here's Erlang processes. You can have millions of them. Oh, you need to communicate between them? Yep, no probs, messages and mailboxes. What's that? Error handling? Yep, got that covered too - meet the Supervisors"
--
[1] Counting Elixir as "Erlang" in this context given it also sits on the BEAM VM.
I guess the way I feel about using Rust is kind of the opposite of how I feel about using Go. Go has such plain and straightforward semantics that it's very easy to make simple, straightforward packages with it. It is actually even simpler than Python. In C++, it's a pain to get there, but those simple abstractions are still at least _possible_. In Rust, the nuances of the semantics are not only convoluted, but also very hard to encapsulate and abstract over. It's not so much that the abstractions leak as that the nuances of the semantics are inextricably and necessarily _part of the package API_.
It could probably be argued that this is simply acknowledging reality, and that C++ lets you get away with murder if you want to. But maybe that’s what I want.
I think a good feature for C++ would be a “safe” (or “verified”?) block or method decorator that could enforce some set of restrictions on semantics, e.g. single mutable borrow. The problem is that those functionalities are part of the compiler APIs and not standardized.
The Rust compiler is also not standardized, but they can get away with it.
I'm not really sure what this proves, since there aren't really good reasons for spawning 1 million processes that do nothing except sleeping. A more convincing demonstration would be spawning 1 million state machines that each maintain their own state and process messages or otherwise do useful work. But examples of that on the BEAM have been around for years.
So, in the interest of matching this code I wrote an example of spawning 1_000_000 processes that each wait for 3 seconds and then exit.
This is Elixir, but this is trivial to do on the BEAM and could easily be done in Erlang as well:
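Roughly, process_demo.exs looks like this (a minimal sketch of the idea; spawn_monitor is just one of several ways to wait for the processes to exit):

# process_demo.exs
[count_arg | _] = System.argv()
count = String.to_integer(count_arg)

IO.puts("spawning #{count} processes")

# Spawn `count` processes that each sleep for 3 seconds,
# then wait for every one of them to exit.
1..count
|> Enum.map(fn _ -> spawn_monitor(fn -> Process.sleep(3_000) end) end)
|> Enum.each(fn {_pid, ref} ->
  receive do
    {:DOWN, ^ref, :process, _, _} -> :ok
  end
end)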
The default process limit is 262,000-ish for historical reasons but it is easy to override when running the script:
» time elixir --erl "+P 1000001" process_demo.exs 1000000
spawning 1000000 processes
________________________________________________________
Executed in 6.85 secs fish external
usr time 11.79 secs 60.00 micros 11.79 secs
sys time 15.81 secs 714.00 micros 15.81 secs
I tried to get dotnet set up on my Mac to run the code in your example to provide a timing comparison, but it has been a few years since I wrote C# professionally and I wasn't able to quickly finish the required boilerplate setup to run it.
Ultimately, although imo the BEAM performs quite well here, I think these kinds of showy-but-simple tests miss the advantages of what OTP provides: unparalleled introspection abilities on a running system in production. Unfortunately, it is more difficult to demonstrate the runtime tools in a small code example.
The argument regarding representativeness is fair. But I think it is just as important for the basics to be fast, as they represent a constant overhead most other code makes use of. There are edge cases where unconsumed results get optimized away and other issues that make the results impossible to interpret, and these must be accounted for, but there is also a risk of just reducing the discussion to "No true Scotsman" which is not helpful in pursuit of "how do we write fast concurrent code without unnecessary complexity".
I have adjusted the example to match yours and be more expensive on .NET - the previous one spawned 1 million tasks waiting on the same asynchronous timer captured by a closure, each with its own state machine, but nonetheless as cheap as it gets - spawning an asynchronously yielding C# task still costs 96B[0] even if we count the state machine box allocation (closer to 112B in this case iirc).
To match your snippet, this now spawns 1M tasks that wait on their respective 1M asynchronous timers, approximately tripling the allocation traffic.
var count = int.Parse(args[0]);
Console.WriteLine($"spawning {count} tasks");
var tasks = Enumerable
.Range(0, count)
.Select(async _ => await Task.Delay(3_000));
await Task.WhenAll(tasks);
In order to run this, you only need an SDK from https://dot.net/download. You can also get it from Homebrew with `brew install dotnet-sdk`, but I do not recommend daily driving this type of installation, as Homebrew's use of a separate path sometimes conflicts with other tooling and breaks the SDK pack discovery of .NET's build system should you install another SDK in a different location.
After that, the setup process is just
mkdir CSTasks && cd CSTasks
dotnet new console --aot
echo '{snippet above}' > Program.cs
dotnet publish -o .
time ./CSTasks
Note: the use of AOT here is to avoid it spamming files, as the default publish mode is "separate file per assembly + host-provided runtime", which is not as nice to use (a historical default). Otherwise, the impact on the code execution time is minimal. Keep in mind that on the first AOT compilation it will have to pull the IL AOT compiler from the NuGet feed.
Once done, you can just nuke the `/usr/local/share/dotnet` folder if you don't wish to keep the SDK.
Either way, thank you for putting together your comment - Elixir does seem like a REPL-friendly language[1] in many ways similar to F#. It would be impolite for me to not give it a try as you are willing to do the same for .NET.
[1]: there exist dotnet fsi as well as dotnet-script, which allow using F# and C# for shell scripts in a similar way, but I found the startup latency of the latter underwhelming even with the cached compilation it does. It's okay, but not the sub-100ms or sub-20ms you get with properly compiled JIT and AOT executables.
Tasks are not processes - and making them so would be the wrong thing to do, as would "isolated heaps", given the performance requirements faced by .NET: you do want to share memory through concurrent data structures (which e.g. channels are, despite what Go apologists say), and easily await them when you want to.
CSP, while nice on paper, has the same issues as e.g. partitioning in Kafka, just at a much lower level where it becomes a critical bottleneck - you can't trivially "fork" and "join" the flows of execution, which a well-implemented async model enables.
It's not "what about x" but rather how you end up applying the concurrent model in practice, and C# tasks allow you to idiomatically mix in concurrency and/or parallelism in otherwise regular code (as you can see in the example).
I'm just clarifying, in response to the parent comment, that concurrency in .NET is not like in Java/C++/Python (even if the latter does share similarities, there are constraints of Python itself).
> and that would be a wrong thing to do, and so would be "isolated heaps" - you do want to share memory through concurrent data structures (which e.g. channels are despite what go apologists say), and easily await them when you want to.
It depends on the context. In some contexts absolutely not. If we share memory, and these tasks start modifying global data or taking locks and then crash, can those tasks be safely restarted, can we reason about the state of the whole node any longer?
> CSP, while is nice on paper
Not sure if Erlang's model is CSP or the Actor model (it started as neither, actually), but it's not just nice on paper. We have nodes with millions of concurrent processes running comfortably, I know they can crash or I can restart various subsets of them safely. That's no small thing and it's not just paper-theoretical.
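For a sense of what "restart subsets of them safely" looks like, here's a minimal Elixir sketch (Worker and the registered names are invented for the example): a supervisor restarting a crashed GenServer without touching its siblings.

defmodule Worker do
  use GenServer

  def start_link(name), do: GenServer.start_link(__MODULE__, name, name: name)

  @impl true
  def init(name), do: {:ok, name}

  @impl true
  def handle_cast(:crash, name) do
    # Simulate a bug: this process dies...
    raise "boom in #{name}"
  end
end

# ...and the supervisor restarts only the crashed child, per :one_for_one.
children = [
  Supervisor.child_spec({Worker, :worker_a}, id: :a),
  Supervisor.child_spec({Worker, :worker_b}, id: :b)
]
{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

GenServer.cast(:worker_a, :crash)
Process.sleep(100)
IO.inspect(Process.whereis(:worker_a), label: "worker_a restarted as")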
RE: locks and concurrently modified data-structures
It comes down to the kind of lock being used. Scenarios which require strict data sharing handle them as they see fit - for recoverable states the lock can simply be released in a `finally` block. The synchronous/blocking `lock` statement does this automatically. All concurrent containers offered by the standard library either do not throw, or their exceptions indicate a wrong operation/failed precondition/etc. and can be recovered from (most exceptions in C# are, in general).
This does not preclude the use of channel/mailbox and other actor patterns (after all, .NET has Channel<T> and ConcurrentQueue<T>, or if you would like to go from 0 to 100, Akka and Orleans; and the language offers all the tools to write your own fast implementation should you want that).
Overall, I can see the value of switching to Erlang if you are using a platform/language with much worse concurrency primitives, but coming from F# and C#, personally, Erlang and Elixir appear to be a sidegrade, as .NET applications tend to scale really well with cores even when implemented sloppily.
What value does isolated heap offer for memory-safe languages?
Task exceptions can simply be handled via try-catch at the desired level. Millions of concurrently handled tasks is not that high a number for .NET's threadpool. It's one thing among many that is a "nothingburger" in the .NET ecosystem yet somehow ends up being sold as a major advantage in other languages (you can see it with other features too - Nest.js as a "major improvement" for back-end, while it just looks like something we had 10 years ago, "structured concurrency" which is simple task interleaving, etc.).
It's a different, lower-level model, but it comes with the upside that you are not locked into one particular (even if good) way of doing concurrency, as you are in Erlang.
Briefly, the tradeoff that Erlang and its independent-process-heaps model make is that garbage collection (and execution in general) occurs per process. In practical terms, this means you have lots of little garbage collections and far fewer "large" (think "full OS process heap") collections.
This provides value in a few ways:
- conceptually: it is very simple. i.e., the garbage collection of one process is not logically tied to the garbage collection of another.
- practically: it lends itself well to low-latency operations, where the garbage collection of one process can happen concurrently with the normal operation of another process.
Please note that I am not claiming this model is superior to any other. That is of course situational. I am just trying to be informative.
No global GC. Each Erlang process does its own GC, and the GC only happens when the process runs out of space (i.e. the heap and stack meet).
You can, for example, configure a process to have enough initial memory so as never to run into GC. This is especially useful if you have a process that does a specific task before terminating: once it terminates, the entire process memory is reclaimed.
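In Elixir that's just a spawn option; the heap size below (in words) is an illustrative guess rather than a tuned value:

# Give the process a large enough initial heap that this short-lived job
# never triggers a GC; all of its memory is reclaimed at once when it exits.
pid =
  Process.spawn(
    fn -> Enum.reduce(1..100_000, 0, fn x, acc -> x + acc end) end,
    [:link, {:min_heap_size, 500_000}]
  )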
There is no free lunch in software - the tradeoff is binary serialization and/or data copying instead of simple function calls. The same goes for GC - for an efficient GC, it has to come with quite involved state, which adds cost at spawn time. At that point, you might as well use a bump allocator or an arena. Either way, Gen0 in .NET (it's a generational GC) acts like one, STW pauses can be sub-millisecond and are pretty much a non-issue, given that you don't even need to allocate that often compared to many other high-level languages.
> where we as an industry made much better choices in the 90s about how we write distributed software.
Erlang is a nice piece of software.
However, let us not dismiss the massive progress the world of distributed software has made since 1990s _not_ involving Erlang too.
Look at the scale at which we _reliably_ access video, audio, email, messaging, and e-commerce/trading on distributed systems around the world! Google, Facebook, Amazon, Netflix, Microsoft, NYSE/NASDAQ, ... -- imagine the millions or even billions of computer systems working and cooperating in various private and public "clouds".
Apart from a few prominent systems here and there (e.g. Erlang at WhatsApp), most of these systems _DON'T_ use Erlang. For various reasons Erlang has _not_ been chosen by thousands of software architects when building their next distributed system. Even though Erlang lets us build a distributed system with lots of properties out of the box easily, let's talk about some failings of Erlang:
- Erlang is not a statically typed language, unlike Java, Rust, C/C++, etc. This means an Erlang compiler cannot create code that will run as fast as code from the aforementioned languages; the compiler simply does not have that much information available at compile time.
- Not being statically typed also makes it a bit more difficult to refactor the codebase. Would you be able to refactor a 1 million line Rust code base more easily or a 100,000 line Erlang code base (even if you have used Dialyzer)? My money is on Rust.
- Not being statically typed also means that you cannot verify or mathematically prove properties about your system using various techniques as easily
TL;DR -- A small team can build a highly capable system on Erlang quite easily in 2024. That small team would probably take longer if they used Rust/C++/Java, because those languages are lower level and take more time for development. But if you can throw some $$ at the project, in the long run a system built on Rust/C++/the JVM can run more efficiently (and be maintained more easily) on fewer machines using specialized code written in Rust/C++/Java etc. In other words, it's not every day you need to build a distributed system -- when you do, it makes sense to specialize and build it on a technology stack that may be a bit lower-level and statically typed.
This comment is already long enough.
I like Erlang, it has some nice properties but when building distributed systems other technology stacks can also offer some other great advantages too.
> - Not being statically typed also makes it a bit more difficult to refactor
> the codebase. Would you be able to refactor a 1 million line Rust code base
> more easily or a 100,000 line Erlang code base (even if you have used
> Dialyzer)? My money is on Rust.
I have found that refactoring Erlang is NOT like refactoring code in other languages; non-trivial refactoring in Rust is a LOT more complicated. However, I do understand the fuzzy feelings you get when type-safe code compiles correctly.
Most Erlang refactoring that I see needing to be done is simply reapplying a different pattern to the gen_server or distributing load differently. I believe that if refactoring is a "complex problem", the development team had not designed with OTP behaviors in mind. My view may be coloured by my limited experience refactoring my own Erlang as a solo developer, with my mind stuck in OTP land - please correct me if you've experienced it differently, but I feel you're perhaps painting the picture a little unfairly there.
If programmers need type safety on the BEAM, I believe the Gleam language supplies the security blanket that other languages provide. From my limited experience it does NOT provide any additional "speed" (I expect there are not many compiler optimisations that survive down at the BEAM level), however it does give you that level of confidence that you're not going to be passing garbage data through your functions.
I haven't taken anything you have said as a personal attack (or even as an attack on Erlang) - thank you for the discussion points.
Other than not moving away from Javascript, don't React Server Components mostly do what you describe here? A single stack, with both server side logic and client side interactivity?
Blazor Server and WASM both have issues that make them non-starters for me, namely latency quickly gets out of hand for the former and the latter is far too heavy. The (unfortunate?) reality is that Javascript is unavoidable if you want good bundle sizes and client-side interactivity.
I find HTMX is a better alternative if I'm trying to write as little JS as possible; it smooths over the latency issue better and just lets me write normal web app code that feels like a SPA to use.
What work are people doing on an MBA or MBP 13" that needs more than 16GB of physical RAM? Last I saw the swap is extremely performant so you'd really need to be pushing that limit.
You don't want to wear out the SSD with constant swapping though. Especially because it can't be replaced so it'll have to last the entire life of the computer.
SSD lifetime concerns with the M1 are largely FUD (and based on initial reports with bad stats). The SSDs should last for the lifetime of the device even with heavy write usage.
But what's the lifetime of the device? That tends to differ a LOT.
For example I still use my Mac Mini 2011 today. After I stopped using it as desktop it became an ESXi server (because ESXi automatically permits Mac guests if you run on Mac hardware). I've already burned through two good SSDs (Crucial MX).
Non-replaceable SSDs in macbooks predate the M1. The interesting claim (which now appears either to have been entirely erroneous, or based on a bug that has now been fixed) was that the M1 Macs hit the SSDs harder than their predecessors.
If you are just talking about general SSD lifetime issues, then we already have over 5 years of user experience to go on (https://uk.pcmag.com/mac-laptops/86074/15-inch-macbook-pro-u...). And I believe that SSD reliability and lifetime has increased since 2016.
Yesterday Firefox was reporting 66GB RAM usage on my MBP with 16GB RAM. I also have a Linux VM that must be permanently running.
Firefox became unresponsive until it decided to shrink its caches, which it did over about 5 minutes down to 10GB usage.
It clearly didn't need 66GB, most of it swap space. Usually it's not that much, and is more likely to reach 20-30GB.
Firefox on Mac seems to grow and grow and grow in usage, then when it becomes unresponsive it shrinks usage until it's responsive again. Sometimes when scrolling there's a long pause several times a minute. It shrinks down to 10GB or less consistently. (My guess is a bug in the internal heuristics governing how much it decides to cache, being confused by macOS memory compression and/or the fast swap.)
Perhaps I'd have a better browsing experience with 64GB RAM than 16GB. And less SSD wear. Perhaps less battery consumption, depending on whether SSD I/O for swap consumes more or less than powering extra DRAM.
Overpriced? Are we looking at the same machines? I was under the impression that the M1 Air obliterated every x86 laptop at the same form factor and price point.