How do you propose to scale trust on first use? SSH basically says trusting a key is "out of scope" for them and makes it your problem. As in: you can put it on a piece of paper, read it over the phone, whatever, but SSH isn't going to solve it for you. How is some user landing on an HTTPS site going to determine that the key being used is actually trustworthy?
There have actually been attempts at solving this with something like DANE [1]. For a brief period Chrome had DANE support, but it was removed for being too complicated and sitting in security-critical components. Besides, since DNSSEC has some cracks in it (your local resolver probably doesn't check it), you can have a discussion about how secure DANE really is.
What "TOFU directory"? The whole point of TOFU is that you're just going to accept that anybody's first claim of who they are is correct. This is often going to work pretty well; after all, it's how a lot of our social relationships work. I was introduced to a woman as Nodis, so I called her Nodis, everyone else I know calls her Nodis, her boyfriend calls her Nodis. But it turns out her employer and the government do not call her that, because their paperwork has a legal name she does not like; like many humans, her legal name was presumably chosen by her parents, not by her.
Now, what if she'd insisted her name is Princess Charlotte? I mean, sure, OK, she's Princess Charlotte? But wait, my country has a Princess Charlotte, who is a little girl with some chance of becoming Queen one day (if her elder brother died or refused to be King). So if I just trusted that Nodis is Princess Charlotte because she said so, is there a problem?
SSH has its own certificate-authority system to validate users and servers. Trust-on-first-use is not scalable unless you just ignore the risk (at which point you may as well not do encryption at all), so host keys are signed instead.
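For a sense of what that buys you, here's a rough Go sketch (using golang.org/x/crypto/ssh; the CA key path, hostname, and user are made up) of a client that accepts any host certificate signed by your CA instead of trusting whatever key shows up first:

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // hostCAChecker builds a HostKeyCallback that accepts any host certificate
    // signed by the given CA, instead of trusting whatever key shows up first.
    func hostCAChecker(caAuthorizedKey []byte) (ssh.HostKeyCallback, error) {
        caPub, _, _, _, err := ssh.ParseAuthorizedKey(caAuthorizedKey)
        if err != nil {
            return nil, err
        }
        checker := &ssh.CertChecker{
            // Only certificates signed by this CA key are considered valid.
            IsHostAuthority: func(auth ssh.PublicKey, address string) bool {
                return bytes.Equal(auth.Marshal(), caPub.Marshal())
            },
        }
        return checker.CheckHostKey, nil
    }

    func main() {
        // Hypothetical CA public key file, distributed out of band once.
        caKey, err := os.ReadFile("/etc/ssh/host_ca.pub")
        if err != nil {
            log.Fatal(err)
        }
        callback, err := hostCAChecker(caKey)
        if err != nil {
            log.Fatal(err)
        }
        config := &ssh.ClientConfig{
            User:            "deploy",
            Auth:            []ssh.AuthMethod{ /* e.g. ssh.PublicKeys(...) */ },
            HostKeyCallback: callback, // no TOFU prompt, no known_hosts sprawl
        }
        if _, err := ssh.Dial("tcp", "server.example.com:22", config); err != nil {
            log.Fatal(err)
        }
    }

Distribute one CA public key out of band and every host it signs is trusted, which is exactly the part TOFU leaves as "your problem".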
There is quite literally nothing that prevents you from using a self-signed server certificate. Your browser will even ask you to trust and store the certificate, much like your SSH client does on the screen that shows the fingerprint.
Good luck getting everyone else to trust your fingerprint, though.
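For what it's worth, minting such a certificate is only a few lines of code. A rough Go sketch (the hostname and validity period are arbitrary):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    // Generates a self-signed server certificate: the issuer and the subject
    // are the same entity, and nothing stops you from doing this. Getting
    // anyone else to trust it is the hard part.
    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "example.com"}, // hypothetical host
            DNSNames:     []string{"example.com"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Template and parent are the same certificate: self-signed.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }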
> I saw from the SSD was around 800 MB/s (which doesn’t really make sense as that should give execution speeds at 40+ seconds, but computers are magical so who knows what is going on).
If anyone knows what’s actually going on, please do tell.
No, it needs to read the entire executable in order to be correct; it can't skip anything. The time for the IO is therefore a lower bound, and predictive branching can't help with that.
I disagree. Code review has a social purpose as well as a technical one. It reinforces a shared understanding of the code and requires one person to assure another that the code is ready for review. It develops consensus about design decisions and agreement about what the code is for. With only one person, this is impossible. “Code goes brrr” is a neutral property. It can just as easily take you to the wrong destination as the right one.
Anyone doing serious enough engineering to have the rule "one human writes, one human reviews" wants two humans to actually put careful thought into a thing, and only one of them is deeply incentivised to just commit the code.
Your suggestion means less review and worse incentives.
More eyes are better, but more importantly, code review is also about knowledge dissemination. If only the original author and the LLM saw the code, you have a bus factor of 1. If another person reviews it, the bus factor is closer to 2.
The fundamental problem here is shared memory / shared ownership.
If you assign exclusive ownership of all accounting data to a single thread and use CSP to communicate transfers, all of these made-up problems go away.
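Presumably something along these lines, sketched in Go with invented names: a single goroutine owns the balances map, and everyone else submits transfers over a channel.

    package main

    import "fmt"

    // A transfer request sent to the single owner of the accounting data.
    type transfer struct {
        from, to string
        amount   int
        reply    chan error
    }

    // owner is the only goroutine that ever touches the balances map,
    // so there is no shared mutable state and no locking.
    func owner(requests <-chan transfer) {
        balances := map[string]int{"alice": 100, "bob": 50} // hypothetical accounts
        for req := range requests {
            if balances[req.from] < req.amount {
                req.reply <- fmt.Errorf("insufficient funds in %s", req.from)
                continue
            }
            balances[req.from] -= req.amount
            balances[req.to] += req.amount
            req.reply <- nil
        }
    }

    func main() {
        requests := make(chan transfer)
        go owner(requests)

        // Any number of clients can submit transfers concurrently; the owner
        // serializes them, so invariants (no negative balances, money is
        // conserved) hold without explicit locks.
        reply := make(chan error)
        requests <- transfer{from: "alice", to: "bob", amount: 30, reply: reply}
        fmt.Println("transfer result:", <-reply)
    }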
This is the equivalent of using a single global lock (and STM is semantically equivalent, just theoretically more scalable). It obviously works, but it greatly limits scalability by serializing all operations.
Also, in practice the CSP node providing access control is effectively implementing shared memory (in an extremely inefficient way).
The fundamental problem is not shared memory; it is concurrent access control.
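To make the comparison concrete, here's a hedged Go sketch of the finer-grained access control a single owning thread gives up: per-account locks taken in a fixed order, so transfers over disjoint accounts can run in parallel instead of being serialized (account names are invented):

    package main

    import (
        "fmt"
        "sync"
    )

    // A per-account lock lets transfers touching disjoint accounts run in
    // parallel, which a single owning thread (or a single global lock)
    // would serialize.
    type account struct {
        mu      sync.Mutex
        balance int
    }

    func transfer(from, to *account, fromName, toName string, amount int) error {
        // Acquire locks in a fixed (name) order so two concurrent transfers
        // between the same pair of accounts can't deadlock.
        first, second := from, to
        if toName < fromName {
            first, second = to, from
        }
        first.mu.Lock()
        defer first.mu.Unlock()
        second.mu.Lock()
        defer second.mu.Unlock()

        if from.balance < amount {
            return fmt.Errorf("insufficient funds")
        }
        from.balance -= amount
        to.balance += amount
        return nil
    }

    func main() {
        alice := &account{balance: 100}
        bob := &account{balance: 50}

        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                _ = transfer(alice, bob, "alice", "bob", 5)
            }()
        }
        wg.Wait()
        fmt.Println(alice.balance, bob.balance) // 50 100
    }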
Your reasoning seems counterintuitive, since back in 2012 Facebook rewrote their HTML5-based app in native iOS code, optimized it for performance, and knowingly took the feature-parity hit.
Reminds me of this 2013 story where they moved to native Java for Android and hit limits (e.g. too many methods). Instead of refactoring, or just not bloating their app, they hacked some internals of the Dalvik VM while it was running during app install: https://engineering.fb.com/2013/03/04/android/under-the-hood...
For some applications, certainly. Instant messaging of course plays to mobile's strengths in terms of what has to be dealt with: short messages, photos, quick video calls.
But for editing large documents or visualizing a large corpus with side-by-side comparisons, unless we plug the phone into a large screen, a keyboard, and some kind of pointing device, there is no real sane equivalent to work with on mobile.
Yeah, but the majority of people who would've been daily desktop (or at least laptop) users some 10 to 15 years ago now make do with a phone. Most people do not need to visualize a large corpus or edit large documents. Similarly, there's a great deal of phone users whose first interaction with computers was via a smartphone.
I'm not sure a 10x increase in typing speed makes you a 10x developer.