philippta's comments | Hacker News

> They use Claude to skip the typing, not the thinking. They're 10x faster than two years ago.

I'm not sure a 10x increase in typing speed makes you a 10x developer.


I've raised this exact point to many team leads throughout my career.

Yet they unanimously said they are interested in, or need to know, the progress.

I can't say if that's what they have to report to their managers, but I assume it's something you won't be able to fix from the bottom up.


When I connect to my server over SSH, I don't have to rotate anything, yet my connection is always secure.

I manually approve the authenticity of the server on the first connection.

From then on, the only time I'd be prompted again would be if either the server's key changed or there's a risk of a MITM attack.

Why can't we have this for the web?
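
For the curious, the SSH-style check is roughly this, sketched in Go with golang.org/x/crypto/ssh (the in-memory map and all names here are mine, standing in for ~/.ssh/known_hosts):

```go
package main

import (
	"fmt"
	"net"

	"golang.org/x/crypto/ssh"
)

// knownHosts stands in for ~/.ssh/known_hosts: hostname -> fingerprint
// recorded on first connection.
var knownHosts = map[string]string{}

// tofuCallback implements trust-on-first-use as an ssh.HostKeyCallback.
func tofuCallback(hostname string, remote net.Addr, key ssh.PublicKey) error {
	fp := ssh.FingerprintSHA256(key)
	stored, seen := knownHosts[hostname]
	if !seen {
		// First use: remember the key. A real client persists this
		// and asks the user to confirm, as OpenSSH does.
		knownHosts[hostname] = fp
		return nil
	}
	if stored != fp {
		// Key changed: possible MITM, refuse to connect.
		return fmt.Errorf("host key for %s changed: got %s, expected %s",
			hostname, fp, stored)
	}
	return nil
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "me",
		HostKeyCallback: tofuCallback,
	}
	_ = cfg // pass to ssh.Dial("tcp", "myserver:22", cfg) in real use
}
```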


> Why can't we have this for the web?

How do you propose to scale trust on first use? SSH basically says that trusting a key is "out of scope" and makes it your problem. As in: you can put it on a piece of paper, tell it over the phone, whatever, but SSH isn't going to solve it for you. How is some user landing on an HTTPS site going to determine that the key used is actually trustworthy?

There have actually been attempts at solving this with something like DANE [1]. For a brief period Chrome had DANE support, but it was removed for being too complicated and living in security-critical components. Besides, since DNSSEC has some cracks in it (your local resolver probably doesn't check it), you can have a discussion about how secure DANE really is.

[1] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


So DNS-adjacent protocols are supposed to be handling this TOFU directory, but industry behemoths are too busy pushing other self-serving standards to execute together on this?

Am I… close?


What "TOFU directory" ? The whole point of TOFU is that you're just going to accept that anybody's first claim of who they are is correct. This is going to often work pretty well, after all it's how a lot of our social relationships work. I was introduced to a woman as Nodis, so, I called her Nodis, everyone else I know calls her Nodis, her boyfriend calls her Nodis. But it turns out her employer and the government do not call her that because their paperwork has a legal name which she does not like - like many humans probably her legal name was chosen by her parents not by her.

Now, what if she'd insisted her name is Princess Charlotte? I mean, sure, OK, she's Princess Charlotte. But wait, my country has a Princess Charlotte, who is a little girl with some chance of becoming Queen one day (if her elder brother died or refused to be King). So if I just trusted that Nodis is Princess Charlotte because she said so, is there a problem?


Would the issue not be that you would need to trust that first connection?



Cookie banners aren’t annoying enough for you?


For the handful of regularly visited websites, I wouldn't mind.


SSH has its own certificate authority system to validate users and servers. This is because trust-on-first-use is not scalable unless you just ignore the risk (at which point you may as well not do encryption at all), so host keys are signed.

There is quite literally nothing that prevents you from using a self-signed server certificate. Your browser will even ask you to trust and store the certificate, much like your SSH client does on the screen that shows the fingerprint.

Good luck getting everyone else to trust your fingerprint, though.
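
For contrast with plain TOFU, here is a sketch of the CA side using golang.org/x/crypto/ssh's CertChecker. caPub is the CA public key you'd distribute out of band; a real setup would also check principals and validity windows:

```go
package main

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// caConfig returns a client config that accepts any host presenting a
// certificate signed by caPub, instead of trusting keys on first use.
func caConfig(user string, caPub ssh.PublicKey) *ssh.ClientConfig {
	checker := &ssh.CertChecker{
		IsHostAuthority: func(auth ssh.PublicKey, address string) bool {
			return bytes.Equal(auth.Marshal(), caPub.Marshal())
		},
	}
	return &ssh.ClientConfig{
		User:            user,
		HostKeyCallback: checker.CheckHostKey,
	}
}

func main() {
	// In real use: parse the CA's public key, then ssh.Dial with caConfig(...).
}
```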


Perhaps our industry should adopt a different approach, one that fills the gap between those:

- You host open-source software on your own hardware.

- You pay a company for setup and maintenance by the hour.


> I saw from the SSD was around 800 MB/s (which doesn’t really make sense as that should give execution speeds at 40+ seconds, but computers are magical so who knows what is going on).

If anyone knows what’s actually going on, please do tell.


Presumably after the first run, much or all of the program is paged into the OS page cache.


Yes, or it was still in memory from writing.

The numbers match quite nicely: 40 GB program size minus 32 GB of RAM leaves 8 GB, and 8 GB divided by 800 MB/s makes 10 seconds.
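
The same back-of-envelope in code (numbers from the comment above, decimal units assumed):

```go
package main

import "fmt"

func main() {
	const (
		programGB = 40.0  // executable size
		cachedGB  = 32.0  // roughly what RAM / the page cache can hold
		diskMBps  = 800.0 // observed SSD read speed
	)
	uncachedMB := (programGB - cachedGB) * 1000
	fmt.Printf("~%.0f s to read the uncached remainder\n", uncachedMB/diskMBps) // ~10 s
}
```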


I'm not entirely sure but could it be predictive branching?


No: it needs to read the entire executable in order to be correct; it can't skip anything. The time for the I/O is therefore a lower bound, and predictive branching can't help with that.


> LLM-generated code should not be reviewed by others if the responsible engineer has not themselves reviewed it.

To extend that: If the LLM is the author and the responsible engineer is the genuine first reviewer, do you need a second engineer at all?

Typically in my experience one review is enough.


Yeesss this is what I’ve been (semi-sarcastically) thinking about. Historically it’s one author and one reviewer before code gets shipped.

Why introduce a second reviewer and reduce the rumoured velocity gained by LLMs? After all, "it doesn't matter what wrote the code," right?

I say let her rip. Or as the kids say, code goes brrr.


I disagree. Code review has a social purpose as well as a technical one. It reinforces a shared understanding of the code and requires one person to assure another that the code is ready for review. It develops consensus about design decisions and agreement about what the code is for. With only one person, this is impossible. “Code goes brrr” is a neutral property. It can just as easily take you to the wrong destination as the right one.


yes, obviously?

anyone who is doing serious enough engineering that they have the rule of "one human writes, one human reviews" wants two humans to actually put careful thought into a thing, and only one of them is deeply incentivised to just commit the code.

your suggestion means less review and worse incentives.


anyone who is doing serious enough engineering is not using LLMs.


More eyes are better, but more importantly, code review is also about knowledge dissemination. If only the original author and the LLM saw the code, you have a bus factor of 1. If another person reviews it, the bus factor is closer to 2.


The fundamental problem here is shared memory / shared ownership.

If you assign exclusive ownership of all accounting data to a single thread and use CSP to communicate transfers, all of these made-up problems go away.
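
A minimal sketch of that idea with Go channels (the Transfer type and owner goroutine are illustrative names, not from the thread):

```go
package main

import "fmt"

// Transfer is a request message; Done carries the reply back.
type Transfer struct {
	From, To string
	Amount   int
	Done     chan error
}

// owner is the single goroutine that owns the balances map outright.
// Nobody else ever touches it, so no locks are needed.
func owner(reqs <-chan Transfer) {
	balances := map[string]int{"alice": 100, "bob": 50}
	for req := range reqs {
		if balances[req.From] < req.Amount {
			req.Done <- fmt.Errorf("insufficient funds in %s", req.From)
			continue
		}
		balances[req.From] -= req.Amount
		balances[req.To] += req.Amount
		req.Done <- nil
	}
}

func main() {
	reqs := make(chan Transfer)
	go owner(reqs)

	done := make(chan error)
	reqs <- Transfer{From: "alice", To: "bob", Amount: 30, Done: done}
	fmt.Println(<-done) // <nil>
}
```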


This is equivalent to using a single global lock (and STM is semantically equivalent, just theoretically more scalable). It obviously works, but it greatly limits scalability by serializing all operations.

Also in practice the CSP node that is providing access control is effectively implementing shared memory (in an extremely inefficient way).

The fundamental problem is not shared memory, it is concurrent access control.

There's no silver bullet.


Yes, multithreaded problems go away on a single thread.

Is there any way for an external thread to ask (via CSP) for the state, think about the state, then write back the new state (via CSP)?

If so, you're back to race conditions - with the additional constraints of a master thread and CSP.


That would be shared ownership again.


So then I would sell STM to you from the "other end".

Everyone else has multiple threads, and should replace their locks with STM for ease and safety.

You've got safe single-threading and CSP; you should try STM to gain multithreading and get/set.


CSP suffers from backpressure issues (which is not to say it's bad, but it's not a panacea either).


Your reasoning seems counterintuitive, as back in 2012 Facebook rewrote their HTML5-based app in native iOS code, optimized for performance, and knowingly took the feature-parity hit.

https://engineering.fb.com/2014/10/31/ios/making-news-feed-n...


Reminds me of this 2013 story where they moved to native Java for Android and hit limits (e.g. too many methods), and instead of refactoring or just not bloating their app, they hacked some internals of the Dalvik VM while it was running during app install: https://engineering.fb.com/2013/03/04/android/under-the-hood...


Mobile is where the users are. Desktop users are vanishing before our eyes as a market segment.


For some applications, certainly. Instant messaging of course plays to mobile's strengths in terms of what has to be handled: short messages, photos, quick video calls.

But for editing large documents or visualizing a large corpus with side-by-side comparison, unless we plug our phone into a large screen, a keyboard and some kind of pointing device, there is no real sane equivalent on mobile.


Yeah, but the majority of people who would've been daily desktop or at least laptop users some 10 to 15 years ago now make do with a phone. Most people do not need to visualize a large corpus or edit large documents. Similarly, there's a great deal of phone users whose first interaction with computers was via a smartphone.


A 2012 iPhone and a 2025 Windows PC shouldn't be assumed to have the same tradeoff set just because "web vs native" is found in each description.


It's a tradeoff; different companies are allowed to choose differently, or even to change their minds after some time.


Reminds me of Casey Muratori's talk on Conway's Law: "I always know what I am thinking…"

https://youtu.be/5IUj1EZwpJY?si=b7rG7_vemkiOL8Bp


> The @breakpoint built-in

Inserting the literal one-byte instruction (on x86) - INT 3 - is the least a compiler should be able to do.
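
Go, for comparison, exposes the same thing as a runtime function rather than a compiler built-in:

```go
package main

import "runtime"

func main() {
	// Emits the architecture's breakpoint trap (the one-byte 0xCC /
	// INT 3 on x86). Under a debugger you stop here; without one,
	// the process dies with SIGTRAP.
	runtime.Breakpoint()
}
```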

