Hacker News | batshit_beaver's comments

In theory, there's no difference between theory and practice.

In practice...


But carrying bunker busters via drones is a lot more expensive than hand grenades.

Yeah, akin to talking to a rubber ducky

I'd agree in a sense, but also really not: it's a rubber ducky that doesn't give you the chance to come to your own conclusion, and even when it does, it has you questioning it.

I find it's the opposite: LLMs can be made to agree with anything, largely because that agreeability is in their system prompt.

Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inverse correlation).
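As a throwaway sketch of that joke metric (entirely hypothetical, not something any real eval harness does), counting sycophancy phrases per transcript is a one-liner:

```python
import re

def sycophancy_score(transcript: str) -> int:
    """Count occurrences of "you're absolutely right" (case-insensitive,
    apostrophe optional) as a tongue-in-cheek inverse proxy for model quality."""
    return len(re.findall(r"you'?re absolutely right", transcript, re.IGNORECASE))

print(sycophancy_score("You're absolutely right! Hmm. you're absolutely right."))  # 2
```

Higher score, worse session, per the inverse correlation above.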

Ha, touché!

> This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes.

I take issue even with this part.

First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and it's rarely near the top of that range.

Secondly, there's the question of productivity. Even with furniture, measuring by the number of chairs produced per minute is disingenuous: it ignores the time spent on design, the quality of the final product, and even its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.

Trying to apply the same flawed notion of productivity to software engineering is laughably silly. We automate the production of physical goods to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case; LLMs changed nothing.


This seems bizarre. The only reason my family bought a Tesla is the EV tax credit. Without it there are far better options.


What's interesting is that the strength of the US dollar against other currencies has barely budged in the meantime. It seems everyone else is inflating away their problems too, so it all evens out in the end (unless you're poor with no assets, in any country).


Swiss franc is strongest of fiats — but still fiat.

Historians across many generations have contended that debased currencies fail first slowly, then fast.

Watching gold has been eye-opening to all living generations. Just a few months ago my lawyer didn't believe me when I told him "gold just broke four thousand dollars!" [day of this post gold hit ATH of $5400]


That's what the third shoe is for - aircraft carriers.


What's stopping you from writing code by hand even today? I mainly use LLMs for researching and trying possible paths forward, but usually implement the suggested solution myself specifically so that I fully understand the code (unless it's a one-liner or so, then I let the LLM paste it in).


Because I can't justify it. While I do love the craft and could keep doing this, I work with other people, and I can't convince them not to use LLMs for their daily work. So while I'm writing things by hand and using the LLM only to suggest which way to go, they'll be submitting PR after PR of AI-generated code, which takes much more of my time to review.


Unfortunately IDEs are not yet directly connected to our minds, so there's still that silly little step of encoding your ideas in a way that can be compiled into binary. Playing the broken telephone game with an LLM is not always the most efficient way of explaining things to a computer.


Of course not. It’s a tool.


You use LLMs to _discover_ how to approach important problems. You don't necessarily need to use the output verbatim. Same as StackOverflow and Google.

