
I love the idea of the colored links for navigation in your summary. Thanks for the inspiration!


He is the CEO of a company; he can personally ask someone to do it for him.


But the implementation was too messy for someone with expertise to do it at the CEO's request


O3's story is not amazing, but it sure is orders of magnitude more interesting than your example:

https://chatgpt.com/share/68282eb2-e53c-8000-853f-9a03eee128...

I don't think it's possible to generate an acceptable story without reasoning.

That is not to say that I disagree with you. I would prefer to read human authors even if the AI was great at writing stories, because there's something alluring about getting a glimpse into a world that somebody else created in their head.


> I don't think it's possible to generate an acceptable story without reasoning.

If I look back at any article, book, movie, or conversation that I liked, it always had this essential ingredient: it had to make sense, AND it had to introduce some novel fact (or idea) that led to implications that were entertaining somehow (intriguing, revelatory, amusing, etc).

Would this be possible without the author having some idea of how reasoning works? Or of what facts are novel or could lead to surprise of some kind? No is the obvious answer to both. Until I see clear evidence that LLMs have mastered both logic and the concept of what knowledge is and is not intriguing to a human, I foresee little creative output from any LLM that will 'move the needle' creatively.

Until then, LLM-generated fare will remain the uninspired factory produce of infinite monkeys and typewriters...


Cool idea, I'll update it


Very often the code looks like:

fn actualLogic(param1, param2, param3) ...

switch (someCondition) {
    case a: actualLogic(foo, bar);
    case b: actualLogic(bar, foo);
    ...
}

It might be expressed with if statements, pattern matching, function calls etc., but most of the code is boilerplate.
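
As a concrete sketch of the same shape (TypeScript, names invented purely for illustration):

  // One function holds the actual logic.
  function actualLogic(first: string, second: string): string {
    return `${first} -> ${second}`;
  }

  // The bulk of the code is boilerplate dispatch around it: each
  // branch only permutes the arguments.
  function dispatch(condition: "a" | "b", foo: string, bar: string): string {
    switch (condition) {
      case "a": return actualLogic(foo, bar);
      case "b": return actualLogic(bar, foo);
    }
  }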

From my experience in a C# codebase, most of the code that completes correctly is this boilerplate.

O3/Sonnet/Gemini are able to write the actual logic in chat/agent mode sometimes, but then the problem is that they rewrite too much, which would count as AI-generated code.

These two factors probably play a huge role in the 30% if it's counted in any accurate way.


I worked like that for a year in uni because of RSI, and it's very easy to get voice strain if you use your voice for coding like that. Many short commands are very tiring for the voice.

It's also hard to dictate code without a lot of these commands, because code is very dense in information.

I hope something else will be the solution. Maybe LLMs will become smart enough to guess the code from a very short description and then a set of corrections.


Would be nice to be able to do something like write a function signature and then just say “fill out this function,” with it having the implicit needed context, as though it had been pairing with you all along and is just taking the wheel for a second. Or when you’ve finished writing a function, “test this function with some happy path inputs.” I feel like I’d appreciate that kind of use, which could integrate decently into the flow state I get into when programming. The current suite of tools for me often feels too clunky, with the need to explicitly manage context and queries: it takes me out of my flow state and feels slower than just doing it myself.
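
For instance (hypothetical names, just to illustrate the flow): you write only the signature, say the command, and the assistant fills in the rest in place:

  // You write just the signature and say "fill out this function":
  function median(values: number[]): number {
    // ...and the assistant completes the body from context:
    const sorted = [...values].sort((a, b) => a - b);
    const mid = Math.floor(sorted.length / 2);
    return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
  }

  // ...then "test this function with some happy path inputs":
  console.assert(median([3, 1, 2]) === 2);
  console.assert(median([4, 1, 2, 3]) === 2.5);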


So it takes 10 min until you've gone to the drastic solution? With this time-frame it would be risky to go to the bathroom, never mind going to a movie. Also, even the backup sounds like a primary in this scenario.


Sure, but the assumption here is that primary and backup (edit: probably, i.e. they're not coordinating this) aren't going to the bathroom at the same time. It's also based on the idea that alerts are extremely rare to begin with. If you're expecting at least one page every rotation, that's way, way too often. Step one is to get alerts under control; step two is a sane on-call rotation.


The year is 2025. You have an internet browser. Look up the concepts you don't understand.

How are you so certain about a phenomenon not existing when you have no idea about it? Do you apply the same filters to everything else you don't understand?

Also, be careful with wishing for "progress" to be rolled back. First it's other people's freedom to live their lives how they want to, and then it's yours.


>Look up the concepts you don't understand.

Looking it up wouldn't help, because these aren't objective phenomena. They're just nonsense. It'd be like trying to understand what it feels like to be a schizophrenic by reading the Wikipedia page. My inability to understand is just a way of framing something for you... to show that, in many ways, we don't even live on the same planet. And I don't want to live on yours. I don't want to become the schizophrenic, so to speak.

>How are you so certain about a phenomenon not existing when

When you're outside of it, it's plainly clear. To the person who isn't a schizophrenic they can be 100% certain that there are no voices to hear. It's not the CIA blocking the mind-waves beamed down from the government satellites... it just doesn't exist. Not that my assertions can convince the crazy guy.

>First it's other peoples' freedom to live their lives how they want to and then it's yours.

If you bothered to reason through that, you'd see that it's false. It needn't be true for everyone, even in the imaginary scenarios you have nightmares about. For some, at least, sometimes they really do just stop when they're done doing what they wanted to do, and cross no other lines. I'm pretty confident which side of the line I stand on. Find more sophisticated propaganda, I suppose, is the only advice I have for you.


Location: EU
Remote: Only
Technologies: C#, Roslyn, SQL Server, Azure, TypeScript, VSCode extensions, compilers, low-level stuff, dotnet internals
Experience: 2 years compiler engineering and 5 years solving the performance problems everyone hates
Email: 3o7zwdz04@mozmail.com


I think you're semi-right here, but the term precedence is ambiguous.

Left recursion causes rules to be ambiguous. In `<S> := <S> | a`, it's always ambiguous whether the input has ended. Thus the data structure that would be traversed can be a circular graph. In `<S> := a | <S>, <S> := ε`, the rules are unambiguous and the data structure that is traversed has the shape of a tree.

You get precedence rules by defining the shape of this tree, thus which rules will be visited before which.

However, it doesn't have to be a strict precedence. Some rules can have an ambiguous order as long as the execution still follows the shape of a tree.

For example, in my simple rule above, it doesn't matter if you visit the ε rule or the other rule first in your implementation (although it's less efficient to visit ε first).
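
A minimal backtracking recognizer makes this concrete (TypeScript; assuming, for illustration, the transformed rule is read as `<S> := a <S> | ε`, i.e. the language a*):

  // Returns every position where <S> can stop, trying the 'a'
  // branch before the ε branch.
  function parseS(text: string, pos: number): number[] {
    const ends: number[] = [];
    if (text[pos] === "a") {
      ends.push(...parseS(text, pos + 1)); // branch: <S> := a <S>
    }
    ends.push(pos); // branch: <S> := ε
    return ends;
  }

  // Swapping the branch order recognizes the same language; the
  // full-input match just shows up last, so more backtracking is done:
  console.log(parseS("aaa", 0)); // [3, 2, 1, 0] — "aaa" is accepted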

So, I think precedence is a side-effect of doing this transformation.

