Those are all the people who have not yet decoupled "reasoning" from "consciousness" in their own thinking. It's admittedly hyperbolic to say "everyone". I love hyperbole on HN. :)
Planning, by definition, takes multiple reasoning steps. A single LLM inference is fundamentally one reasoning step, but it's a reasoning step nonetheless.
It's like saying a house is made of bricks: once bricks have been invented, you can build houses of any shape out of them. The LLM "reasoning" that existed even as early as GPT-3.5 was the "brick" out of which highly intelligent agents can be built, with no further "breakthroughs" required.
The basic Transformer architecture was enough; it already contained the magical ingredient of reasoning. The rest is just a matter of prompt engineering.
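To make the "brick" idea concrete, here's a minimal sketch in Python, where llm() is a hypothetical stand-in for whatever single-inference completion API you're using. The point is only that "planning" is nothing more than chaining single inferences together with prompts:

    # Each call to llm() is one inference, i.e. one reasoning step.
    # llm() is a placeholder for any text-completion API, not a real library call.
    def llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for a single LLM inference")

    def plan_and_execute(goal: str, max_steps: int = 5) -> list[str]:
        # One inference to decompose the goal into numbered steps.
        plan = llm(f"Break this goal into numbered steps:\n{goal}")
        results: list[str] = []
        # One inference per step, each prompt fed the results so far.
        for step in plan.splitlines()[:max_steps]:
            results.append(llm(f"Goal: {goal}\nDone so far: {results}\nNow do: {step}"))
        return results

Nothing in that loop requires anything beyond the base model; the "agent" is just prompt plumbing around repeated single-step inference.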
Yeah, these kinds of discussions always devolve into debates about the proper definitions of words. Especially on HN, where everyone has their "Pedantic Knob" dialed up to 11.
You weren't being pedantic yourself. My point is that this discussion is ultimately about the definitions of words, and that, all by itself, makes the discussion meaningless.
I think a "granule" of "reasoning" happens at each inference, and you think there is no reasoning in a single inference. To discuss it further would just be a game of arguing whose definition of each word is correct.
You're not being pedantic at all. It's a crucial distinction that people try to wave away in favor of hype. Especially since we are so vulnerable to anthropomorphizing.