
Do you happen to have a copy of that article? I’d love to read it.

No, but I could give you several articles that link to that article while talking about how great it is.

> Meanwhile, we have LLMs accepting bullshit tasks and completing them.

Would you mind elaborating on that? I’m not quite sure what you mean.


Bullshit tasks are the modern TPS reports: tasks that create no real value for anyone, but are necessary because management likes to mistake them for progress.

Like a supercharged version of rubber duck debugging.


I don't have much to add myself, but there was a bit of discussion around this back in August that you might be interested in: https://news.ycombinator.com/item?id=44831811


Wow! Didn't know Immich's Cursed page already had a dedicated post on HN.

I love reading about open-source drama, especially when it's about technology I don't use directly; it's like watching a soap opera.


This user makes money off of how many downloads their packages receive.

https://github.com/A11yance/axobject-query/pull/354#issuecom...


What a dumpster fire.

Is he really being paid per download, or is he just being sponsored? It's not clear that either would imply some form of malicious intent.


There are a number of "robotics and embodied AI" ETFs out there that should show up with a quick search. I don't have an opinion as to their quality, so you'd have to do your own research.


I feel called out :)


I see u


What evidence is there that AGI will come “soon”?


Or "ever"?

(I'm not denying the possibility. I'm proclaiming a lack of evidence.)


I’ve been daydreaming lately about what the fundamental limits of “intelligence” could be, something like the concept of computability but for AI, or even biological brains.

Though I will say, surely the existence of the human brain (which by definition is general intelligence), suggests that creating AGI is fundamentally possible?


Sure, it's possible; as you say, we have an existence proof. We don't know how to do it any other way, though. None of the people who claim that they or somebody else is on the trail have produced any evidence that they are correct.


What evidence did we have that LLMs would be such transformative tech before they were suddenly introduced, with such surprising behaviors? I'm not sure we always need to be looking for evidence of potentially surprising and disruptive tech.


They can "feel it", like people "felt" we'd have commercial space flight "soon" after we put people on the moon. It's all delusion and wishful thinking.


It's worse than that, really, because there was at least a fairly obvious _path_ there, even if the economics were, to say the least, shaky. For AGI... not so much.


Yeah, if energy had continued to get exponentially more plentiful like it used to, then a casual trip to the moon or flying cars today wouldn't have been out of the question.

People imagined a future where everyone had their own personal fusion reactor to power their devices with infinite energy. That world didn't happen, but the exponential rate at which energy technologies used to develop made it seem feasible.


It's not an energy question. For the moon it's a "why the fuck would we even do this" question, and for flying cars it's a thing about bald monkeys being, on average, quite stupid and not responsible.


Yeah, more or less from the start, commercial space was always more of a “could, but won’t” thing than anything else.


That’s a nice prospect. What worries me is the point at which I’m no longer a required part of the problem solving process.


You don't program at all now? It's all generated?

I only ask because I find myself on the tools most of the time still. The difference in how different people experience this tech is astounding sometimes.


I debug a lot. LLM tools still can't figure out how to use a debugger. That's about it. I'm sure they will soon.


Claude Code with the Tidewave MCP, as well as the Playwright MCP, gets pretty close to the debug cycle.


Scary.


A great idea if you're looking to intentionally sabotage AI.

