
The problem is that there is really no middle ground. You either get what are essentially very fancy search engines, which is the current slew of models (along with manually coded processing loops in the form of agents). These all fall into the same valley of explicit development and patching, which only solves for known issues.

Or you get something that can actually reason, which means it can solve for unknown issues, which means it can be very powerful. But this is something that we aren't even close to figuring out.

There is a limit to that power though - in general it seems that reality is full of computationally irreducible processes, which means an AI would have to simulate reality faster than reality itself, in parallel. So an all-powerful, all-knowing AGI is likely impossible.

But something that can reason is going to be very useful, because it can figure out things it hasn't been explicitly trained on.



> very fancy search engines

This is a common misunderstanding of LLMs. The major, qualitative difference is that LLMs represent their knowledge in a latent space that is composable and can be interpolated. For a significant class of programming problems this is industry-changing.
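A toy sketch of what "composable, interpolable latent space" means. The vectors below are made up for illustration (a real model's embeddings have thousands of dimensions); the point is that positions between two learned representations are themselves valid representations, which a keyword index has no analogue for:

```python
import numpy as np

# Hypothetical 4-d embeddings standing in for a model's latent
# representations of two concepts. The numbers are invented for
# illustration, not taken from any real model.
concept_a = np.array([0.9, 0.1, 0.3, 0.0])
concept_b = np.array([0.1, 0.8, 0.2, 0.7])

def interpolate(a, b, t):
    """Linear interpolation in latent space: t=0 gives a, t=1 gives b."""
    return (1 - t) * a + t * b

# A point "between" the two concepts. A search engine can only return
# documents it has stored; interpolation yields a new vector the model
# can still decode into output.
blend = interpolate(concept_a, concept_b, 0.5)
print(blend)
```

A search index maps queries to stored items; a latent space lets the model operate on combinations that were never stored, which is the crux of the disagreement in this thread.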

E.g. "solve problem X for which there is copious training data, subject to constraints Y for which there is also copious training data" can actually solve a lot of engineering problems for combinations of X and Y that never previously existed, and that would otherwise take many hours of assembling code from a patchwork of tutorials and Stack Overflow posts.

This leaves the unknown issues that require deeper reasoning to established software engineers, but so much of the technology industry is using well known stacks to implement CRUD and moving bytes from A to B for different business needs. This is what LLMs basically turbocharge.


Right, so search engines, just more efficient.

But given a sufficiently hard task for which the data is not in the training set in explicit form, it's pretty easy to see that LLMs can't reason.


Lmao no, what I've described is a reasonably competent junior engineer.


To be a competent engineer in the 2010s, all you really had to do was understand the fundamentals and be good enough at Google searching to figure out what the problem was, whether from Stack Overflow posts, GitHub code examples, or documentation.

Now you still have to be competent enough to formulate the right questions, but the LLMs do all the other stuff for you, including the copy and paste.

So yes, just a more efficient search engine.


> Right, so search engines, just more efficient.


I don’t know… Travis Kalanick said he’s doing “vibe physics” sessions with MechaHitler approaching the boundaries of quantum physics.

"I'll go down this thread with GPT or Grok and I'll start to get to the edge of what's known in quantum physics and then I'm doing the equivalent of vibe coding, except it's vibe physics"


How would he even know? I mean, he's not a published academic in any field, let alone quantum physics. I feel the same when I read one of Carlo Rovelli's pop-sci books, but I have fewer followers.


He doesn’t. I think it’s the same mental phenomenon that Gell-Mann Amnesia works off of.

That interview is practically radioactive levels of cringe for several reasons. This is an excellent takedown of it: https://youtu.be/TMoz3gSXBcY?feature=shared


This video is excellent, and also likely opaque to most valley tech-supremacy types.


Dashed with a sauce of "surrounded by yes-men and uncritical amplifiers hoping to make a quick buck."


>In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say.

It feels like this is a lesson we've started to let slip away.


This says more about Kalanick than it does about LLMs.


Quantum physics attracts crazy people, so the models have a lot of examples of fake physics written by crazy people to work from.


If I were a scammer looking for marks, I'd look for people who take Kalanick seriously on this.


I wouldn't trust a CEO to know their ass from their face.


Finally, an explanation for my last meeting! 8-((



