
To plan: to think about and decide what you are going to do or how you are going to do something (Cambridge Dictionary)

That implies higher-order reasoning. If the model does not do that, and it doesn't, then "planning" is quite simply the wrong term.


Came here to say this. Their paper reeks of wishful thinking and of labeling things in the terms they would prefer to be true. They even note in one place that their replacement model has 50% accuracy, which is simply a fancy way of saying its result is pure chance and could be interpreted either way. Like flipping a coin.

In reality all that's happening is drawing samples from the joint probability distribution of the tokens in the context window. That's what the model is designed to do, trained to do - and that's exactly what it does. More precisely, that is what the algorithm does, using the model weights, the input ("prompt", tokenized) and the previously generated output, one token at a time. Unless the algorithm is started (by a human, ultimately), nothing happens. Note how entirely different that is from any living being that actually thinks.
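To make that concrete, here's a toy sketch of the generation loop. A bigram lookup table stands in for the model weights; a real LLM conditions on the whole context window, but the loop has the same shape:

    import random

    # Toy stand-in for model weights: P(next token | previous token).
    # A real LLM conditions on the entire context window, not one token.
    BIGRAM = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.4, "ran": 0.6},
        "sat": {"<eos>": 1.0},
        "ran": {"<eos>": 1.0},
    }

    def generate(prompt: str, max_tokens: int = 10) -> list[str]:
        """Sample one token at a time from the conditional distribution.

        The 'model' is inert data; nothing happens until someone
        runs this loop - it is just repeated sampling.
        """
        tokens = prompt.split()
        for _ in range(max_tokens):
            dist = BIGRAM.get(tokens[-1], {"<eos>": 1.0})
            choices, probs = zip(*dist.items())
            next_token = random.choices(choices, weights=probs, k=1)[0]
            if next_token == "<eos>":
                break
            tokens.append(next_token)
        return tokens

    print(generate("the"))  # e.g. ['the', 'cat', 'sat']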

All interpretation above and beyond that is speculative and all intelligence found is entirely human.


Yes, GPUs process a single computational task on a vast array of data in parallel. But they cannot process two independent tasks concurrently (except, perhaps, by reducing the compute power available to each task).
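Here's a toy simulation of the SIMT idea (not real GPU code, just an illustration): a "warp" of lanes shares one instruction stream, so when lanes diverge into two different tasks, both paths run serially with masking and each task gets only a fraction of the throughput:

    # Toy SIMT model: a "warp" of lanes executes ONE instruction stream.
    # Two different "tasks" on one warp means two serial masked passes.

    def run_warp(data, predicate, task_a, task_b):
        mask_a = [predicate(x) for x in data]
        out = list(data)
        steps = 0

        # Pass 1: task_a for lanes where the mask is True (others idle).
        for i, x in enumerate(data):
            if mask_a[i]:
                out[i] = task_a(x)
        steps += 1

        # Pass 2: task_b for the remaining lanes (first group idles).
        for i, x in enumerate(data):
            if not mask_a[i]:
                out[i] = task_b(x)
        steps += 1  # divergence cost: two serial passes instead of one

        return out, steps

    result, steps = run_warp(
        data=[1, 2, 3, 4],
        predicate=lambda x: x % 2 == 0,
        task_a=lambda x: x * 10,   # "task" for even lanes
        task_b=lambda x: x + 100,  # different task for odd lanes
    )
    print(result, "serial passes:", steps)  # [101, 20, 103, 40] serial passes: 2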


MCP is essentially a request-response convention. We already have one like that, and it is very versatile and well-described: REST.
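To make the comparison concrete, here is roughly the same tool call in both styles. The REST endpoint is made up for illustration; the MCP shape follows its JSON-RPC 2.0 framing as I understand it:

    import json

    # REST style: the resource is in the URL, the verb is the HTTP method.
    # (Hypothetical endpoint, for illustration only.)
    rest_request = {
        "method": "POST",
        "url": "https://api.example.com/v1/weather/lookup",
        "body": {"city": "Berlin"},
    }

    # MCP style: a JSON-RPC 2.0 envelope; resource and verb are folded
    # into method/params instead of the URL.
    mcp_request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "weather_lookup",
            "arguments": {"city": "Berlin"},
        },
    }

    print(json.dumps(rest_request, indent=2))
    print(json.dumps(mcp_request, indent=2))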

MCP is just hype. Hot air.


If we are to get to AGI, why do we need to train on all data? That's silly, and all we get is compression and probabilistic retrieval.

Intelligence, by definition, is not compression but the ability to think and act on new data, based on experience.

Truly AGI models will work on this principle, not on the best possible compression of as much data as possible.

We need a new approach.


Actually, compression is an incredibly good way to think about intelligence. If you understand something really well then you can compress it a lot. If you can compress most of human knowledge effectively without much reconstruction error while shrinking it down by 99.5%, then you must have in the process arrived at a coherent and essentially correct world model, which is the basis of effective cognition.


Fwiw there are highly cited, widely respected papers that literally map AGI to compression, as in: the two map to the same thing. Basically, a prediction engine can be used to build a compression tool and an AI equally well.

The tl;dr: given inputs and a system that can accurately predict the next item in the sequence, you can either compress that data using the prediction (arithmetic coding), or take actions based on the prediction, mapping predictions of new inputs to possible outcomes and then taking the path to a goal (AGI). They boil down to one and the same. So it's weird to have someone state they are not the same when it's widely accepted that they absolutely are.
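A minimal sketch of the compression side, assuming a toy frequency-based predictor: an arithmetic coder can encode a symbol the model assigns probability p in about -log2(p) bits, so the compressed size is just the model's cumulative surprise:

    import math
    from collections import defaultdict

    def ideal_bits(sequence, predict):
        """Total bits an arithmetic coder would need, given a predictor.

        A symbol assigned probability p costs ~ -log2(p) bits, so the
        total equals the model's cross-entropy on the sequence.
        """
        return sum(-math.log2(predict(sequence[:i], sequence[i]))
                   for i in range(len(sequence)))

    # Toy predictor: frequency of each symbol so far (Laplace-smoothed).
    def freq_predict(history, symbol, alphabet_size=256):
        counts = defaultdict(int)
        for s in history:
            counts[s] += 1
        return (counts[symbol] + 1) / (len(history) + alphabet_size)

    text = b"abababababababab"
    print(f"{ideal_bits(text, freq_predict):.1f} bits "
          f"vs {8 * len(text)} bits raw")
    # A better predictor -> fewer bits. The same predictor could instead
    # drive an agent: rank candidate actions by predicted outcome.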


“If you can't explain it to a six year old, you don't understand it yourself.” -> "If you can compress knowledge, you understand it."


If you squint hard enough he's got a point (just not the way he thinks)


There are more cups made out of paper in the world than out of any other material [0]. Ergo, paper makes the strongest, longest-lasting cups.

He's a member of ISO C WG14. I'm hoping it's satire.

[0] I made that up. I hope it's true. The AIs agree it's likely true.


bogrod now has a console UI. Using bogrod, reporting on CVEs is a breeze. #devsecops


Some people have been told "you are better than anyone else" their whole lives. So that's their baseline. Guess what.


That's a very weird take. Surely you would expect some skills from a CS graduate?


It's one thing to analyze it. It's an entirely different thing to let a machine of dubious abilities create new DNA.


Isn't DNA in itself a machine of dubious abilities? It's only functional because what functions is what survives; imagine the amount of 'unsurvived' there is because of how shit the code is.


Machines that undergo accelerated evolution I would trust more under rigorous guidelines.

I just need to be able to test the guidelines and the results. From that point, a clinical trial process can be used to make an objective decision.

