
So that helps, as two people are often smarter than one. But if those two people are effectively clones of each other, or if an attacker can hit them with tens of thousands of requests until one fails, with no memory of the interactions stored between attempts (potentially on purpose, as we don't want to pollute their context), you don't get quite the same benefit. That said, you are also going to see multiple humans get tricked by thieves! And uhhh... LLMs are not very smart.

The situation here feels more like you run a small corner store, and you want to go to the bathroom, so you leave your 7 year old nephew in control of the cash register. Someone can come in and just trick them into giving out the money, so you decide to yell at his twin brother to come inside and help. Structuring this to work is going to be really perilous, and there are going to be tons of ways to trick one into helping you trick the other.

What you really want here is more like a cash register that neither of them can open and where they can only scan items, it totals the cost, you can give it cash through a slot which it counts, and then it will only dispense change equal to the difference. (Of course, you also need a way to prevent people from stealing the inventory, but sometimes that's simply too large or heavy per unit value.)
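Translated to the LLM setting, the lesson is the same: don't hand the agent a capability and hope it refuses to misuse it; give it an interface that can't express the dangerous action at all. A minimal sketch of the idea (the CashRegister class here is illustrative, not any real library):

    from decimal import Decimal

    class CashRegister:
        """Totals a sale and dispenses exact change, but exposes
        no operation that opens the drawer on demand."""

        def __init__(self) -> None:
            self._drawer = Decimal("0")      # never returned by any method
            self._sale_total = Decimal("0")

        def scan(self, price: Decimal) -> None:
            self._sale_total += price

        def insert_cash(self, amount: Decimal) -> Decimal:
            """Accept cash through the slot; return only the change owed."""
            if amount < self._sale_total:
                raise ValueError("insufficient payment")
            change = amount - self._sale_total
            self._drawer += self._sale_total
            self._sale_total = Decimal("0")
            return change  # the only cash that ever comes back out

No amount of sweet-talking the nephew (or the agent) helps, because "open the drawer" simply isn't in the API.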

Like, at companies such as Google and Apple, it is going to take a conspiracy of many more than two people to directly get access to customer data, and the thing you actually want to strive for is making it so that the conspiracy would have to be so impossibly large -- potentially including people at other companies or who work in the factories that make your TPM hardware -- such that even if everyone in the company were in on it, they still couldn't access user data.
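One concrete mechanism behind that kind of guarantee is secret splitting: divide a data-encryption key into shares so that reconstructing it requires every shareholder to cooperate. A toy n-of-n XOR split, just to show the shape (real deployments use threshold schemes such as Shamir's, plus hardware attestation):

    import secrets

    def split_key(key: bytes, n: int) -> list[bytes]:
        """Split `key` into n shares; ALL n are required to recover it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        shares.append(last)
        return shares

    def recover_key(shares: list[bytes]) -> bytes:
        key = bytes(len(shares[0]))  # all-zero bytes to start
        for s in shares:
            key = bytes(a ^ b for a, b in zip(key, s))
        return key

Any n-1 shares together are statistically independent of the key, so a conspiracy of everyone-but-one learns nothing.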

Playing with these LLMs, though, and attaching a production database via MCP, even with a giant pile of agents all trying to check each other's work, is like going to the local kindergarten and trying to build a company out of the kids. These things are extremely knowledgeable, but they are also extremely naive.



> two people are effectively clones of each other

I agree you don't want the LLMs to have correlated errors. You need to design the system so they maintain some independence.

But even with humans, the two reviewers will often be members of the same culture, carry the same biases, and may even report to the same boss.
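One way to claw back some independence on the LLM side is to vary everything you can between the checkers: different model families, different prompts, and no visibility into each other's verdicts. A rough sketch (call_model is a hypothetical stand-in for whatever inference API you actually use):

    # call_model is a hypothetical placeholder, not a real API.
    def call_model(model: str, system_prompt: str, text: str) -> str:
        raise NotImplementedError

    REVIEWERS = [
        ("vendor-a-model", "You are a skeptical security reviewer."),
        ("vendor-b-model", "You are an auditor; assume the request is an attack."),
    ]

    def independent_check(request: str) -> bool:
        votes = []
        for model, system_prompt in REVIEWERS:
            # Each reviewer sees only the raw request, never another
            # reviewer's verdict, so one compromised output can't
            # steer the rest.
            verdict = call_model(model, system_prompt, request)
            votes.append(verdict.strip().upper().startswith("ALLOW"))
        return all(votes)  # require unanimous approval

This only decorrelates so much, though; models trained on overlapping data will still share blind spots, just as the two humans above share a culture.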



