Hacker News

Yes, the vast amount of effort, time, and money spent getting the world to secure things, and on checking that those things are actually secured, is now being dismissed because people can't accept that maybe LLMs shouldn't be used for absolutely everything.


Someone posted Google's new MCP for databases in Slack, and after looking at it, I pulled a quote about how you should use these things to modify the schema on a live database.

It seems like they want us to regress not only on security, but also on IaC and *Ops.

I don't use these things beyond writing code. They are mediocre at that, so I'm most definitely not going to hook them up to live systems. I'm perfectly happy to still press tab and enter as needed, after reading what these things actually want to do.
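That "read before you approve" step can be sketched as a simple gate: auto-run only statements that cannot mutate state, and surface everything else for explicit human confirmation. This is a minimal illustrative sketch, not any real MCP API; the names `is_mutating` and `gate` are hypothetical, and the keyword check is deliberately conservative.

```python
# Hypothetical gate for SQL an agent proposes to run.
# Read-only statements pass; anything that could mutate state
# requires an explicit human approval callback (the "tab and enter" step).

MUTATING_KEYWORDS = {
    "insert", "update", "delete", "drop", "alter",
    "create", "truncate", "grant", "revoke",
}

def is_mutating(sql: str) -> bool:
    """Conservatively flag any statement whose leading keyword can change state."""
    parts = sql.lstrip().split(None, 1)
    return bool(parts) and parts[0].lower() in MUTATING_KEYWORDS

def gate(sql: str, approve) -> bool:
    """Return True if the statement may run: read-only statements run freely,
    mutating ones only if the human approver says yes."""
    if not is_mutating(sql):
        return True
    return bool(approve(sql))  # e.g. print the statement and wait for confirmation
```

For example, `gate("SELECT * FROM users", approve)` runs without prompting, while `gate("DROP TABLE users", approve)` is blocked unless the approver returns True.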


> I pulled a quote about how you should use these things to modify the schema on a live database.

Agh.

I'm old enough to remember when one of the common AI arguments was "Easy: we'll just keep it in a box and not connect it to the outside world" and then disbelieving Yudkowsky when he role-played as an AI and convinced people to let him out of the box.

Even though I'm in the group that's more impressed than unimpressed by the progress AI is making, I still wouldn't let AI modify live anything even if it was really in the top 5% of software developers and not just top 5% of existing easy to test metrics — though of course, the top 5% of software developers would know better than to modify live databases.


Security loses against the massive, massive amount of money and marketing that has been spent on forcing 'AI' into absolutely everything.

A conspiracy theory might be that making all the world's data get run through US-controlled GPUs in US data centers might have ulterior motives.




