What are you worried about? That someone will kidnap you, force you into an MRI machine, force you to train it for hours on your neural firing patterns, and get the password to your bank account this way?

I'm trying to figure out which part of this threat model AI makes a meaningful difference in. If they already have you captive, the xkcd-certified $5 wrench is cheaper.

"End of private thought" doesn't seem to be on this tech tree, unless you posit being able to scan people secretly or against their will.



I'm not worried about any of that at all. None of what I've said has some unstated "therefore, this is bad" clause to it. I'm just pondering the progression of technology here.

If someone comes up with a technology that allows people's minds to be read without their cooperation, then I'd start to worry -- but I see nothing in this that indicates that's where things are going.

Also, the idea of building my own MRI appeals to me, so my mind went on a little tangent about how to make that happen.