
I think the idea is supposed to be that you have some agency when you share your data with a friend. You can choose whether you want to entrust your data to this friend or not, and the onus is on you to properly vet them for transparency and trustworthiness. Conversely, if a stranger leaks your data, that isn't on you at all (beyond you allowing the stranger access to your data, but that's stretching the metaphor a bit). I don't find this argument particularly compelling though, because the typical Facebook user definitely didn't know what they were getting into when they signed up years ago. I find it hard to fault people for improperly vetting entities that are beyond their area of expertise, and I don't think this mistake on the user's part makes Facebook's practices any less egregious.


There's a real question about consent here. I know a lot of techies who don't understand ML that well (no better than the general public, even). I saw a comment in another forum: "ML is like QM: if you think you understand it, you don't." I think that's fairly accurate, and Feynman would have loved it. So:

1) How can you give proper consent if it takes a great deal of expertise to understand what you're getting yourself into? (Never mind that the complexity doesn't come across to begin with; how to convey that complexity is a good question too.)

2) Can consent even be given if we can't be fully informed? Or rather, how informed do you need to be to give consent?



