>What's the need?

How about plain old abuse? People using services to break the law, particularly crimes with victims? Safety risks?
- An Uber passenger sees their driver has a gun in the cup holder; they report it to Uber.
- A Square merchant is using Square to launder serious money and Square catches it.
- A Dropbox user is uploading child pornography that indicates active child abuse.
In these situations, you think the company should consult the user before it takes a look at PII? Or ask a senior manager? The former is laughable, and the latter doesn't scale. Senior-manager clearance might work at a small or even midsize company, but at a large tech company, abuse happens thousands of times a day. Review must be operationalized.
I would love to see a law that could strike the right balance, but I don't see how it's possible. Accommodating both small and large companies would be very challenging. If you mandate the kind of structure that big companies need, you could cripple smaller companies' operational budgets. If you mandate rules suited to small companies, you fetter large companies at scale. And then how would you enforce it? Another regulatory agency with audit authority?
This kind of regulation is better served by the market, imo, and we've seen that with Uber. In regions where Lyft is a viable option, it has seen a significant increase in business in the wake of Uber's many scandals. The key is sunlight, which is usually cast by (1) journalists, (2) whistleblowers, and (3) the EFF and other watchdogs.
Pretty simple: some departments have access, but everything gets logged and audited. If you cannot connect a request to a ticket, you'll get questioned. If you abuse your access, you'll be fired immediately. Other industries handle it that way (e.g. banks). I know enough people in banking to know that there's no chance they would ever risk looking up my accounts.
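To make the ticket-gated, audited model concrete, here's a minimal sketch in Python. The names (audited_lookup, CUSTOMERS, the log path) are illustrative assumptions, not any bank's or tech company's actual internal tooling:

```python
import datetime
import json

# Sketch of "everything gets logged and tied to a ticket".
# AUDIT_LOG and CUSTOMERS are stand-ins for a real audit pipeline and data store.
AUDIT_LOG = "pii_access_audit.jsonl"
CUSTOMERS = {"cust-42": {"name": "Jane Doe", "email": "jane@example.com"}}

class AccessDenied(Exception):
    pass

def audited_lookup(employee_id, customer_id, ticket_id=None):
    """Return customer data only if the request is tied to a ticket;
    append an audit record either way so reviewers can question any access."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "employee": employee_id,
        "customer": customer_id,
        "ticket": ticket_id,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

    if ticket_id is None:
        # Request can't be connected to a ticket -> flag it rather than serve it.
        raise AccessDenied(f"{employee_id} looked up {customer_id} with no ticket")

    return CUSTOMERS.get(customer_id)

# Usage: audited_lookup("emp-7", "cust-42", ticket_id="SUPPORT-1234")
```

The point isn't the code itself but the invariant: there is no code path that reads customer data without leaving an audit record behind.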
Of course, there are exceptional circumstances where access would be granted by a manager without customer approval, or handled routinely by highly specialised abuse teams.
What I'm talking about is general day-to-day access to customer data by the analytics, customer service, engineering, and similar teams. You're trying to have a different discussion.