
When I did an internship at a national lab, a lot of the hard rules about security relied on the fact that you had gone through their hiring process and would follow the rules. There were different access levels, for sure, but only like 2 or 3. You might have "had access" but you shouldn't be anywhere you didn't have a good reason for being.

Lyft should be checking on this, running audits and whatnot, but they also should be setting good policy and culture to not abuse access.

Basically, I think it's reasonable to both allow many people access and expect them to not abuse it.



> Basically, I think it's reasonable to both allow many people access and expect them to not abuse it.

Indeed. The FCRA accounts for bored clerks looking up random people's credit histories.

Just because you have access to something doesn't mean you're allowed to touch it without a valid business reason.

I'm no fan of regulation but the wild west of PII is long past needing to be tamed. Companies need to be held responsible for their intelligence and how it gets used.


"Just because you have access to something doesn't mean you're allowed to touch it without a valid business reason."

Then you should not have access to it. People will touch things if they can; that's why access control rules exist.


That might be how we deal with children who can't handle responsibility, but the absence of technical controls for every nuance of life is why ethics and code of law exists for adults.

Access controls are not a substitute for maturity.


> Access controls are not a substitute for maturity.

But maturity is not a substitute for access controls either.

In any organization of some size, no matter how much you hire for "maturity", eventually people will slip past who have all kinds of reasons they'll be able to justify to themselves for deciding it's too tempting to look at things they shouldn't.


Agreed, too bad humanity's level of maturity as a whole is very far from ideal. If everyone did what they should, we would probably be living in a utopia. But that's not the case.

I don't mean that everything should be super-locked down to the point where it's inaccessible, just tweak it enough to not be misused.

The idea of an audit trail is good, since you can go back in history and hold any misbehaving parties accountable. Or design a system where the client authorizes a rep to look into her records, just like banks do when you ask for your balance.


> children who can't handle responsibility

Isn't the real concern bad actors? e.g., LOVEINT


Well, does Lyft's hiring process optimize for maturity? Do they often hire people in their 40s, for example?


There's a bit of pragmatism involved in the level of control.

If you add too much friction to the process of accessing information, it can actually impede handling user support. For example, having access to someone's ride history when trying to resolve a dispute seems relatively normal.

Of course, in Lyft's case it seems pretty clear that there could be more programmatic locks. And auditable logs are also a very good idea in general.

But programmatic locks are tricky. How do you transform an e-mail from a user confirming permission to view their history into an unlock code?
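One way to sketch this (all names hypothetical, and just one possible design): when the user confirms by e-mail, the backend mints a short-lived, signed grant tied to that user and that support agent, and the support tool verifies the grant before unlocking the record.

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me-regularly"  # server-side signing key (hypothetical)

def issue_grant(user_id: str, agent_id: str, ttl: int = 900) -> str:
    """Mint a short-lived unlock code after the user confirms via e-mail/link."""
    expires = str(int(time.time()) + ttl)
    msg = f"{user_id}:{agent_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{agent_id}:{expires}:{sig}"

def check_grant(token: str, user_id: str, agent_id: str) -> bool:
    """The support tool verifies the grant before showing the record."""
    try:
        uid, aid, expires, sig = token.split(":")
    except ValueError:
        return False  # malformed token
    msg = f"{uid}:{aid}:{expires}".encode()
    want = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, want)        # signature intact
            and (uid, aid) == (user_id, agent_id)  # right user and agent
            and time.time() < int(expires))        # not expired
```

The point of the signature is that support staff can't forge a grant, and the expiry bounds how long the unlock lasts.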


If only we had modern phone infrastructure that could actually transfer useful info to the appropriate person... Instead we have terrible phone support that requires me to repeat the same info 3 times.

That info should be unlocked the millisecond I am connected with a rep. It's not a moonshot.


Yeah, totally. It's doable.

My feeling is that stuff is doable, but hard-ish. For example, in this case now you're writing something to interface with the phones. How do you know the phone number is for a certain client?

Though I can definitely see someone writing a thing where your ticketing/support system grants partial data access: you end up either making the support system pull information from the DB, or having your DB access controls managed through the support system.

The latter can potentially introduce security issues. The former is easier, but you can easily run into "oh, this information isn't reachable through the ticketing system".


That last bit is actually relatively straightforward - you send them a response with a link that takes them to a permission prompt.


Too much reliance on programmatic access controls causes people to think “if it’s allowed by the controls, it’s allowed by common sense” which is rarely the case.


On the flipside, systems that are too cumbersome to use because of access controls lead people to do things like maintain shadow systems in Excel spreadsheets just to get their work done. Of course with no security at all.


Couldn't this count as a violation of the terms of service on Lyft's end and open them up to a class action? Then there'd be no need for new regulation, at least for this current issue.

Not that I'm against new regs; I'm for them.


This was how it worked when I worked in admissions during college. You had access to every applicant's information, grades, essays, etc., as well as counselor feedback. But you were told that if you looked up yourself, someone you knew, or any celebrity, you could be fired.

I don't know if there were automated checks for that kind of thing, but everyone knew there was a line you didn't cross.


At Lyft people did think there were automated checks, did know there was a line that shouldn't be crossed, and yet there was rampant abuse. Don't you suspect that many of the students in your position abused their access?

I think companies should be responsible for implementing effective security, whether that means preventing improper access or at least detecting it and punishing it after the fact, not just establishing a "culture." The most dangerous people, the ones who commit violent crimes, aren't limited by culture anyway, because they despise norms and have very different perceptions of risk compared to most people.

In your case, your fellow student workers might simply have not felt safe sharing their crimes with you. "Naughty" behavior can be taboo yet widespread.


If you have rules but don't enforce any consequences for breaking them, the rules pretty quickly get ignored.

If Lyft had fired a few rulebreakers early on, everyone else would know they were serious.


Huh, that's an interesting point. Now that you mention it you're probably right that it was more common than I was aware of.


> I think companies should be responsible for implementing effective security

That's what the EU data protection law requires! And there are high fines, and new abilities coming into force in May!


Whereas folks I worked with at a support vendor for AT&T HomeZone, when that was a thing, regularly looked up celebrities’ private phone numbers in the unified customer systems with no repercussion, despite the same onboarding spiel. HomeZone was unique in that support reps by necessity had access to AT&T, Dish, and Yahoo! (email) CRM, which covers a very broad range of people and activities; I didn’t do the looking, but I overheard a sampling of notables and legislators who subscribed to the “500s” (Dish lingo for porn at the time, don’t know if they’ve moved the channels since).

Yahoo! was the only one that limited access reasonably. It always struck me as odd that auditing didn’t pick up that query behavior. For your main job, PeopleSoft would take care of everything and limit you to who you needed to see, but there were a plethora of other systems and places to look.

This type of thing certainly isn’t limited to Lyft.


My "elite" college had crisis counselors, student volunteers. Guess what happened if you dated one of them. They searched the counseling files to look for dirt on you.


In some U.S. hospitals, the way it works is that nurses can view a lot of patients' charts (including VIPs'). Then someone is supposed to audit who viewed those VIP patients (celebrities or whatnot), but every hospital is different and it's a mess.


Here in Sweden you can view any patient's info, and there are supposed to be audits to check that a doctor only ever opened relevant patient journals.

Always a few cases now and then of people getting caught checking friends, family and foes.

Pretty sure that auditing is completely separate from the caregiver.


In Stockholm, there is one journal system everyone is mandated to use, written in APL (it was called Take Care, but the name could have changed), and completely without access control.

I believe each institution gets printed access logs sent to them, which are never looked at.


Here in Canada, if you have access to the central medical records you can look up anyone, but if (a) you are not a doctor assigned to the case, or (b) you are looking up yourself or a family member, you immediately get a call and get fired on the spot.

(Source: Wife works at the hospital and has seen some people get fired shortly after unauthorized access.)


My hospital system has similar policies in place. I like to tell people, "If I look at your record, I'll probably get fired. If I look at my own record, I will absolutely get fired."

Of course, both of them are prohibited (without a valid business reason), but the latter is easier to detect in an automated fashion.
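The asymmetry makes sense: an employee's own chart ID can be known in advance, so self-lookups are a trivial join against the audit log, while "looked up a friend" needs outside context. A minimal sketch of the automated check (names hypothetical):

```python
def flag_self_access(access_log, employee_chart_ids):
    """Return audit-log entries where an employee opened their own chart.

    access_log: iterable of (employee_id, chart_id) access events.
    employee_chart_ids: dict mapping employee_id -> that employee's own chart_id.
    """
    return [(emp, chart) for emp, chart in access_log
            if employee_chart_ids.get(emp) == chart]
```

Detecting the "friend or celebrity" case instead needs extra signals, such as VIP flags on records or name/address overlap, which is why those lookups are harder to catch automatically.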


Everyone should be able to look at their own record...


Just curious - why is looking up yourself an offense?


Not infrequently there are notes in patients' charts that only care providers can see. This practice is becoming frowned upon in some circles, however, in lieu of more transparency. I imagine it being really useful for certain scenarios (e.g. mental illness, or documentation of a sensitive domestic relationship, where you wouldn't want a potentially abusive family member to see comments about their behavior in a relative's chart).


You don't really own your medical record, and doctors add notes to it that you may not like. For example, your record may say that you don't follow up with medication, abuse drugs, have psychiatric/personality problems, etc., which patients could be sensitive about.


That's not true. In the US at least as an adult you have a legal right to see everything in your medical record.


And trying to exercise that right by using tools you have access to as an employee of your own health care provider, rather than going through the proper channels any other patient uses (which give the employer documentation of the request, and auditability as to the purpose of any access involved in serving it), is likely to violate internal procedures designed both to protect PHI and to assure that all access is within job function.


Almost, but not quite. There are exceptions, such as psychotherapy notes. (Cite: https://www.hhs.gov/hipaa/for-professionals/privacy/laws-reg... section “Access”.)


Abuse of employer resources for personal purposes, probably.


You would think that if they have the ability to audit at that level, and with such prompt responses, they would have the resources to lock down the systems properly and to implement a consent policy that works. Allowing everybody access is a bit like binding the cat to the bacon and then getting upset because the cat cannot be trusted with bacon.

Better to keep the cat and the bacon separate; the temptation to peek is large, and if there is one thing I know about people, it is that curiosity is a pretty common affliction.

And that is assuming those accesses are on purpose; people can make honest mistakes as well, and those will also look like unauthorized access.


Except putting barriers can cause even worse problems. If I can’t look up someone’s allergies because the system doesn’t think I should, fuck them right?


Exactly, barriers to looking up a patient's information can be fatal. I need to give someone medication now to stabilize them. What medications are they on now? Can't look it up? Better give it to them and hope there's no adverse reaction.

Better to avoid that situation and implement auditing while making sure people know the rules are enforced.


"override authentication" -> You have chosen to override the authentication protocol, you are logged in as John Doe, all your actions will be subject to internal affairs review, continue yes / no?


That tends to happen when the user logs in, so they'll probably see it multiple times per day. I think that's pretty standard. The system doesn't have to be the wild west. But if someone can click through an authentication override then it's not really doing anything.


At least that is what they tell everyone.


This system broke down at UCLA a few years ago. They ended up paying out quite a bit of money when it was revealed that hospital staff had viewed the medical records of celebrity patients.


My wife is a music therapist, and her previous employer had a contract with a local hospital system. During one visit, she (or a colleague, I don't recall) was working with someone with the same last name, but no relation; this set off an automatic alert in the EMR, leading to a follow-up inquiry. VIP records have similar alerts configured to verify that the accessors have a legitimate business reason.


I’m at a financial services firm, and we have an entire internal risk department to ensure employees aren’t exceeding their authority. Surfing the wrong websites? Badging in and out at abnormal hours? Accessing internal apps in ways you shouldn’t? Access immediately flagged for human intervention and you’re locked out. Our data scientist team improves on the heuristics constantly.

At some point, organizations with data have to learn how to manage IAM [1] properly.

[1] https://en.wikipedia.org/wiki/Identity_management
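The source doesn't describe the firm's actual heuristics, but a toy version of this kind of rule-based flagging is easy to sketch (thresholds and field names are illustrative assumptions): cross-reference access events against badge state and per-employee volume, and hand anything suspicious to a human.

```python
from collections import Counter

def flag_anomalies(events, badged_in, daily_limit=50):
    """Flag record accesses for human review.

    events: list of (employee_id, record_id) access events for one day.
    badged_in: set of employee_ids physically badged into the building.
    daily_limit: per-employee access-volume threshold (illustrative).
    """
    counts = Counter(emp for emp, _ in events)
    flagged = []
    for emp, rec in events:
        # Rule 1: accessing internal apps while not badged in.
        # Rule 2: abnormally high lookup volume for one employee.
        if emp not in badged_in or counts[emp] > daily_limit:
            flagged.append((emp, rec))
    return flagged
```

Real systems would learn thresholds per employee and per role rather than hard-coding them, which is presumably where the data science team's constant tuning comes in.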


The FFIEC considers your heuristic system “Innovative” according to the Cybersecurity risk assessment methodology. Certainly not typical for a financial institution. Pretty cool stuff though!

https://www.ffiec.gov/pdf/cybersecurity/FFIEC_CAT_May_2017.p... (Page 39)


"Accessing internal apps in ways you shouldn’t?"

At a high level, how do they do that? I can only think of a bunch of rules, and that will have to be tweaked endlessly to deal with edge cases.


That sounds like a fantastic place to work :). How come you are not locked out for posting on HN? (I am assuming you are doing this during work hours.)


Probably is on the data science team...


I’m on our cloud security team; our leadership chain gives us wide latitude (Hacker News A-OK) since our entire job is to be subject matter experts. Literally “Know All The Things” was how my job req was described to me.


> Basically, I think it's reasonable to both allow many people access and expect them to not abuse it.

I couldn't disagree more. Eventually, you're going to hire an idiot (and/or budding rapist). When you have PII of this nature, if you're going to allow lots of people access, you need individual access controls, logging, and most importantly, auditing of the aforementioned data. And auditing may not be enough; you probably need individual inspection and approval to eg look up user info not tied to a ticket you're processing.
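That "access tied to a ticket" idea can be sketched in a few lines (a minimal illustration with hypothetical names, not any company's actual implementation): every lookup must reference an open ticket for that user, and every attempt, allowed or denied, lands in an append-only audit log.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only; in production, a tamper-evident store

class AccessDenied(Exception):
    pass

def lookup_user(agent_id, user_id, ticket, get_record):
    """Permit a PII lookup only when tied to an open ticket for that user.

    ticket: dict like {"id": ..., "user_id": ..., "status": ...}, or None.
    get_record: callable fetching the record once access is approved.
    """
    allowed = (ticket is not None
               and ticket.get("user_id") == user_id
               and ticket.get("status") == "open")
    # Log the attempt whether or not it succeeds, for later auditing.
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "user": user_id,
        "ticket": ticket and ticket.get("id"),
        "allowed": allowed,
    })
    if not allowed:
        raise AccessDenied(f"{agent_id} has no open ticket for {user_id}")
    return get_record(user_id)
```

Auditing the denials is as important as auditing the grants: repeated denied lookups against the same record are exactly the pattern a reviewer wants surfaced.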

Particularly after seeing the Uber god view scandal, there's just no excuse not to have this basic stuff in place.

I worked for a much, much smaller startup handling data that was significantly harder to tie to an actual person, and we did the above.


I remember a briefing from when I worked for BT in the UK: someone looked up, for a mate, his ex's new address. She was then murdered.

There was also a case involving hit men who, by bribing someone, found the address of a target's mum and dad, who were also killed.

BT took security very seriously, and you had better hope, if you did something bad, that the cops or even the secret service (MI5) got to you before the internal security team did.


> BT took security very seriously, and you had better hope, if you did something bad, that the cops or even the secret service (MI5) got to you before the internal security team did.

I would rather be "dealt with" by BT than by the cops or secret service. BT can fire you; the cops and MI5 can take away your rights (with due process).


As BT was a former PTT, its IB (or SD) descended from the unit in the GPO that dealt with theft from the post, so it had some odd quasi-legal standing; it's also where the secret squirrels worked.

They have a bad reputation, as in the bad old days some of the confessions involved "falling down stairs", which is what I was hinting at :-)


It should be easy to audit and enforce. Just have to scare people a bit into respecting the system.


Better not to expect anything from anyone, and you won't be disappointed.

What you do is simply restrict data to employees on a need-to-know basis. Not difficult to do.



