This kind of system usually comes about because an organization combines a number of different pieces of software into a cohesive whole, rather than designing a single all-in-one solution with perfect foresight.
While the customer can open a support ticket in the ticketing system, that system is probably not directly connected to the service that lets support agents view ride data. This may be because one or both of these systems weren't (or at least weren't initially) built in-house, or simply because this kind of integration is harder to implement and wasn't a design priority back in the day.
So a support agent sees a customer issue, and then goes looking for relevant data in another system. If the issue turns out to be a bug, and that bug seems to depend on customer data, engineering might need to look at that data in order to reproduce and resolve it. It's great if the support person or engineer can look at only the very narrow slice of data they actually need to address the problem at hand, but often that's not possible (for example, the system could be set up to tightly control ride history access, but once you've been granted that privilege, you can access the _entire_ ride history for that customer).
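To make the granularity point concrete, here's a minimal sketch of the difference between an all-or-nothing grant and a scoped one. Everything here (the `Ride` record, the grant strings, the window-based scoping) is hypothetical and purely illustrative, not how Lyft or any real ride-share system actually models this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical ride record and grant model, purely for illustration.
@dataclass
class Ride:
    customer_id: str
    started_at: datetime
    pickup: str
    dropoff: str

def rides_visible_coarse(agent_grants: set[str], customer_id: str,
                         rides: list[Ride]) -> list[Ride]:
    # All-or-nothing: a single grant exposes the customer's entire history.
    if f"ride_history:{customer_id}" in agent_grants:
        return [r for r in rides if r.customer_id == customer_id]
    return []

def rides_visible_scoped(agent_grants: set[str], customer_id: str,
                         rides: list[Ride], window_days: int = 7) -> list[Ride]:
    # Scoped: the same grant only exposes rides from a recent window,
    # which is often enough to investigate the specific ticket at hand.
    if f"ride_history:{customer_id}" not in agent_grants:
        return []
    cutoff = datetime.now() - timedelta(days=window_days)
    return [r for r in rides
            if r.customer_id == customer_id and r.started_at >= cutoff]
```

The scoped version is more work to build and operate (someone has to decide what the right window or ticket-linked scope is, and handle the cases where it turns out to be too narrow), which is part of why it tends not to exist in systems that grew the way described above.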
The company ends up creating policies for data safety, and teams to enforce those policies, but there's always a tension between being able to quickly triage and address a problem (be it a customer reporting that they feel unsafe and need help right away, a hard-to-find bug that only seems to happen for people whose last names contain certain accented characters, or a customer complaining that parts of their ride history show mixed-up addresses) and having to go through established channels to justify your need for access. And it's hard to perfect that process.
I'm not claiming any of this is ideal, and all in all it sounds like some of Lyft's policies were too lax or too loosely enforced. But I definitely understand _how_/_why_ it could happen.