When I joined Google I started thinking to myself "geez, we care so much. why does HN hate us?"
It clicked for me when someone pointed out that even if you trust group of employees X with a mountain of data, nothing prevents group Y from eventually selling it. And after what I saw over my last couple of years, I'm utterly convinced some McKinsey-ite will be telling 2050's CEO that's a great idea and in fact the moral option. Maximize shareholder value => stonk go up => Americans have safe retirements.
Why am I talking about Google?
People talk past each other on this stuff. The problem isn't "have they ever done anything bad?"; it's that the incentive structure is set up such that something that crosses the line will eventually happen. They have an incentive to keep the customer happy.
And as the article, and the comments posted 1.5 hours before yours, point out, there's no room for argument on that: this already happened. A court threw out evidence because it was illegally obtained.
These devices don't send audio over the network unless a shooting-like noise is detected. I get it, if the state can listen to all of our communications that are immediately coincident with shootings, what privacy will any of us have?
But in all seriousness, you should actually read the section for that court case. Here's all of the text for that case from the link:
"""
Commonwealth v. Denison, No. BRCR2012-0029 (Mass. Super. Ct. Oct. 7, 2015)
"ShotSpotter is a listening and recording system that runs 24/7, attuned to the sound of gunfire. When the system hears gunfire, or what it recognizes as gunfire, it locates it, reports it, preserves the recording, and send the recording to the customer within seconds.” The defendant, charged with first degree murder, moved to suppress a recording made by ShotSpotter of an verbal exchange among numerous individuals before and after the fatal gunshots. The court rejected that the argument that the defendant had a reasonable expectation of privacy under the Massachusetts Declaration of Rights because the exchange was “audible by anyone passing and was in fact heard by a crowd of neighbors and other witnesses.” However, the court found that the exchange was an “oral communication” and that the recording was a prohibited “interception” under the Massachusetts Wiretap Act because the defendant had no knowledge that the exchange was being recorded. The court also found that the interception was “willful” because the police had “purposefully directed the placement of the sensors.” The court granted the motion to suppress: “the continuous secret audio surveillance of selective urban neighborhoods ** is the type of surreptitious eavesdropping as an investigative tool that the Legislature sought to prohibit."
""" [0]
The verbal exchange was recorded because it was incident to the shooting that triggered the recording. In addition to recording the shooter(s) shooting the victim(s), the system also recorded the shooter(s) and victim(s) speaking before the shooting. This happened in public, so there was no expectation of privacy, and I can't imagine this is the kind of recording that the Massachusetts legislature "sought to prohibit".
> These devices don't send audio over the network unless a shooting-like noise is detected.
How does this jibe with the fact that the police can apparently request a ShotSpotter operator review of audio recordings for up to 30 days if a known shooting is missed by the system? How does this jibe with the fact that the system has apparently at least twice recorded voice conversations that were used (and thrown out in one case due to violating wiretap laws) in court cases?
Edit: even if we trust ShotSpotter to do the right thing, how do we know their systems are secure enough to keep those recordings away from less-honorable actors?
The website says the audio buffer on the sensor holds the last 30 hours before overwriting. So police have less than 30 hours to flag a missed shooting before the audio around the shooting time is overwritten and ShotSpotter staff can no longer check the sensors for it.
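For anyone curious what that 30-hour window implies mechanically, here's a minimal sketch of a fixed-window rolling buffer. This is purely illustrative of the general technique (the class, chunk sizes, and retrieval logic are my own assumptions, not ShotSpotter's actual implementation): audio older than the window is evicted as new audio arrives, so a request that comes in after the window has rolled past simply finds nothing.

```python
from collections import deque

BUFFER_HOURS = 30  # retention window stated on the vendor's site

class RollingAudioBuffer:
    """Hypothetical fixed-window rolling buffer: chunks older than
    the window are discarded as newer chunks are appended."""

    def __init__(self, window_hours: int = BUFFER_HOURS):
        self.window = window_hours * 3600          # window in seconds
        self.chunks: deque = deque()               # (timestamp, chunk), oldest first

    def append(self, timestamp: float, chunk: str) -> None:
        self.chunks.append((timestamp, chunk))
        # Evict everything that has fallen out of the rolling window.
        while self.chunks and self.chunks[0][0] < timestamp - self.window:
            self.chunks.popleft()

    def retrieve(self, timestamp: float) -> list:
        # Return chunks within a minute of the requested time; [] if overwritten.
        return [c for t, c in self.chunks if abs(t - timestamp) < 60]

# Simulate 40 hours of recording, one chunk per hour.
buf = RollingAudioBuffer()
for hour in range(40):
    buf.append(hour * 3600, f"audio@{hour}h")

print(buf.retrieve(5 * 3600))    # hour 5 is >30h old: returns []
print(buf.retrieve(35 * 3600))   # hour 35 is still inside the window
```

The point of the sketch: a request for hour 5 comes back empty because the window has rolled past it, which is why a missed-shooting review has to be flagged promptly.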
Regarding the court cases, I literally included the entire linked info for one of those court cases in the post you're responding to. In both cases, the verbal exchange occurred at the same time as the shooting. In the other case, the victim verbally identified the person about to murder them just before they were murdered. The argument that ShotSpotter is a wiretap falls apart under the slightest bit of scrutiny; there's no way the legislature meant to protect the right of people to be free from audio recording while shooting someone in public.
It seems plainly to be a wiretap under the laws of my state (MA): anything “capable of transmitting, receiving, amplifying or recording a wire or oral communication.”
With some additional carve outs, none of which appear to apply to ShotSpotter.
I’m not arguing that properly scoped and protected ShotSpotter couldn’t be allowed by MA legislators, but it sure doesn’t fall apart under scrutiny by my reading of the law.
The court seems to agree in the text you copied in your upthread quote:
> However, the court found that the exchange was an “oral communication” and that the recording was a prohibited “interception” under the Massachusetts Wiretap Act because the defendant had no knowledge that the exchange was being recorded.
Which I think is the proper interpretation of the written law.
I’d also like to enjoy being free from audio recording while near loud noises. And the argument that it is a wiretap has apparently not fallen apart in at least one court of law.
And regarding your last question: the first of the two court cases where verbal communications were captured (because they occurred at the same time as the shooting) was in 2007, so ShotSpotter has been around for a while.
Have there been any breaches where less-honorable actors have managed to hack into sensors and exfiltrate data? Have there been any breaches of the actual recordings of shootings?