Have you considered something like a Whoop or an Oura Ring, which monitors health metrics but doesn't rely on a watch? That's what I've settled on so that I have the best of both worlds.
Wow. I had looked at the Oura Ring before and thought it looked cool, but I missed that it basically requires a subscription, which is wild considering it seems like I get all of the same metrics from my Garmin watch with no subscription required.
There are definitely other approaches that don't require code to be uploaded anywhere. For example, we (https://rezilion.com) work with your package managers to understand what dependencies your program has, and then analyze that metadata on the back end. Net result is still to be able to see what vulnerabilities are truly exploitable and which are not.
Gitlab Ultimate uses Rezilion to accomplish a similar aim. Rather than using the principle of "reachability", Rezilion analyzes at runtime what functions and classes are loaded to memory. Much more deterministic and less of a guess about what code will be called.
How does it do that in the face of lazy loading, or for languages in which "what functions and classes are loaded into memory" is not really a thing (e.g. C)?
Shouldn't this be very easy in C? With static linking, you're vulnerable if you're linking the package. With dynamic linking, you're vulnerable if you're importing the specific functions. Otherwise, you're not vulnerable - there's no other legal way to call a function in C.
Now, if you're memory mapping some file and jumping into it to call that function, good luck. You're already well into undefined behavior territory.
Now, for lazy loading, I'm assuming the answer is the same as any other runtime path analysis tool: it's up to you to make sure all relevant code paths are actually running during the analysis. Presumably your tests should be written in such a way as to trigger the loading of all dependencies.
I think there's really no other reasonable way to handle this, though I can't say I've worked with either Gitlab Ultimate or Rezilion, so maybe I'm missing something.
Hey, I work on OP's product, and just wanted to mention that reachability is not always about a function being called. Sometimes insecure behavior is triggered by setting options to a certain value[0]. Other times it's feasible to mark usages of an insecure function as safe when we know that the passed argument comes from a trusted source[1]. The Semgrep rules we write understand these nuances instead of just flagging function calls.
Rezilion works at runtime when the Gitlab runner spins up a container for testing the app. Rezilion observes the contents of memory and can reverse-engineer back to the filesystem to see where everything was loaded from.
In the CI pipeline this depends on your tests exercising the app, but when you deploy Rezilion into a longer-lived environment like Stage or Prod you may catch some additional code pathways being used, although most find that the results don't differ much between environments.
Ah, thank you. It's not entirely clear whether this is something baked into Gitlab Ultimate's SAST CI/CD feature/template, or if it's a third party that I would have to license first. Do you happen to know?
>Unfortunately, no technology currently exists that can tell you whether a method is definitively not called, and even if it is not called currently, it’s just one code change away from being called. This means that reachability should never be used as an excuse to completely ignore a vulnerability, but rather reachability of a vulnerability should be just one component of a more holistic approach to assessing risk that also takes into account the application context and severity of the vulnerability.
Err, "no technology currently exists" is wrong; it should read "no technology can possibly exist" to say whether something is definitively called.
It's an undecidable problem in any of the top programming languages, and some of the sub problems (like aliasing) themselves are similarly statically undecidable in any meaningful programming language.
You can choose between over-approximation or under-approximation.
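A toy sketch of that choice (the call graph and function names here are made up for illustration, not the output of any real analyzer): a static analysis over-approximates by following every call edge it can see in the source, including ones behind branches that may never execute, while a runtime trace under-approximates by reporting only what actually ran.

```python
# Toy illustration of over- vs under-approximation in reachability analysis.
# CALL_GRAPH and all function names are hypothetical.

from collections import deque

# Static call graph: every call site visible in the source, including
# calls guarded by branches that may never execute.
CALL_GRAPH = {
    "main": ["parse_input", "render"],
    "parse_input": ["legacy_decode"],  # only invoked behind a rare flag
    "render": [],
    "legacy_decode": [],               # the hypothetical vulnerable function
}

def static_reachable(entry):
    """Over-approximation: everything transitively callable from entry."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(CALL_GRAPH.get(fn, []))
    return seen

# Under-approximation: a runtime trace from one test run that never hit
# the rare flag, so legacy_decode was never observed executing.
observed_trace = {"main", "parse_input", "render"}

print("legacy_decode" in static_reachable("main"))  # True: flagged statically
print("legacy_decode" in observed_trace)            # False: never seen at runtime
```

The gap between the two sets is exactly the disagreement in this thread: the static result can't miss a call but may flag dead code, and the runtime result can't flag dead code but may miss a call.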
Shameless plug: my company has launched an alternative authentication modality that eliminates this issue, yet is also app-less and requires no download. It is also not limited in the ways FIDO authenticators are today, yet provides the same level of security because it can actually leverage FIDO authenticators if needed. Soft launch was last week. Email in my profile if this piques your interest and I can send more info directly.
It's really unclear to me what your product is. I would have given up trying to figure out if you hadn't implied that it's somehow related to what this cool blog post accomplishes.
From the FAQ, my best guess is that you offer businesses bank accounts that come bundled with the reporting you'd get with something like Mint/Quickbooks?
It's OK to basically be two services glued together, but why are you better than just the two existing services, were they to be glued together?
I clicked thinking it was a sci-fi piece about a future where historians inspect the remains of our current society. Amazon had grown to be so significant that future generations regarded this whole period and all our cities as Amazon.
At one place I worked, our commercial (wordpress) site got hacked and defaced by some Turkish outfit. The devs at our company reverted the site, and were joking about the hackers just being script kiddies. They didn't seem to understand my point of "... but we were hacked by those script kiddies, why are we laughing?"
> If I shine my high beams on your car, or have a cellular phone call next to well - too bad. I'm entitled to bombard you with radiation if I want to.
I think this is a really good point, actually. Do all these electronic devices need to be FCC approved? And if so, how does the dosage from these vans jibe with FCC regulations?
That's equivocation at its best. Candles do not emit ionizing radiation in measurable quantities. X-Ray machines do. There's a huge distinction, similar to the distinction we make between shining an incandescent bulb up at a helicopter or passenger plane, and shining a laser pointer.
So we regulate X-ray machines to keep radiation leakage below the level that would oblige operators to ask people in the area for consent. GP has a point. The concern in this thread is based entirely on the notion that "radiation" is a scary-sounding thing.
No, the concern in this thread is based entirely on the notion that innocent people are potentially being exposed to X-rays, without their knowledge or consent.
You're arguing against a strawman. HN commenters aren't idiots. The technology in question is using ionizing X-rays, which can have harmful health effects.
I never assume HN commenters are idiots. That's the very reason I hang out here - I learn a lot from what people say in this place.
Having said that, this thread is dominated by commenters who refuse to do the math. X-rays sound scary but they aren't death rays; if you add up the numbers you'll see that it's not even worth talking about them here. There's a higher probability they'll harm someone by putting more cars onto the streets.
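For anyone who actually wants to run the numbers, here's a rough sketch. Every dose figure below is an order-of-magnitude public estimate (e.g. ~0.05 µSv for a backscatter-style scan, ~3 mSv/year of natural background), not a measurement of these specific vans:

```python
# Back-of-the-envelope dose comparison. All figures are rough public
# order-of-magnitude estimates, not measurements of the vans in question.

uSv_per_scan = 0.05         # assumed: one backscatter-style scan
uSv_dental_xray = 5.0       # typical dental X-ray
uSv_background_year = 3000  # average annual natural background (~3 mSv)

background_per_day = uSv_background_year / 365
scans_per_day_of_background = background_per_day / uSv_per_scan

print(f"One day of natural background ~ {background_per_day:.1f} uSv, "
      f"or roughly {scans_per_day_of_background:.0f} scans")
print(f"One dental X-ray ~ {uSv_dental_xray / uSv_per_scan:.0f} scans")
```

Under these assumptions a single scan is a small fraction of a normal day's background dose, which is the sense in which "do the math" is meant; it says nothing about the separate consent and malfunction arguments below.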
What if there is a machine malfunction? Lack of proper maintenance? Operator error? Sure there should be fail-safes, but accidents happen.
Bouncing around in the back of a van, something goes wrong. Instead of a rapidly moving beam of x-rays that doesn't stay in one location long enough to cause problems, you get a focused beam.
Those are fair concerns. I haven't seen an estimate of the maximum amount of radiation the X-ray machine could generate when malfunctioning. We need that to see whether or not there is something to worry about.
If it works anything like the backscatter machines the TSA was using (rapidly moving beam), a failure mode like what I mentioned is definitely enough to be dangerous. There was a physics professor from Arizona State who said that the TSA machines could even cause radiation burns if the fail-safes failed and the beam kept going while stuck in one position.
It may not work that way though. Who knows? That's the problem with being secretive about the whole thing. They don't publicly release data on potential malfunctions.
One of my biggest concerns is the amount of training the operators receive on recognizing and reporting potential malfunctions. I suspect it's not sufficient, but again who knows? They won't talk about it because terrorists.
I agree, those are causes for concern. It doesn't matter that they're using radiation - whether they used chemicals, biological agents or voodoo, the issues of proper training and secrecy around a device that fails dangerously are a serious matter. These vans should be opposed as everything from a waste of tax money to a public safety issue, just not because of the radiation per se.
I disagree. I think it is worth it to talk about them, for three main reasons:
1. Even if this machine only gives off radiation equivalent to, say, a dental X-ray, I still did not give my consent for it, whereas I did give my consent for the dental X-ray. Consent is important. There are a lot of things that are OK with someone's consent, but not OK otherwise, and we don't have the right to make these decisions for someone.
2. Medical X-rays have positive expected trade-offs for receiving the dose of radiation. Compare this to an X-ray police van that I happen to walk past; there's not really any argument to be made for why my exposure to this X-ray radiation is worth it for me.
3. The machine can malfunction, or be used improperly, and give off substantially more X-ray radiation than expected or designed. Safeguards are in place for other uses of X-rays to protect against this, e.g. you wear a lead apron to protect the rest of your body during a dental X-ray, and the assistant taking the X-ray leaves the room entirely. Safeguards of this sort aren't possible with an unknowing public.
There's also of course the entire privacy aspect that we haven't even delved into yet, but the health issues alone are concerning.
So, in any case, the FCC has set limits for acceptable amounts of radiation exposure from certain classes of the electronics that we find all around us every day, and I guess I was just wondering how those limits would compare to the radiation dosage of these vans.
It might not be approved by the FCC, but if it's equivalent to something that the FCC would allow you to carry around in your pocket anyway, it's hard to justify being upset about these vans driving around, at least from the perspective of being exposed unawares to their radiation.
There are a number of other reasons to be upset about such vans, though.
So why don't you try the laser experiment? Use a 1 milliwatt laser. That's _less_ energy than the light bouncing off your skin. That's nothing. And it's non-ionizing radiation. So it's totally safe. Point it right in your eye. See what happens. By the completely oversimplified thinking you're undertaking here, you will be fine. Right?
I meant that light bouncing from skin to eye is a proper analogy for banana dose.
But since you challenge me - I've shined the usual ~5mW red laser pointers straight into my eye many times, and had strangers shine them at me as well. It's annoying, but it doesn't hurt. My vision is OK. I'm totally fine.