> Tor does it by being so painfully slow and unreliable
I do 95% of my web browsing via Tor Browser and it is very tolerable; most circuits are fast enough for 1080p video (YouTube, Twitch livestreams, etc.) without any buffering.
Of course that's with a single Tor circuit and an exit node, so speeds are slower when going directly to .onion sites, but the only real slowness comes from the latency, not the throughput.
The camera isn't the part doing that verification. The Google service behind that "reCAPTCHA" is what's doing the validation. Unless you're using a custom browser that reports a different domain to Google than the one requesting the reCAPTCHA, Google's service will know which domain is which.
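To make the point concrete, here's a sketch of that server-side check, assuming Google's documented siteverify response format (the `success` and `hostname` fields). In real use you'd POST the token to `https://www.google.com/recaptcha/api/siteverify` with your secret key; here we just validate an already-parsed response, and the domain names are made up.

```python
def captcha_is_valid(siteverify_response: dict, expected_domain: str) -> bool:
    # Google reports the hostname of the page that actually solved the
    # challenge, so a token solved on attacker.example can't be replayed
    # against shop.example even though the token itself is genuine.
    return (siteverify_response.get("success") is True
            and siteverify_response.get("hostname") == expected_domain)

# A token solved on the right page passes...
print(captcha_is_valid({"success": True, "hostname": "shop.example"}, "shop.example"))    # → True
# ...a token solved elsewhere is rejected, regardless of the camera.
print(captcha_is_valid({"success": True, "hostname": "attacker.example"}, "shop.example"))  # → False
```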
It would be generated by some other website like Amazon. Because I own, say, Meta, I copy these Amazon-generated codes over to Meta, make people scan them on their phones to sign into Meta and then pass the solution back to Amazon so my bots can sign into Amazon.
We don't yet know how the client side works, perhaps there will be a decompilation posted soon.
It's possible this scenario is acceptable to them because it means they can still tie your access to something that's easier to ban without requiring a full account login.
What are you implying? That it will become ineffective due to that?
That's possible... and they might change their mind if so, we will see.
I feel like it's a similar issue to when scrapers pretend to be an allowed-origin webpage in order to abuse "public" API keys for web services.
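For anyone who hasn't seen that abuse in practice: the Origin header is entirely client-controlled, so a scraper can simply claim to be the allowed page. The URL and header values below are made up for illustration; no request is actually sent.

```python
import urllib.request

# A scraper "borrowing" a public API key by forging the Origin header.
req = urllib.request.Request(
    "https://api.example.com/v1/data?key=PUBLIC_KEY",
    headers={"Origin": "https://allowed-site.example"},  # spoofed
)
# The server just sees "Origin: https://allowed-site.example" and has
# no way to tell it was forged, which is why origin allowlists alone
# don't protect "public" keys.
print(req.get_header("Origin"))  # → https://allowed-site.example
```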
They could also require the mobile device to interact with the requesting webpage in some manner, similar to mutual PIN/codes for Bluetooth/TV pairing these days. That way bulk sharing of the codes would still require active participation from the device that requested it in the first place, likely with a short time limit.
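A minimal sketch of that pairing idea, with made-up names and a made-up 60-second TTL: the requesting page mints a short-lived, one-time code that the phone must echo back before it expires, so codes can't be stockpiled and relayed in bulk later.

```python
import secrets
import time

PAIRING_TTL_SECONDS = 60
_pending = {}  # code -> expiry timestamp (server-side state)

def issue_pairing_code() -> str:
    # 6-digit code, like Bluetooth/TV pairing PINs.
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[code] = time.monotonic() + PAIRING_TTL_SECONDS
    return code

def redeem_pairing_code(code: str) -> bool:
    expiry = _pending.pop(code, None)  # one-time use: pop, don't get
    return expiry is not None and time.monotonic() <= expiry
```

Because the code is popped on first redemption and dies after the TTL, bulk sharing would require the original requesting device to keep actively minting fresh codes.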
Don't you wanna level up your career and become an architect? You can draw a box, call it "User Management", slap "Clerk" or some other SaaS on it, and assume it's managed for you. This lets you shove whatever requirements you want into that magic black box, since you feel "it doesn't bring value" for you to implement.
But think of how comfortable and productive the monkey will feel. It might not be that hard to just build temp housing for it while you have monkey business to do.
The cope of some people is insane. Why even have UID:GID? All you need is 0:0. I always tell people to run everything as root because there is literally no point.
Well, there's still value in users and namespaces! Just, it's not a strong security boundary.
Also even if it's not strong, it doesn't mean it's entirely worthless. You can't rely on it, but it's usually free and it still buys you time / increases attack cost.
Like, if you leave 100k cash in a car on the street in SF, that's dumb. If you really need to do that for some strange reason, you should hire a security guard to watch your car, because cars are not a good security boundary. BUT, that doesn't mean you would leave the car unlocked just because someone's watching it!
6 minutes later an HN submission "GitHub blocks your account if you mention X in the README" with a top comment "This is absurd, are they just doing regex matching to check for malware?"
Yeah, and ultimately nobody cares. Everyone assumes it's just some process miss, and we need to add another step to the process and move on. Fuck-ups that would have killed the credibility of a project 10 years ago are now treated as "eeh, what are you gonna do. Sometimes you ship malware. Will look into it."
Some of us are very aware and concerned about the risk. But like Cassandra from Greek mythology, we see the coming disaster and feel powerless to stop it.
Personally I was sus of Tailscale, but stopped even pondering it after they got an RCE.
Same with npm and its large dependency trees full of 10.5-line libraries of low quality.
Lightning always seemed to be the leftpad of PyTorch. It was basically a replacement for a for loop and a couple of backward/step calls. I'm sure it has grown to replace a few more lines of code by now, though. Like maybe 100.
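For anyone who hasn't written that loop: the "for loop and a couple of backward/step calls" being wrapped looks roughly like this. This is a torch-free toy version (minimizing (w - 3)^2 with a hand-computed gradient) so it runs anywhere, but the forward/backward/step shape is the same boilerplate Lightning abstracts away.

```python
# Toy training loop: minimize (w - 3)^2 with plain gradient descent.
w = 0.0    # our one-parameter "model"
lr = 0.1   # learning rate

for epoch in range(100):
    loss = (w - 3.0) ** 2      # forward pass
    grad = 2.0 * (w - 3.0)     # backward pass (d loss / d w, by hand)
    w -= lr * grad             # optimizer step

print(round(w, 3))  # → 3.0 (converged to the minimum)
```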
If you want to look for a coming disaster, look no further than the HuggingFace libraries that, for some reason, quite a lot of projects use these days, especially the transformers package. Sadly even vLLM depends on it.
More like hiding their heads in the sand in circumstances that are outside of their ability to fix. None of the tooling or practices out there push you toward not being at risk, or even provide easy ways to stay completely safe. There's no stack where everything you NEED to develop software is provided out of the box with no external packages. There's no flow where pulling in a new package makes you review all of its source code line by line and compile everything instead of trusting binary blobs. There's no built-in vulnerability and configuration scanning of the kind Trivy does, so you don't get pwned or leave an open S3 bucket somewhere, which also means you'd obviously need thorough observability and alerting for any of the cloud stuff you do.
And even when such things exist, your org's projects might be too painfully out of date to use those approaches, or the org culture might not be there, or any number of other issues I can't even imagine. On one hand, people are running out-of-date software with known CVEs; on the other, using dependencies that are too new puts you at risk of compromised packages. It's like we're being squeezed by rocks on both sides in a landslide or something. Even at the OS level, the fact that everyone is not running something like Qubes OS or regular VMs for development is absolutely insane. The fact that all software isn't sandboxed and that desktop OSes don't prompt for permissions like mobile apps do is absolutely insane. That we don't have firewalls like GlassWire as standard, ones that prompt you about external connections and let you easily block what you don't trust, is insane.
Despite lots of people trying their best, on some level everything both up and down the stack is absolutely fucked for a variety of complex reasons. You'd have to largely tear it all down and rebuild everything, starting with your OS kernel, in a memory-safe language with formal proofs and thorough testing for everything (if it took SQLite as long as it did to get a decent test suite, it might well take on the order of decades for a production OS kernel and drivers), then do the same for all userland software and DBs and tooling and dependency management and secrets management (not just random files; special hardware, most likely), and so on. It's not happening, so we just build towers of cards.
Are you talking about open source or commercial products?
I can't speak for the PyTorch Lightning case, but I wouldn't be surprised if the maintainers didn't get any $ from it. They would be sad if the credibility of the package suffered, but ultimately it wouldn't make a big difference to them.