ActiveSupport adds tons of great convenience methods to Ruby, and you can require it even outside of Rails! blank? and present?; date and time conversions like 1.month.from_now; Hash#except; enumerable methods like pluck... It's just lovely.
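For anyone who hasn't tried it standalone, a quick sketch (loading all the core extensions for brevity; you can also cherry-pick individual requires):

```ruby
require "active_support"
require "active_support/core_ext" # pull in all the core extensions at once

"".blank?                  # => true
"hi".present?              # => true
1.month.from_now           # => a time one month ahead of now
{ a: 1, b: 2 }.except(:a)  # => { b: 2 }
[{ name: "Ada" }, { name: "Grace" }].pluck(:name) # => ["Ada", "Grace"]
```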
1. Almost all mobile devices used by adults to access their work email have provisioning profiles that allow trusted certificates to be installed by one’s employer.
2. Plenty of authoritarian countries require trusting CAs operated by the government. If you have users in those countries, they are vulnerable to snooping.
Your blog post makes it seem like users vulnerable to MITM attacks are in the minority, when in fact they are likely in the vast majority.
You tell a fun anecdotal narrative, but in other cases those workers show up unannounced and threaten to separate families if they aren't allowed entry, even though of course that's unconstitutional. You can't reduce it to a single story here.
That's its own mess, not relevant to the parent's point or anything having to do with corrupt private "auditors" turning a blind eye to migrant child labor.
When the FBI took down the FLDS church, with hundreds of child brides who had kids of their own, the FBI and social services essentially threw up their hands, said this was too big a problem, and handed the child brides and the children of child rape back to the abusers to carry on. Watch the documentary on Netflix.
Child protective services (CPS) is mostly a joke all over the world, and it all stems from the same problem: if you take a child from the parents, someone has to spend 24x7 for 18 years raising that child. There are simply far more abuse cases happening than there are people willing to raise other people's children. Making a child takes moments; the resources needed to raise one for 18 years create an imbalance that cannot be reconciled.
This is true: by default, Android apps do not trust user-installed certificate authorities. IMO the easiest solution, if you're doing security testing on a dedicated device, is MagiskTrustUserCerts[1]. If you're not testing on a dedicated device, or you don't want to root the device, I'd recommend the objection[2] tool, which has a guided mode for patching an APK; you can modify the manifest to add your own CA or to trust all user-installed CAs, as sketched below.
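For reference, a minimal sketch of that manifest change (file and resource names below are placeholders): add a network security config that keeps the system CAs and additionally trusts user-installed ones.

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <!-- also trust user-installed CAs (testing builds only) -->
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
```

Then point the manifest at it with android:networkSecurityConfig="@xml/network_security_config" on the <application> element.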
I am not sure how this is getting so many upvotes, but the claims in the readme do not appear to be supported by evidence. Those claims are so extreme that they simply aren't believable without much more evidence, and what is actually in the readme appears to be mostly rambling, or attempts to get readers to look at other projects of the author's.
Visual Studio Code has built-in support for remote development. It installs an agent on the remote server over SSH that does essentially whatever you would be doing locally (e.g. viewing, editing, searching) and sends only the minimal results you need over the network.
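As a rough sketch of the setup, the Remote - SSH extension picks up hosts from your ordinary SSH config, so an entry like this (host name, user, and key path are placeholders) is typically all it needs; you then pick the host from the "Remote-SSH: Connect to Host..." command:

```
# ~/.ssh/config
Host devbox
    HostName devbox.example.com
    User alice
    IdentityFile ~/.ssh/id_ed25519
```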
I agree with the principle, but the way these arguments have been summarized here has led to near-complete strawmanning. It's like the author started from the blog title and then came up with their own contextless, binary arguments.
Certifications: The typical arguments against security certifications are not that they "don’t represent the full spectrum of skills a professional needs" but instead that many of them teach outdated, useless, or actively negative practices. Then they're used as an advertising tool and organizations with less security expertise are told they must hire based on certifications rather than actual skill.
Compliance: "compliance is counterproductive for security." Most security practitioners don't necessarily like compliance primarily because it's not enjoyable for them. It distracts them from the tasks that they want to be working on. In most cases compliance is orthogonal to security. In some cases it can certainly be counterproductive (e.g. government compliance programs requiring outdated crypto).
Management: The typical refrain "management doesn't spend enough on security / take risks seriously" has been turned into "management doesn’t care about security because they don’t fund every single thing the security team asks for". I mean, it's obvious that the argument wasn't taken seriously by the author just based on how they wrote that.
I have been doing infosec consulting, appsec, penetration testing, threat modeling, risk assessment, etc. for 15 years, and that was my take on the article. It is a nice discussion piece but a little one-sided. They kept erecting straw men that don't really reflect the nuanced opinions of most of my peers. On Twitter and social media, some luminaries are really prone to hot takes, and it could be easy to assume that is reflective of the industry as a whole (and especially of the author's opinion). Often it is neither.
One other vexing thing about this industry is that it is very deep. You will often see folks with a deep background in, say, reversing come out with really strong opinions on some other topic, such as phishing, even though they are little more than observers of that aspect of infosec. Reversing doesn't qualify you to be a CISO, etc. I just made my own straw man there, but it's a truism in my opinion.
The core thrust of the article is reasonable though. Often we want an amazing solution or a big win when improving something even a little is a real improvement from a security perspective. A lot of little wins in an organization can really add up to changing its security culture, etc. I would ultimately agree the saying “perfect is the enemy of good” applies in the security world.
> Compliance: "compliance is counterproductive for security." Most security practitioners dislike compliance primarily because it's not enjoyable for them.
I have a B2B micro-ISV in the cyber security space, largely targeting a compliance niche - you get out what you put in.
I have customers that treat compliance as nothing more than a pointless burden; a series of boxes to be ticked, "check-box compliance" - all they want is to prove to their auditors that they are following the letter of the compliance standard. I imagine security consultants see this kind of thing a lot, and it's easy to see why they might view compliance negatively.
However, I also have customers that look past the letter of their compliance standards, and look towards the intent - these customers get a lot more out of it, and are actually increasing their security posture as their compliance standards intended.
> It's like the author started from the blog title and then came up with their own contextless, binary arguments.
Most of the arguments are actually quite common in Twitter's infosec communities. It's common to read smug tweets dunking on certifications, security through obscurity, or management in terms very similar to the strawman arguments in the article.
Not coincidentally, Twitter isn’t a great place to get good infosec advice. It’s too focused on calling out less-than-perfect solutions from a safe distance rather than actually examining practical security in the real world. This article makes a good point of showing the difference and would be useful for newcomers who might be confused.
Counterpoint: Security admins are overwhelmed with process and tasks to keep the trains running, and perceive they don't have time to go back and clean up bad configs. If something isn't done right the first go-round, it will never be right.
Compliance is the bludgeon that says "go make this right". Then security admins bitch about not having the time, and we say we don't have enough people in the industry.
Automate the boring stuff. We do have a shortage - a shortage of people who are creative enough and talented enough to script their toil away.
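As a sketch of what scripting that toil away can look like (hypothetical paths and a deliberately tiny rule set; a real config audit would check far more):

```ruby
#!/usr/bin/env ruby
# Flag a couple of obviously weak sshd settings across config files.
BAD_SETTINGS = {
  "PermitRootLogin"        => "yes",
  "PasswordAuthentication" => "yes",
}

Dir.glob("/etc/ssh/sshd_config*").each do |path|
  next unless File.file?(path)
  File.readlines(path).each_with_index do |line, i|
    key, value = line.split
    next unless key && value
    next unless BAD_SETTINGS[key] == value
    puts "#{path}:#{i + 1}: #{key} #{value} (flagged)"
  end
end
```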
I work in a team of 100+ cyber professionals and consume the typical infosec content that's out there. None of the authors I know, nor any of my peers, argue in this presumed way. Additionally, as everyone in cyber knows, every answer to any question should start with "it depends". That's also how I experience knowledge exchange between peers most of the time.
Great comment, which I upvoted for accuracy, because this is how the real professionals in the industry talk.
A great example of this is the debate around fail-open and fail-closed in different scenarios.
Depending on the system, the function, the security objectives underlying it, and the way in which success or failure is determined, a decision can eventually be reached about what is optimal for an organization in a particular case.
It is completely consistent to argue for fail-closed on an internet-facing system with a big attack surface and low availability requirements, while simultaneously proffering fail-open for a mission-critical industrial control system with strong physical protections that sits in a locked-down, closed-off, unpivotable environment where work stoppage is a serious threat. Basically, something unlike Colonial Pipeline..... :)
Signal could add app-level encryption, but who would this serve? Signal can't do anything better than what the OS/hardware provides in terms of encryption. Even if they let you specify your own signal-specific password/encryption key:
* Non-technical users either won't use it, or will use a weak key
* Technical users are better served by making sure their device is secure and hard-locked with a strong passcode (tip: 5 presses of the lock button on iPhone wipes in-memory encryption keys, essentially exiting "AFU mode")
> (tip: 5 presses of the lock button on iPhone wipes in-memory encryption keys, essentially exiting "AFU mode")
Is this the same thing as holding down the lock button and one of the volume buttons on one of the newer iPhones? I'm referring to this doc: https://support.apple.com/en-us/HT208076
Yes, it's basically a side effect of activating Emergency SOS. The five-press shortcut works on all iPhones as far as I'm aware. As the doc says:
"If you use the Emergency SOS shortcut, you need to enter your passcode to re-enable Touch ID, even if you don't complete a call to emergency services. "
I have an iPhone X and I have it set to not use FaceID for unlocking the phone itself.
But I temporarily enabled it now to test. Maybe I am pressing the power button wrong, but rapidly pressing it five times does not prevent FaceID from unlocking the phone, whereas power plus volume up does.
Btw, when I normally have FaceID disabled from unlocking the phone, does a single press of the power button to lock it wipe the in-memory encryption keys or not? I was assuming that it did, but I realize now that this assumption might not be correct.