Looks like CyberNews have edited the article with more info since I first saw it; it used to look quite suspicious and untrustworthy, and it now has more detail. It still doesn't say exactly what a record is, or how many uniques there are.
I presume the database exists, but some of the details don't add up. IDMerit say "IDMERIT’s systems and security infrastructure have never been compromised", "there has never been a data breach or exfiltration from [our partners'] systems during, before, or after this event" and "IDMerit does not own, control or store customer data". But Cybernews says that they "promptly secured the database" after being notified. Cybernews also didn't give the reason why they thought this was to do with IDMerit (unless I missed it). I can't quite make head nor tail of it.
It's a weird article. For one, the researcher says "they believe" the data belongs to IDMerit but apparently aren't sure. IDMerit denies it's the owner of the data, and denies it belongs to any of their partners. And there are very few details about where or how they found this database. Is it possibly some kind of hoax or ransom attempt? Or are there really just billions of unaccounted-for databases of private data sitting all over the Internet?
The cybernews article does have some screenshots showing names like “idmb2c” … also that IDMerit was contacted in November and the ports were closed a day later.
- IDMerit asked the security researcher for proof, the researcher asked for money first, so IDMerit balked
- IDMerit basically says they have no proof they were hacked, so they weren't
- The researcher is a freelancer... for CyberNews...
Even if somebody followed up with IDMerit, they will likely say they are not affected. The security researcher is probably the only person who could prove whether or not they were vulnerable, at this point. If they don't come forward, we can only assume they weren't vulnerable, but we don't know. This is a good lesson for responsible disclosure in the future.
...also, this is yet another example of why we need a regulated Software Building Code, with penalties for not conforming to it. If somebody is found to be hosting a public Mongo instance with no authentication, it should be reported to a state or federal agency, so that real penalties can be applied, the way they are for other code violations. And they shouldn't have been allowed to launch with that in the first place. It shouldn't be up to random "security researchers" to police businesses.
So this is a good thing even for coreutils itself, they will slowly find all of these untested bits and specify behaviour more clearly and add tests (hopefully).
Man, if I had a nickel every time some old Linux utility ignored a command-line flag I'd have a lot of nickels. I'd have even more nickels if I got one each time some utility parsed command-line flags wrong.
I have automated a lot of things that execute other utilities as subprocesses, and it's absolutely crazy how many utilities handle CLI flags in ways that seem correct but really aren't.
This doesn't look like a bug, that is, something overlooked in the logic. This seems like a deliberately introduced regression. Accepting an option and ignoring it is a deliberate action, and not crashing with an error message when an unsupported option is passed must be a deliberate, and wrong, decision.
It certainly doesn't look intentional to me; it looks like at some point someone added "-r" as a valid option, but until this surfaced as a bug, no one actually implemented anything for it (and the logic happens to fall through to using the current date).
It's wrong (and coreutils get it right) but I don't see why it would have to be deliberate. It could easily just not occur to someone that the code needs to be tested with invalid options, or that it needs to handle invalid options by aborting rather than ignoring. (That in turn would depend on the crate they're using for argument parsing, I imagine.)
Could parsing of `-r` have been added without anyone noticing it somehow?
If it was added in bulk, with many other still unsupported option names, why does the program not crash loudly if any such option is used?
A fencepost error is a bug. A double-free is a bug. Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.
At least from what I can find, here's the original version of the changed snippet [0]:
let date_source = if let Some(date) = matches.value_of(OPT_DATE) {
    DateSource::Custom(date.into())
} else if let Some(file) = matches.value_of(OPT_FILE) {
    DateSource::File(file.into())
} else {
    DateSource::Now
};
And after `-r` support was added (among other changes) [1]:
let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
    DateSource::Human(date.into())
} else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
    match file.as_ref() {
        "-" => DateSource::Stdin,
        _ => DateSource::File(file.into()),
    }
} else if let Some(file) = matches.get_one::<String>(OPT_REFERENCE) {
    DateSource::FileMtime(file.into())
} else {
    DateSource::Now
};
Still the same fallback. Not sure one can discern from just looking at the code (and without knowing more about the context, in my case) whether the choice of fallback was intentional and handling the flag was forgotten about.
> Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.
No, it doesn't. For example, you could have code that recognizes that something "is an option", and silently discards anything that isn't on the recognized list.
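As a sketch of that failure mode (toy code, Python for brevity, hypothetical names):

```python
def parse_args(argv, known_flags):
    """Toy hand-rolled parser: collects recognized flags and positionals.

    Anything that looks like a flag but isn't in known_flags is silently
    dropped -- no error, no warning.
    """
    opts, positional = set(), []
    for arg in argv:
        if arg.startswith("-"):
            if arg in known_flags:
                opts.add(arg)
            # Unknown flag: falls through here and is simply ignored.
        else:
            positional.append(arg)
    return opts, positional

# "-r" vanishes quietly; the caller never learns it was unsupported.
opts, rest = parse_args(["-r", "somefile"], known_flags={"-u", "-d"})
```

The wrong behavior here isn't a line anyone wrote; it's the missing `else: raise` for unrecognized flags, which is exactly the kind of thing that gets omitted by accident rather than by deliberate decision.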
I would say that Canonical is more at fault in this case.
I'm frankly appalled that an essential feature such as system updates didn't have an automated test that would catch this issue immediately after uutils was integrated.
Never mind the fact that this entire replacement of coreutils is being done for financial and political rather than technical reasons, and that they're willing to treat their users as guinea pigs. Despicable.
What surprises me is that the job seems rushed. Implementation is incomplete. Testing seems patchy. Things are released seemingly in a hurry, as if meeting a particular deadline was more important to the engineers or managers of a particular department than the quality of the product as a whole.
This feels like a large corporation, in the bad sense.
Yes, it was basically gone for a decade or more. There’s no shared code. Though I’m sure they may have looked at the old code for inspiration for some of the Win32 stuff.
Besides the ecosystem issues, for the phishing part, I'll repost what I responded somewhere in the other related post, for awareness
---
I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:
TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.
I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.
What really helps against phishing:
1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
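To make point 2 concrete: a TOTP code is a pure function of the shared secret and the clock; the site's identity never enters the computation, so a phishing page that relays the code in real time gets a valid login. A WebAuthn/U2F signature, by contrast, covers the origin the browser actually sees, so a relayed response fails verification. A minimal RFC 6238 sketch (standard library only):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic
    truncation. Note that no domain or origin appears anywhere."""
    counter = unix_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T=59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

Whoever holds the secret and a clock can mint valid codes; there is nothing for the verifier to check about *where* the user typed it.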
> 1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.
Sites that replace password login with a flow where you initiate the login and then click a "magic link" in your email client are awful for developing good habits here, or for giving good general advice.
:c
In that case it's the same as a reset-password flow.
In both cases it's good advice not to click the link unless you initiated the request. But with the auth token in the link, you don't need to login again, so the advice is still the same: don't login from a link in your email; clicking links is ok.
Clicking links from an email is still a bad idea in general because of at least two reasons:
1. If a target website (say important.com) sets its session cookies without SameSite protection and has no CSRF defenses, a 3rd-party website can make your browser send requests to important.com with your cookies attached, if you're logged in there. This depends on important.com having done something wrong, but the result is as powerful as getting a password from the user. (This is called cross-site request forgery, CSRF.)
2. They might have a browser zero-day and get code execution access to your machine.
If you initiated the process that sent that email and the timing matches, and there's no other way than opening the link, that's that. But clicking links in emails is overall risky.
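On point 1, the property that enables CSRF is that browsers historically attached cookies to cross-site requests by default; the relevant browser-side mitigation is the cookie's SameSite attribute (CORS governs whether the attacker can *read* the response, not whether the request is sent). A simplified model of the send-cookie decision:

```python
def browser_attaches_cookie(samesite: str, cross_site: bool, top_level_get: bool) -> bool:
    """Simplified model of when a browser sends a cookie with a request.

    samesite: the cookie's SameSite attribute ("None", "Lax", or "Strict").
    Real browser behavior has more corner cases; this is only a sketch.
    """
    if not cross_site:
        return True          # same-site requests always carry the cookie
    if samesite == "None":
        return True          # legacy behavior: the CSRF-prone case
    if samesite == "Lax":
        return top_level_get # roughly: only top-level navigations (link clicks)
    return False             # "Strict": never sent cross-site

# A hidden cross-site POST (the classic CSRF vector) only carries the
# session cookie if it was set with SameSite=None:
assert browser_attaches_cookie("None", cross_site=True, top_level_get=False)
assert not browser_attaches_cookie("Lax", cross_site=True, top_level_get=False)
```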
1 is true, but this applies to all websites you visit (and their ads, supply chain, etc). Drawing a security boundary here means never executing attacker-controlled Javascript. Good luck!
2 is also true. But also, a zero day like that is a massive deal. That's the kind of exploit you can probably sell to some 3 letter agency for a bag. Worry about this if you're an extremely high-value target, the rest of us can sleep easy.
I watched a presentation from Stripe internal eng that was given I forget where.
An internal engineer there who did a bunch of security work phished about half of her own company (as a test, obviously). Her conclusion, in a really well-done talk, was that preventing it is impossible: no human measures will reduce it, given her success rate at a very disciplined, highly security-conscious place.
The only thing that works is yubikeys which prevent this type of credential + 2fa theft phishing attack.
> At Stripe, rather than focusing on mitigating more basic attacks with phishing training, we decided to invest our time in preventing credential phishing entirely. We did this using a combination of Single Sign On (SSO), SSL client certificates, and Universal Second Factor (U2F)
I receive Google Doc links periodically via email; fortunately they're almost never important enough for me to actually log in and see what's behind them.
My point, though, is that there's no real alternative when someone sends you a doc link. Either you follow the link or you have to reach out to them and ask for some alternative distribution channel.
(Or, I suppose, leave yourself logged into the platform all the time, but I try to avoid being logged into Google.)
I don't know what to do about that situation in general.
A Firefox plugin/feature, probably available on other browsers as well. It is useful for siloing cookies, so you can easily be logged into Google in one set of browser tabs and block their cookies in another.
As for any of these cases, we do receive legitimate emails that require being logged in, Google or otherwise
The answer is simple: use your bookmarks/password manager/... to login yourself with a URL you control in another tab and come back to the email to click it
(and if it still asks for a login then, of course still don't do it)
A browser-integrated password manager is only phishing-proof if it's 100% reliable. If it ever fails to detect a credential field, it trains users that they sometimes need to work around this problem by copy-pasting the credential from the password manager UI, and then phishers can exploit that. AFAIK all existing password manager extensions have this problem, as do all browsers' native password-management features.
It doesn't need to be 100% reliable, just reliable enough.
If certain websites fail to be detected, that's a security issue on those specific websites, as I'll learn which ones tend to fail.
If they rarely fail to detect in general, it's infrequent enough to be diligent in those specific cases. In my experience with password managers, they rarely fail to detect fields. If anything, they over-detect fields.
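One reason the integrated manager is such a strong signal: its fill decision is an exact comparison on the page's origin, not a human "looks right" judgment. A toy sketch (hypothetical function; real managers handle subdomains and ports with more nuance):

```python
def autofill_allowed(stored_origin: str, page_origin: str) -> bool:
    # Exact match on scheme + host: no fuzzy "looks similar" logic,
    # which is precisely why a look-alike domain cannot fool it.
    return stored_origin == page_origin

assert autofill_allowed("https://www.npmjs.com", "https://www.npmjs.com")
# A convincing phishing domain never matches, however plausible it reads:
assert not autofill_allowed("https://www.npmjs.com", "https://npmjs.help")
```

The weak link is the human fallback: the moment a user copy-pastes a credential out of the manager UI, this check is bypassed entirely.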
I think it's more appropriate to say TOTP /is (nearly)/ phishing-proof if you use a password manager integrated with the browser (not that it /doesn't need to be/ phishing-proof)
> U2F/Webauthn key as second factor is phishing-proof. TOTP is not.
Last I checked, we're still in a world where the large majority of people with important online accounts (like, say, at their bank, where they might not have the option to disable online banking entirely) wouldn't be able to tell you what any of those things are. They don't have the option to use anything but SMS-based codes for most online services, and maybe app-based TOTP (maybe even a desktop program in rare cases!) for most of the rest. If they even have 2FA at all.
This is the point of the "passkey" branding. The idea is to get to the point where these alphabet-soup acronyms are no longer exposed to normal users and instead they're just like "oh, I have to set up a passkey to log into this website", the way they currently understand having to set up a password.
Sure. That still doesn't make Yubikey-style physical devices (or desktop keyring systems that work the same way) viable for everyone, everywhere, though.
Yeah, the pressure needs to be put on vendors to accept passkeys everywhere (and to the extent that there are technical obstacles to this, they need to be aggressively remediated); we're not yet at the point where user education is the bottleneck.
At least the crowd here should _know_ that TOTP doesn't do anything against phishing, and most of the critical infrastructure for code and other things support U2F so people should use it.
Urgency is also either phishing (log in now or we'll lock you out of your account in 24 hours) or marketing (take advantage of this promotion! expires in 24 hours!).
A guy I knew needed a car, found one, I told him to take it to a mechanic first. Later he said he couldn't, the guy had another offer, so he had to buy it right now!!!, or lose the car.
I mean, real deadlines do exist. The better heuristic is that, if a message seems to be deliberately trying to spur you into immediate action through fear of missing a deadline, it's probably some kind of trick. In this respect, the phishing message that was used here was brilliantly executed; it calmly, without using panic-inducing language, explains that action is required and that there's a deadline (that doesn't appear artificially short but in fact is coming up soon), in a way quite similar to what a legitimate action-required email would look like. Even a savvy user is likely to think "oh, I didn't realize the deadline was that soon, I must have just not paid attention to the earlier emails about it".
Yeah, this particular situation's a bit weird because it's asking the user to do something (rotate their 2FA secret) that in real life is not really a thing; I'm not sure what to think of it. But you could imagine something similar like "we want you to set up 2FA for the first time" or "we want you to supply additional personal information that the government has started making us collect", where the site might have to disable some kind of account functionality (though probably not a complete lockout) for users who don't do the thing in time.
I had someone from a bank call me and ask for my SSN to confirm my identity. The caller ended up being legitimate, but I still didn't give it...like, are you kidding me?
This has happened to me more times than I can count, and it's extremely frustrating because it teaches people the wrong lesson. The worst part is they often get defensive when you refuse to cooperate, which just makes the whole thing unnecessarily more stressful.
Is there somewhere you'd recommend that I can read more about the pros/cons of TOTP? These authenticator apps are the most common 2FA second factor that I encounter, so I'd like to have a good source for info to stay safe.
1. As a professional, installing free dependencies to save on working time.
There's no such thing as a free lunch; you can't have your cake and eat it too. That is, you can't download dependencies that solve your problems without paying, without ads, without propaganda (for example, to lure you into maintaining such projects for THE CAUSE), without vendor lock-in, or without malware.
It's really silly to want to pile up mountains of super secure technology like webauthn, when the solution is just to stop downloading random code from the internet.
I agree that #1 is correct, and I try to practice this; and always for anything security related (update your password, update your 2FA, etc).
Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags… it’s the perfect stereotype of an obvious scam domain. Maybe falling just short of npmjshelp.nigerianprince.net.
should practice it for ENTER your password, ENTER your 2FA ;)
> Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags
1. it probably did for quite a few recipients, but that's never going to be 100%
2. not helped by the current practices of the industry in general, many domains in use, hard sometimes to know if it's legit or not (some actors are worse in this regard than others)
Either way, someone somewhere won't pay enough attention because they're tired, or stressed out, or they are just going through 100 emails, etc.
Most mail providers have something like plus addressing. Properly used, that already eliminates a lot of phishing attempts: if I get a mail saying I need to reset something for foobar, but it is not addressed to me-foobar (or me+foobar), I already know it is fraudulent. That covers roughly 99% of phishing attempts for me.
The rest is handled by preferring plain text over HTML, and if some moron only sends HTML mails, by carefully dissecting them first. Allowing HTML in mail was one of the biggest mistakes we've ever made - zero benefits with a huge attack surface.
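The addressing check described above can be sketched as follows (toy code; real local-parts can contain hyphens, so the "-" variant is fragile):

```python
def tag_matches(recipient: str, expected_service: str) -> bool:
    """True if mail purporting to be from a service was delivered to the
    tagged address registered with it, e.g. me+npm@example.com."""
    local = recipient.split("@", 1)[0]
    for sep in ("+", "-"):
        if sep in local:
            return local.split(sep, 1)[1] == expected_service
    return False  # untagged address: treat the mail as suspect

assert tag_matches("me+npm@example.com", "npm")
assert not tag_matches("me+shop@example.com", "npm")
assert not tag_matches("me@example.com", "npm")
```

This works because the attacker would need to know the per-service tag, which never appears anywhere except in that one service's records.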
Still would have done nothing in this case, as they pulled the correct email address he uses for npm from another source (public API I think?).
That's exactly why I said all the other "helpful" recommendations and warning signs people are using are never foolproof, and thus mostly useless given the scale at which phishing campaigns operate.
Great if it helps you in the general case, terrible if it lulls you into a sense of confidence when it's actually a phishing email using the right email address.