apimade's comments | Hacker News

In the context here, I believe they mean someone can _use_ the information externally. Yes, we rely on external vendors for business-critical operations like email, or on our cell phone providers for customer communications. That does not mean we'd usually grant them access to view, distribute, or otherwise make use of the content of those interactions absent an external governance requirement (court order, legal discovery, compliance/regulatory requirement).

For example, in the US a court could compel any external vendor we outsource our data to for operational, processing, or storage purposes to hand over that data for any use the court deems necessary. That has long been the expectation in any commercial business relationship across the country.

Gmail broke this with ad data, but the data was meta-data - information about information, like ad groups and categories relevant to your org.

Now publicly published content is being ingested and used to train models; at this stage I would assume that if you share Docs publicly, they would be in scope.

Proton has a good article explaining what is and isn’t ingested, and what they have the right to use: https://proton.me/blog/google-docs-ai-scraping


Is this really a criticism? Because this has been the case forever with all security and SIEM tools. It's one of the reasons the SIEM is one of the most locked-down pieces of software in the business.

Realistically, secrets alone shouldn't allow an attacker access - they should also need access to infrastructure or certificates on machines. But unfortunately that's not the case for many SaaS vendors.


If my security software exfiltrates my secrets by design, I’m just going to give up on keeping anything secure now.


Ideally secrets never leave secure enclaves and humans at the organization can't even access them.

It's totally insane to send them to a remote service controlled by another organization.


Essentially, it’s straddling two extremes:

1) employees are trusted with secrets, so we have to audit that employees are treating those secrets securely (via tracking, monitoring, etc)

2) we don’t allow employees to have access to secrets whatsoever, therefore we don’t need any auditing or monitoring


> employees are trusted with secrets, so we have to audit that employees are treating those secrets securely

IMHO needing to be monitored constantly is not being "trusted" by any sense of the word.


I can trust you enough to let you borrow my car and not crash it, but still want to know where my car is with an Airtag.

Similarly employees can be trusted enough with access to prod, while the company wants to protect itself from someone getting phished or from running the wrong "curl | bash" command, so the company doesn't get pwned.


Exporting to a SIEM does not correlate to either of those extremes. It’s stupidity and makes auditing worse


SIEM = Security Information & Event Management

Factually, it is necessary for auditing and absolutely correlates with the extreme of needing to monitor the “usage” of “secrets”.

In a highly auditable/“secure” environment, you can’t give secrets to employees with no tracking of when the secrets are used.


That's far from factual and you are making things up. You don't need to send the actual keys to a SIEM service to monitor the usage of those secrets. You can use a cryptographic hash and send the hash instead. And they definitely don't need to dump env values and send them all.

Sending env vars of all your employees to one place doesn't improve anything. In fact, one can argue the company is now more vulnerable.

It feels like a decision made by a clueless school principal, instead of a security expert.
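
A rough sketch of what I mean, in Python (the key name and event shape are invented here, not any vendor's actual agent): ship a keyed hash of the secret so the SIEM can still correlate usage, while the plaintext never leaves the machine.

    import hmac
    import hashlib

    # Hypothetical per-organisation key held by the agent; never shipped with events.
    ORG_HASH_KEY = b"rotate-me-out-of-band"

    def secret_fingerprint(value: str) -> str:
        """Stable, non-reversible identifier for a secret."""
        return hmac.new(ORG_HASH_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    event = {
        "host": "laptop-042",
        "process": "deploy.sh",
        "env_secret_refs": [secret_fingerprint("ghp_exampleNotARealToken")],
    }
    # `event` is what goes to the SIEM; the token itself stays on the box.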


A secure environment doesn't involve software exfiltrating secrets to a 3rd party. It shouldn't even centralize secrets in plaintext. The thing to collect and monitor is behavior: so-and-so logged into a dashboard using credentials user+passhash and spun up a server which connected to X Y and Z over ports whatever... And those monitored barriers should be integral to an architecture, such that every behavior in need of auditing is provably recorded.

If you lean in the direction of keylogging all your employees, that's not only lazy but ineffective on account of the unnecessary noise collected, and it's counterproductive in that it creates a juicy central target that you can hardly trust anyone with. Good auditing is minimally useful to an adversary, IMO.
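
To make that shape concrete, a toy Python illustration (field names invented for the example): audit records capture who did what, referencing credentials by hash or fingerprint, never by value.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AuditEvent:
        actor: str                  # who
        action: str                 # "dashboard_login", "server_spinup", ...
        credential_ref: str         # passhash / key fingerprint, not the secret
        targets: list[str] = field(default_factory=list)   # hosts/ports touched
        at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    trail = [
        AuditEvent("jdoe", "dashboard_login", credential_ref="sha256:9f2c..."),
        AuditEvent("jdoe", "server_spinup", credential_ref="sha256:9f2c...",
                   targets=["x.internal:443", "y.internal:5432", "z.internal:22"]),
    ]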


> In a highly auditable/“secure” environment, you can’t give secrets to employees with no tracking of when the secrets are used.

This does not seem to require regularly exporting secrets from employees' machines though, which is the main complaint I am reading. You would log when the secret is used to access something, presumably remote to the user's machine.


I’m well aware of what a SIEM does. You do not need to log a plaintext secret to know what the principal is doing with it. In a highly auditable environment (your words) this is a disaster


In a highly secure environment, don't use long-lived secrets in the first place. You use 2FA and only give out short-lived tokens. The IdP (identity provider) refreshing the token for you provides the audit trail.

Repeat after me: Security is not a bolt-on tool.
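
A rough sketch of the pattern (stdlib Python; the key name and token format are invented, not any particular IdP's API): the token carries its own expiry, and every mint is itself an audit event.

    import base64, hashlib, hmac, json, time

    IDP_SIGNING_KEY = b"hypothetical-idp-key"   # held by the IdP, not by clients

    def mint_token(user: str, ttl_seconds: int = 900) -> str:
        claims = {"sub": user, "exp": int(time.time()) + ttl_seconds}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(IDP_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        print(f"AUDIT: token issued to {user}, expires {claims['exp']}")  # the audit trail
        return f"{body}.{sig}"

    def verify_token(token: str) -> dict:
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(IDP_SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise PermissionError("bad signature")
        claims = json.loads(base64.urlsafe_b64decode(body))
        if claims["exp"] < time.time():
            raise PermissionError("token expired")
        return claims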


More like a triple lock steel core reinforced door laying on its side in an open field?

Good start, might need a little more work around the edges.


> In a highly auditable/“secure” environment, you can’t give secrets to employees with no tracking of when the secrets are used.

Yeah. So you track them when they are used (which also gives you a nice timestamp). Not when they’re just sitting in the env.


You give employees the ability to use the secrets, and that usage is tracked and audited.

It works the same way for biometrics like face unlock on mobile phones


> Ideally secrets never leave secure enclaves and humans at the organization can't even access them.

Right, but doesn't that mean there is no risk from sending employee laptop ENV variables, since they shouldn't have any secrets on their laptops?


I mean it's right there in the name. They're not really secrets any longer if you're sharing them in plaintext with another company.


Keeping secrets and other sensitive data out of your SIEM is a very important part of SIEM design. Depending on what you're dealing with you might want to tokenize it, or redact it, but you absolutely don't want to just ingest them in plaintext.

If you're a PCI company, then ending up with a credit card number in your SIEM can be a massive disaster, because you're never allowed to store that in plaintext and your SIEM data is supposed to be immutable. In theory that puts you out of compliance for a minimum of one year with no way to fix it; in reality your QSAs will spend some time debating what to do about it and then require you to figure out some way to delete it, which might be incredibly onerous. But I have no idea what they'd do if your SIEM somehow became full of credit card numbers, that probably is unfixable…
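
For the card-number case specifically, the usual approach is to scrub events before they ever reach the indexer. A sketch in Python (the pattern is illustrative, not what any particular pipeline ships with): a digit-run match plus a Luhn check to cut down false positives.

    import re

    CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")   # 13-16 digits, optional separators

    def luhn_ok(digits: str) -> bool:
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:   # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def redact_pans(line: str) -> str:
        def repl(match: re.Match) -> str:
            digits = re.sub(r"\D", "", match.group())
            return "[PAN REDACTED]" if luhn_ok(digits) else match.group()
        return CANDIDATE.sub(repl, line)

    # redact_pans("auth attempt card=4111 1111 1111 1111 from 10.0.0.5")
    # -> "auth attempt card=[PAN REDACTED] from 10.0.0.5"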


> But I have no idea what they’d do if your SIEM somehow became full of credit card numbers, that probably is unfixable…

You'd get rid of it.


If that’s straightforward then congratulations, you’ve failed your assessment for not having immutable log retention.

They certainly wouldn't let you keep it there, but if your SIEM was absolutely full of cardholder data, I imagine they'd require you to extract ALL of it, redact the cardholder data, and then import it into a new instance, nuking the old one. But for a QSA to sign off on that they'd be expecting to see a lot of evidence that removing the cardholder data was the only thing you changed.


> Realistically, secrets alone shouldn’t allow an attacker access - they should need access to infrastructure or a certificates in machines as well.

This isn't realistic, it's idealistic. In the real world secrets are enough to grant access, and even if they weren't, exposing one half of the equation in clear text by design is still really bad for security.

Two factor auth with one factor known to be compromised is actually only one factor. The same applies here.


But why only forced on MacOS?

I think some configurability would be great. I would like to provide an allow list or the ability to redact. Or exclude specific host groups.

We all have different levels of acceptable risk


Conspiracy theory time. Because Apple is the only OS company that has reliably proven that it won't decrypt hard drives at government request.


It depends on the country it is in: it rejects the US government's requests, but it fully complies with any request from the Chinese government.


The Venn diagram of users who don't want the government to access their data and CrowdStrike customers is two circles in different galaxies.


I'd be interested to learn more about that.

My mental model was that Apple provides backdoor decryption keys to China in advance for devices sold in China/Chinese iCloud accounts, but that they cannot/will not bypass device encryption for China for devices sold outside of the country/foreign iCloud accounts.


It's probably being run on an enterprise-managed mac. The only person who can be locked out via encryption is the user.


This is a true conspiracy.


Seriously? CrowdStrike is obviously NSA, just like Kaspersky is obviously KGB and Wiz is obviously Mossad. Why else are countries so anxious about local businesses not using agents made by foreign actors?


The KGB is not even a thing anymore; the modern equivalent is the FSB, no? I'm skeptical. I don't think it's obvious that these are all basically fronts, as much as I'm willing to believe that IC tentacles reach wide and deep.


All SIEM instances certainly contain a lot of sensitive data in events, but I'm not sure if most agents forward all environment variables to a SIEM.


Agents don't just read env vars and send them to SIEM.

There's a triggering action that caused the env vars to be used by another ... ahem ... process ... that any EDR software on this beautiful planet would have tracked.


No, it logs every command macOS runs or that you type in a terminal, either directly or indirectly - from macOS internal periodic tasks to you running “ls”.


The certificate private key is also a secret.


> Because this has been the case forever with all security and SIEM tools.

Why?

There is no need to send your environment variables.


Otherwise malware can hide in environment variables


Ok, suppose you're right.

Why are they only doing it for macs then?


I don't think this is limited to just Macs based on my experience with the tool. It also sends command line arguments for processes which sometimes contain secrets. The client can see everything and run commands on the endpoints. What isn't sent automatically can be collected for review as needed.


It does redact secrets passed as command line arguments. This is what makes it so inconsistent. It does recognize a GitHub token as an argument and blanks it out before sending it. But then it doesn’t do that if the GitHub token appears in an env var.
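
Which is the frustrating part - the same pattern pass could trivially cover both. A sketch of the point in Python (patterns approximate, event shape invented):

    import os
    import re
    import sys

    TOKEN_PATTERNS = [
        re.compile(r"\bgh[pousr]_[A-Za-z0-9]{20,}\b"),     # classic GitHub tokens
        re.compile(r"\bgithub_pat_[A-Za-z0-9_]{20,}\b"),   # fine-grained tokens
    ]

    def scrub(text: str) -> str:
        for pat in TOKEN_PATTERNS:
            text = pat.sub("[REDACTED]", text)
        return text

    # Apply the same treatment to argv and the environment before either is shipped.
    event = {
        "argv": [scrub(arg) for arg in sys.argv],
        "env": {key: scrub(val) for key, val in os.environ.items()},
    }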


It may depend a bit on your organization but I bet most folks using an EDR solution can tell you that Macs are probably very low on the list when it comes to malware. You can guess which OS you will spend time on every day ...


So because macs are not the targets of malware ... we're locking them down tighter than any other system?


No, see, they're leveling the playing field by storing all secrets they find on macs in plaintext


Malware can hide in the frame buffer at modern resolutions. They could keep a full copy of it and each frame transition too.


They do not need to take the data off the computer to do that


What do you think grants the access to the infra or ability to get a certificate?


Arbitrary bad practices as status quo without criticism, far from absolving more of the same, demand scrutiny.

Arbitrarily high levels of market penetration by sloppy vendors in high-stakes activities, far from being an argument for functioning markets, demand regulation.

Arbitrarily high profile failures of the previous two, far from indicating a tolerable norm, demand criminal prosecution.

It was only recently that this seemingly ubiquitous vendor, with the kind of access to critical kernel space that any red team adversary would kill for, said “lgtm shipit” instead of running a test suite, with consequences and costs (depending on who you listen to) ranging from billions in lost treasure to loss of innocent life.

We know who fucked up, and we have an idea of how much corrupt-ass, market-failure crony capitalism it takes to permit such a thing.

The only thing we don’t know is how much worse it would have to be before anyone involved suffers any consequences.


Most sane SIEM engineers would implement masking for this. Not sure if CS still uses Splunk but they did at one point. No excuse really.


"Oh, but our system is so secure, you don't need other layers."


Sounds like a purposeful leak to test the waters, and get the bidding war started. I wouldn’t be surprised if Valve execs are simply looking to capitalise for investors.

$16B is far too cheap for a company like them though. They almost pull that in revenue in a single year without locking out third party marketplaces. Their margins are notoriously high.

I’d triple it.


I would say Valve pulls in 70 to 80 billion a year in sales and technology.


He said as much directly in the comments of that post.


Cybersecurity nerd here; I've talked to many platform and financial company CISOs, security teams, and recruiters over the past few years.

Fake interviewees are pretty rampant. We’re getting to the point where presenting yourself in-person to a government representative, agency or a private attestation company will be part of the onboarding process. At this point it looks like it’ll be iris scans.

In the US it's even an issue in person with H-1Bs, where people get interviewed and hired online, and then someone else shows up.

Also the fact that insider threats are almost never budgeted for, and so many companies blanket-approve access to systems like logging systems, customer support systems, source code, etc - means attackers don’t even need to get hired into a very important role to get the data they want.


> At this point it looks like it’ll be iris scans.

Oh god no, a piece of not-that-secret information which can't be revoked when eventually leaked.

"This is a courtesy alert that your iris scan has been found on the darkweb..."


Like your name or address?


You can change your name and address, you cannot change your irises.


That argument works for fingerprints because it's possible to replicate them (kind of), but how do you replicate someone's eyeballs, assuming a supervised setup?


If we assume "supervised setup", then doesn't that negate the fingerprint issue too because a supervisor can tug off fake-fingers and wash tips with alcohol etc?

Either way, I think this is one of those "if it was used properly, people won't like the limitations, so they'll use it improperly" situations. Kind of like with social security numbers.


No, because nobody trusts full-names or addresses the same dumb way that they wish they could trust iris-data or fingerprint-data.

"Welcome to Acme Bank. To prove you are the owner of this bank-account, please supply your full name and street address. *ding* Authentication successful! Please choose an amount to transfer."

At best, biometrics can only replace usernames. In other words, information that is quasi-public and not expected to be easily changeable... With the additional problem that sometimes it changes all on its own.


> then someone else shows up

Someone else shows up? What is the denominator?


We've seen that also. You interview someone online, help file the paperwork to get them into the country on a work visa, and then someone else, a technically much weaker guy, shows up on the first day. We had that twice in the last two years.


Wild. So the paperwork and the visa etc. were all actually filled out with the details of the weaker guy? Including the photo and suchlike? Crazy that gets past the company's immigration lawyers. Also, doesn't he just get fired straight away and lose his visa? Seems like very high effort and low chance of success; I must be missing something.


This is in the EU. You have three months to find a new job, which I found out after your comment prompted me to check [1].

This may be a reasonable gamble, as many companies probably do not expect such a swap and do not have detection measures in place. And if that doesn't work, you still have 3 months to find another job in an environment that, at the time, favored job seekers.

[1] https://expatrist.com/losing-a-job-in-germany-with-eu-blue-c...


GUI.


If the cognitive load of migrating away from Windows is too much, ManageEngine is free for your use case (for forcing updates/policies, monitoring, and managing access). I'd look at using Assigned Access too (previously known as Kiosk Mode) for locking down their environment to things like browsing the internet, and nothing else: https://learn.microsoft.com/en-us/windows/configuration/assi...

However, locked down Chromebooks and Android profiles are generally the best way to go. Not sure about the Apple ecosystem (even though that's what I'd choose - just haven't found readily available advice).


Add region or locations immediately. Single biggest factor outside of customer size or how much product they’re buying.

Also contract length. Pretty common to get deep discounts with multi year agreements.


Useful info. Might want to make the ordered quantity and contract length into ranges though, to fuzz anything that might identify the client.


Update: we've added geography to buyer attributes and clarified the contract length


Mm, yea - that's a buyer attribute we can add to clarify.

And definitely! We do have contract length, though that's locked away as a Pro feature


As someone who, in the last 6 months, had a loved one take their own life as they progressed through the symptoms of dementia, and who also had a loved one succumb to it in their 90s (spending the latter quarter of their life in care without knowing who anyone was, what they enjoyed, or what year it was), I feel the former was far more dignified and, overall, more humane for the family and loved ones, the person, and society - at least until we understand how to fix these illnesses.

It's sad that “but what if” arguments and slippery-slope fallacies immediately shut down even the most strict, bureaucracy-filled approach - one requiring the person to have held no significant assets for a period of time (to be certain they aren't being forced by family), or mandatory psychologist assessments.

Please note, I said dignified. But because they weren't able to notify or tell anyone of their plans (I assume so as not to burden or implicate them), and because they were unable to take the euthanasia path, we found them 3 days later. The weather was not cold.

This is the reality that families face when no choice is made possible by the state.


Sorry for your losses!

My exposure to these issues is rather more mixed. One of my grandmothers suffered from dementia. During the early stages, she was aware and suffered quite badly. But oddly enough, once the dementia was full blown, she lived several more years happier than she'd ever been, because the worries and restraints which had bothered her all her life were gone.

Another relative died in the earlier stages of dementia, and his last two years were fairly miserable. And now there is a relative who is also in the earlier stages, and who is determined to avail themselves of assisted suicide rather than live with dementia.

Speaking with their doctor, we learned how exactly the procedure works — in our country, it just requires a doctor's note, but the doctor will NOT issue that note if the patient already HAS full blown dementia (for obvious reasons). So you would have to request euthanasia relatively early in the process.

My relative is convinced that they are just fine, and that it's really hackers and conspiracies out to get them. Should I press the issue and insist that they are in the earlier stages of dementia and should get euthanized before it's too late for that option? I've chosen not to do that - if they make that decision, I will support it, but I will not attempt to influence them one way or the other beforehand.


I don't see how a design decision made by the maintainers of this project - documented as version changes are made, and completely transparent given the open-source nature of the project - warrants a high-priority support request from a software engineer external to the project. Sure, it's a pain for adopters who were dependent on the feature, but you can't criticise design decisions made by someone who offers you their software for free. You can, however, fork it and take your own path with the software.

Are you saying because a new version breaks the current build of a downstream proprietary application, that should constitute a high priority support request from the maintainers of the project?

That doesn't compute with me. If they were paying for forwards-compatibility and had that expectation in a contract, sure. But they should be able to make changes they see fit without having to make trade-offs to ensure future compatibility with an Enterprise organisation's product.

At that point they're basically making design decisions to suit the Enterprise, at which point you're not just free software engineering resources for the organisation - you're actively pushing your project to be vendor-compatible. I could see this reasoning if your project were largely funded by them (Chromium), but in this case, it's not.

