Hacker News | harikb's comments

The guy who bought friendster.com lurks here

Probably worth a fair bit more than $30k though.

Is the last photo on that page, the one describing the cabling, a screenshot of another photo displayed using flipdiscs? That is a whole lot of discs!!

I think it's just a simulation of what it could look like if it were flipdiscs.

On the credentials point, here is what I find.

Day 1: Carefully handles the creds, gives me a lecture (without asking) about why .env should be in .gitignore and why I should edit .env and not hand over the creds to it.

Day 2: I ask for a repeat; it has lost track of that skill or setting, frantically searches my entire disk, reads .env along with many other files, figures out that it is holding a token, manually creates curl commands to test the token, and then comes back with some result.

It is like it is a security expert on Day 1 and an absolutely mediocre intern on Day 2.


I found the same: it was super careful handling the environment variable until it hit an API error. I caught in its thinking "Let me check the token is actually set correctly", and it just echoed the token out.

(Thankfully, these were only low-stakes test creds anyway.)

I never pass creds via env or anything else it can access now.

My approach now is to get it to write me LINQPad scripts, which use a utility function to get creds out of a user-encrypted share, or prompt if they're not in the store.

This works well, but requires me to run the scripts and guide it.

Ultimately, fully autonomous isn't compatible with secrets. Otherwise, if it really wanted to inspect a secret, it could just redirect the request to an echo service.

The only real way is to deal with it the same way we deal with insider threat.

A proxy layer / secondary auth, which injects the real credentials. Then give Claude its own user within that auth system, so it owns those creds. Now responsibility can be delegated to it without exposing the original credentials.

That's a lot of work when you're just exploring an API or DB or similar.


I think it is just because they are having to load shed! Some days you may be getting much less compute - the main way "thinking" operates is just to iterate on the result a few more times.

> with secrets possibly baked into source

Please don't suggest this. The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM (with machine-level identity managed by the parent platform).

This is how Google Secret Manager or AWS Secrets Manager work.


> The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM

Or have whatever deployment tool that currently populates the env vars instead use the same information to populate files on the filesystem (like mounting creds).


Next.js renders configuration that’s shared by client and server into a JSON blob in the HTML page. These config variables often come from environment variables. It’s a very common mistake for people to not realize this, and accidentally put what should be a server-only secret into this config. I’ve seen API secrets in HTML source code because of this. The client app doesn’t even use it, but it’s part of the next config so it renders into the page.
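A hypothetical `next.config.js` showing that footgun (the variable names are made up for illustration):

```javascript
// next.config.js -- hypothetical example, not from any real project.
// Everything under publicRuntimeConfig is serialized into the
// __NEXT_DATA__ JSON blob in the HTML and shipped to every browser.
module.exports = {
  publicRuntimeConfig: {
    apiUrl: process.env.API_URL,                   // fine: meant to be public
    paymentSecret: process.env.PAYMENT_API_SECRET, // leaks into page source
  },
  serverRuntimeConfig: {
    // server-only values belong here instead; these stay out of the HTML
  },
};
```

The mistake is rarely deliberate: someone copies a config value from an env var into the shared section without realizing "shared" means "rendered into the page".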


IIRC, React (well, Create React App) had this issue, so they required env vars visible to the client to be prefixed with REACT_APP_. The hope being that SECRET is not prefixed and so is not exposed. Of course, it requires you to know why they are prefixed and not create REACT_APP_SECRET.


That's essentially what NEXT_PUBLIC_ is for... but serializing process.env is a new one for me.


They don’t serialize process.env, but devs will take config values from environment variables. Obviously you’re not supposed to do this but it’s a footgun.


I was referring to Vercel. Other cloud environments have much better mechanisms for securing secrets.


+1 on vaults. One step further: credentials that never land in the runtime environment at all. App authenticates to a gateway via workload identity, gateway proxies the call, process never sees the secret. Makes env enumeration useless even with valid admin access (I work on an open-source tool in this space, so I'm biased).

This is just another layer of indirection (which isn't bad; it adds to the difficulty of executing a breach). The fundamental problem with encrypted secrets is that at some point you need to access and decrypt them.


Lifetime is the underlying issue.

For example, it is possible to create a vault lease for exactly one CI build and tie the lifetime of secrets the CI build needs to the lifetime of this build. Practically, this would mean that e.g. a token, some oauth client-id/client-secret or a username/password credential to publish an artifact is only valid while the build runs plus a few seconds. Once the build is done, it's invalidated and deleted, so exfiltration is close to meaningless.

There are two things to note about this though:

This means the secret management has to have access to powerful secrets, which are capable of generating other secrets. So technically we are just moving the goalposts from one level to another. That is usually fine though - I have 5 vault clusters to secure, versus 5 different CI builds every 10 minutes or so, or a couple thousand application instances in prod. I can pay more attention to the vault clusters.

But this is also not easy to implement. It needs a vault cluster; dynamic PostgreSQL users take years to get right; we are discovering every month how terrible applications can be at handling short-lived certificates (and some even regress - Grafana seems to have regressed with PostgreSQL client certs in v11/v12); and we've found quite a few applications that never considered that certs with less than a year of lifetime even exist. Oh, and if your application is a single-instance monolith, restarting to reload new short-lived DB certs is also terrible.

Automated, aggressive secret management and revocation is, imo, a huge obstacle to many secret exfiltration attacks, but it is hard to do, and a lot of software resists it very heavily on many layers.


I'm not sure that's necessarily a "problem", though it is fundamental to secrets. We wouldn't say that it's a fundamental problem that doors on houses need a key--that's what the key is for--the problem is if the key isn't kept secure from unauthorized actors.

Like, sure, you can go HAM here and use network proxy services to do secret decryption, and only talk from the app to those proxies via short-lived tokens; that's arguably a qualitative shift from app-uses-secret-directly, and it has some real benefits (and costs, namely significant complexity/fragility).

Instead, my favored option is to scope secret use to network locations. If, for example, a given NPM token can only be used for API calls issued from the public IP endpoint of the user's infrastructure, that's a significant added layer of security. People don't agree on whether or not this counts as a "token ACL", but it's certainly ACL-like in its functionality--just controlled by location, rather than identity.

This approach can also be adopted gradually and with less added fragility than the proxy-all-the-things approach: token holders can initially allowlist broad or shared network location ranges, and narrow allowed access sources over time as their networks are improved.

Of course, that's a fantasy. API providers would have to support network-scoped API access credentials, and almost none of them do.
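The check a provider would need is not exotic; here is a toy IPv4 sketch of a location-scoped token (all names hypothetical, and real deployments would also need IPv6 and proxy-header handling):

```javascript
// Accept a token only when the caller's source IP falls inside an
// allowlisted CIDR range -- an ACL keyed by network location.
function ipToInt(ip) {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  const prefix = Number(bits);
  const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

function tokenAllowed(sourceIp, allowlist) {
  return allowlist.some((cidr) => inCidr(sourceIp, cidr));
}
```

The gradual-adoption story is just widening or narrowing the allowlist entries over time.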


Speaking of fantasies... another approach would be holder binding: DPoP (RFC 9449) has been stable for a couple of years, and AWS SigV4 does it too. The key holder proves control at call time, so a captured token without the key is useless.


Yep. Then you run into the issue of where to store the secret encryption key.

Security researchers always need to give an answer whenever there's a security incident and the answer can never be "too much centralization risk" even when that is the only reasonable answer. You can't remove centralization risk.

IMO, the future is this: every major centralized platform will be insecure in perpetuity, and nothing can be done about it.


HSMs & similar can at least time-limit access to secrets to the period where an attacker can make requests to the HSM.


I think the problem is the way we are using these "secrets" services traditionally. The requesting process/machine should NEVER see the OAuth client secret. The short-lived session token should be the only piece of data the server/client are ever privy to.

The service that encrypts the data should be the ONLY service that holds the private key to decrypt, and therefore the only service that can process the decrypted data.


The service wouldn't have access to the refresh token? How does authentication with the client-secret-holding intermediary work?

It's easy to see how this would work with sufficiently sophisticated clients in some use-cases, say via a vault plugin, but posing this as a universal necessity feels like a big departure from typical oauth flows, and the added complexity could be harmful depending on what home-grown solutions are used to implement it.


"The parent platform" yada yada, my parent platform is bare metal, how about that?


The providers themselves can't keep this straight even within their own ecosystem. Plus everyone is running at a million miles an hour.

For example, `Claude Code` used to set 2 specific beta headers with some version numbers for their Max subscription to be supported.

OAuth tokens for the Max plan are different from how their API keys look. They look kind of similar, but have a specific prefix that these tools pre-validate.

It is barely working at this point, even within a single provider.


A comment from the PR

> Not a serious problem, but the weekdays are wrong. For example, 18-Apr-2127 is a Friday, not Sunday.

There are now many magical dates to remember: 2126 (I think the PR was updated after that comment) and 2177. There is also a 2028 somewhere.


The only people panicking are probably those state level actors who were using these for their own benefit.


Somewhat unrelated to the language itself:

> The compiler bootstraps through 3+ generations of self-compilation.

I guess it applies to any language compiler, but if you are self-hosting, you will naturally release binary packages. Please make sure you have enough support behind the project to set up a secure build pipeline. As users, we will never be able to see something even one nesting level up.


I feel like there's too much of a fetish for self-hosting. There's this pernicious idea that a language isn't a 'real' language until it's self-hosted, but a self-hosted compiler imposes real costs in terms of portability, build integrity, etc.

If I ever write a compiler - God forbid, because language design is exactly the kind of elegance bike-shedding I'll never crawl my way out of - it's going to be a straight-up C89 transpiler, with conditional asm inlines for optional modern features like SIMD. It would compile on anything and run on anything, for free, forever. Why would I ever give that up for some self-hosting social cachet?


If you wrote the C89-outputting transpiler in your own language, it would still be just as portable.


I'd be dependent on pre-existing binaries that are closely wedded to a particular platform (OS, libc, etc.), and over time it would become more and more difficult to attest to build integrity / ensure reproducible builds. (Is the ARM build meant to run an x64 emulator as part of some lengthy historic bootstrapping process?)


Thank you for PartitionMagic!! I remember using it to undo whatever disk partitioning mistake I made when originally setting up a machine :)


I struggled with disappearing icons (like our company VPN client - which wasn't Tailscale, by the way), thinking the app was somehow "stuck". I would go kill the app, restart the machine, etc. - during restart it would get fixed "automatically" by being an app earlier in the order!

Took me months to figure out it was running after all and was just hidden by the notch.

How hard is it for Apple to move the least-used icons to a fold (but still accessible)?


I would love to get a Windows-like overlay which collects all those damn menu icons. The least Apple should do is give developers proper APIs to build that, but instead Tahoe broke so many menu bar managers that it's not funny anymore. Ice, Sanebar, Bartender... none of them work reliably.


You can hold Command and drag the icons under the notch to make the invisible ones eventually show.

