FridgeSeal's comments

I think you’re missing the sarcasm in their comment.

They’re saying that the emoji usage is telling them that very little effort was put into the PR and that they’ll treat it accordingly.


Haha! Thanks!!!

My apologies, sincerely!

(If only the message I was responding to had had emojis and checkmarks for me to efficiently process it!!!!)


> logs, media, ML artifacts, raw dumps, etc., none of which fit into a table format.

You would be appalled at the kinds of things I have seen teams stuff into parquet and iceberg tables.


Ha. The fact that teams reach for iceberg to organize things that aren't really tables is itself a symptom of needing better management tools for other types of data.

Sure, but that’s not an S3 concern, because the vast majority of people use S3 as it is, without needing additional management machinery.

The solution is just to spin up the machinery you need for your solution, rather than making S3 cover all possible bases.


> that will teach them something. If they care about getting better,

This presupposes that the business is _willing_ to let that happen, which is increasingly unlikely. The current, widespread attitude amongst stakeholders is “who cares, get the model to fix it and move on”.

At least when we wrote code by hand, needing to fix things by hand was a forcing function: one that, from the business's perspective, no longer exists.


This is what I have been thinking. Businesses will always try to do more with less, because their only true goal is figuring out how to make more money. They will sacrifice giving those juniors time to learn from their mistakes for the sake of making more widgets (code).

From the wider generational view, they will rob today's juniors of the chance to learn, and with it the full talent pipeline that learning would have sustained, so they can profit today; the future (and the developers who will arrive there) be damned. The economic game is flawed because it only ever optimizes for a single output: money.

One solution? I think software people might consider forming unions. I know that's antithetical to the lone-coder ethos, but if what this comment reflects is true, the industry needs a check and balance to prevent it from destroying its foundation from the inside.

Why would a business train anyone when they can lean on the government to provide unbankruptable loans for students to go to university and learn it themselves?

If it’s broken and the dev can’t debug it, the business won’t have much of a choice.

There is a lot of space between broken and high quality, and nothing in that space will necessitate a business letting people "learn" on the job.

That’s also true without AI. Engineers want more time to polish and businesses want to ship the 80/20 solution that’s good enough to sell. There's always going to be a tension there regardless of tools.

Don't you see the problem? Now engineers literally do not have any leverage. Did the model make it work? Yes? Then ship it, what are we waiting around for?

That sounds pretty much the same as it’s always been? It used to be: “Does the happy path work? Then ship it! There’s no time to make it robust or clean up tech debt.”

Now there actually is time to make things robust if you learn how to do it.


> Now there actually is time to make things robust if you learn how to do it.

What makes you think you are going to be given time to polish it? You would just be pushed onto another project: more responsibilities with none of the growth.


It takes very little time to polish now.

Again, provided you either had the skills to begin with.

Or you’re getting the model to do the polishing, thereby developing no skills of your own, and we’re back to the start.


Getting the model to do it is the skill.

That's an adorable idea, but requires willfully ignoring the existence of the Jevons Paradox.

You’re assuming that building something robustly is significantly more time consuming than the “quick and dirty” version. But that’s not really true anymore. You might need to spend another hour or two thinking through the task up front, but the implementation takes roughly the same amount of time either way.

One cannot build something robust just by thinking about it _a priori_, and while this was somewhat at the periphery of the author's argument, it is important.

You can’t get every detail right up front, but you can build a robust foundation from the beginning.

The argument seems to be that AI is causing managers to demand faster results, and so everything has to be a one-shotted mess of slop that just barely works. My point is that it doesn’t take much longer to build something solid instead. Implementation time and quality/robustness are not tightly coupled in the way they used to be.


No but why doesn’t this object-storage-primitive accommodate all my specific requirements already?

They should also accommodate my need for all the POSIX filesystem APIs, including cheap moves and renames!!!!!

/s


POSIX isn't the ask. Datasets are. The need to keep track of what data you have stored is universal, not my specific requirement.

I make the (glib) comment because it’s a similar argument to one that was popular a few years ago.

S3 is an object store; treat it more like a KV store. As other comments have pointed out, the solution here is pick-your-favourite metadata store, be it Postgres or what Iceberg does, with the data itself on S3.
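A minimal sketch of that split, assuming TypeScript with the AWS SDK v3; the bucket name, key layout, and the in-memory map standing in for Postgres/Iceberg are all hypothetical:

    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    // Hypothetical catalog entry: in practice this is a Postgres row or an
    // Iceberg manifest entry, not an in-memory map.
    interface DatasetEntry {
      key: string;       // where the bytes actually live in S3
      sizeBytes: number;
      createdAt: string;
    }

    const catalog = new Map<string, DatasetEntry[]>();
    const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption

    async function writeDatasetFile(dataset: string, name: string, body: Uint8Array) {
      const key = `datasets/${dataset}/${name}`; // hypothetical key layout

      // S3 just stores bytes under an opaque key...
      await s3.send(new PutObjectCommand({ Bucket: "my-data-bucket", Key: key, Body: body }));

      // ...while "what data do I have?" lives in whichever metadata store you picked.
      const entries = catalog.get(dataset) ?? [];
      entries.push({ key, sizeBytes: body.byteLength, createdAt: new Date().toISOString() });
      catalog.set(dataset, entries);
    }

Answering "what datasets exist?" then becomes a catalog query rather than repeated S3 LIST calls, which is exactly the management machinery being discussed.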


So the follow-up question is: why is a random website allowed to try and load arbitrary files?

This is how I interpreted the original question, and indeed it makes no sense: JavaScript from a website should not be allowed to interact with extensions like this.

It's actually the extension injecting itself into the webpage, often to interact with it. (I imagine much of this is just looking for global ExtensionName objects.)

Actually, the article is clear about what is happening technically, and it’s both. Chrome does, in fact, allow the page to make requests for resources stored in the extension bundle, and this is one of the two fingerprinting methods that the article describes.
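As a rough sketch of that method (the extension ID and resource path here are made up; a real probe needs a resource the extension actually lists under web_accessible_resources in its manifest):

    // Hypothetical probe list: each entry pairs an extension name with a
    // resource that extension is known to expose to pages.
    const probes = [
      { name: "SomeExtension", url: "chrome-extension://abcdefghijklmnopabcdefghijklmnop/icon.png" },
    ];

    async function detectExtensions(): Promise<string[]> {
      const found: string[] = [];
      for (const probe of probes) {
        try {
          // If the extension is installed and the resource is web-accessible,
          // the request succeeds; otherwise it fails or errors out.
          const res = await fetch(probe.url);
          if (res.ok) found.push(probe.name);
        } catch {
          // Not installed (or not exposed): nothing to record for this probe.
        }
      }
      return found;
    }

A page only needs a handful of those yes/no answers to add useful bits to a fingerprint.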

>JavaScript from a website should not be allowed

Agreed 100%.


I agree, and this is why I built 404. If you poke around the page a bit, you'll see a tool that prevents browser fingerprinting.

404 catches JS calls in JS proxies and returns mocked-up values (assigned by a profile). It also has protections against TLS fingerprinting, canvas fingerprinting, device enumeration, TCP/IP fingerprinting, HTTP header fingerprinting, and more.
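The JS-proxy part, in the abstract (the property names and spoofed values below are hypothetical, not 404's actual profile), looks something like this:

    // Canned answers a fingerprinting script sees instead of the real values.
    const spoofed: Record<string, unknown> = {
      hardwareConcurrency: 4,
      deviceMemory: 8,
      platform: "Win32",
    };

    const fakeNavigator = new Proxy(navigator, {
      get(target, prop, receiver) {
        // Hand back the mocked value for spoofed properties...
        if (typeof prop === "string" && prop in spoofed) return spoofed[prop];
        // ...and otherwise forward to the real navigator, keeping methods bound.
        const value = Reflect.get(target, prop, receiver);
        return typeof value === "function" ? value.bind(target) : value;
      },
    });

The hard part in practice is swapping that proxy in before any page script runs, and keeping the spoofed values mutually consistent so the profile itself doesn't stand out.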

The predatory practices that browser fingerprinting has enabled, disguised behind "fraud protection", are atrocious. Even with a VPN, even in incognito mode, a website can track me and see what I've been doing EVEN IF IT'S NOT ON THEIR SITE.

Then a data broker buys all this data and uses an AI model to put it all into a pretty little package and sell it to Google, or the gov't, or something. It's scary.


Because extensions can and often do contain stuff like images or JS bundles that they inject into a target page's DOM. Not allowing a tab's context to load files from the chrome-extension:// namespace would break a lot of things.

True, but you'd expect the same CORS rules to apply for extensions: only pages originating from an extension should, by default, be able to load resources from said extension.

Chrome exposes these files via a URL that you can fetch in JavaScript like you would any other file on a normal website. These local extension files usually contain code, styles or images that your browser needs to run the extension.

Why is it not a CORS violation?

The browser needing access and a random website having access are quite different. Seems like a big ol' pile of vulns waiting to happen.


CORS is a server-controlled setting that tells the browser which origins are allowed to load its data. If you set a server to send Access-Control-Allow-Origin: *, then your browser will happily load those resources for you regardless of where you currently are. Chrome extensions need to be loadable from everywhere to be able to inject code or images into pages, so restricting them with CORS would defeat their main purpose. The extensions themselves might even need to bypass an existing CORS setup for the website you are currently on to fetch additional data.
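For concreteness, this is what that wide-open opt-in looks like from the server side (a tiny sketch with Node's built-in http module; the port and body are arbitrary):

    import { createServer } from "node:http";

    // Any page on any origin may read this response, because the server
    // explicitly opts in with Access-Control-Allow-Origin: *.
    createServer((req, res) => {
      res.setHeader("Access-Control-Allow-Origin", "*");
      res.setHeader("Content-Type", "text/plain");
      res.end("readable from any origin");
    }).listen(8080);

Web-accessible extension resources behave similarly, which is why a tab on an unrelated site can fetch them at all.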

From the other end, yes extensions access all page data, but pages shouldn't access extension data at all; it feels like that should be the CORS violation.

You have it backwards. For an extension to work on a page, its data/code needs to be accessible from said page. If the extension server in Chrome enforced CORS to prevent access from tabs on other websites, extensions wouldn't work anywhere.

LinkedIn is a cesspool, but it’s almost worthless to me without the recruiters.

They’re basically the only reason I’m there.


Also, lacking a LinkedIn account makes you look more suspicious and less likely to get hired, so there is additional value in having an account. For appearances.

Yeah, I recently heard about people working multiple jobs at once. I wasn't surprised: with work from home being a thing and many jobs at big companies not being overly strenuous, you can get away with it.

A previous coworker hadn't been especially good at his job and left after two months, and a little later I went looking for his LinkedIn to see where he'd ended up. Couldn't find him, but didn't give it much thought. A friend told me that he was working at a company up the street but was also working another job at the same time, and the penny dropped: you can't have LinkedIn and be working two jobs at once and reasonably expect to get away with it or get hired again.


That really depends on the field. Only one position asked about my LinkedIn. And that was because they had you apply via the site.

I didn't apply, because fuck that inside out.


Don’t sell yourself short!

You could achieve things yourself if you tried!


I worked at a place where they refused to run it _anywhere_ because a couple of people were insistent that it was “insecure”.


... and they were right.

v6 adoption is often all-or-nothing, because if you run both stacks, you have to ensure they are consistent. While you can reasonably do that on your home LAN, doing it across an entire infrastructure is the worst.

Now you have to make sure all your subnets, routing, VLANs, firewall rules, etc work exactly the same in two protocols that have very little in common.

It is the equivalent of shipping two programs in different languages and maintaining exact feature parity between both at all times.


I genuinely don’t understand this. The concepts are nearly identical between the two.


Hmm, no, to me they are orthogonal.

v4 was built around the idea of multiple free standing networks linked by gateways. v6 was built around the idea of a universal network.

I don't care what your LAN address space looks like when I'm in my LAN, because we are not in the same v4 network. I am sovereign in my network.

With v6, everyone is effectively in the same network. I have to ask my ISP for a prefix that he will rent me for money even for my LAN. If I want some freedom from said ISP prefix, I am mercifully granted the honor of managing ULA/NAT66 (granted I paid for a fancy router).

Also if I want any kind of privacy, I will have to manage privacy extensions and the great invention of having to use automatically generated, dynamically routed, essentially multiple random IPs per interface. How lucky am I to use such a great new technology.

Seriously v6 was created by nerds in a lab with no practical experience of what people wanted.


> v4 was built around the idea of multiple free standing networks linked by gateways

It was absolutely not. This is why early companies like Apple and Ford got massive IP allocations - each computer was expected to have a unique IP address.

NAT didn't exist until 14 years after IPv4 was created, in response to the shortage of IPv4 addresses, and the RFC describes it as a "short-term solution", stating very clearly that this is not how the internet is designed to work and that it should only be used as a stopgap until we get longer addresses.


> v4 was built around the idea of multiple free standing networks linked by gateways.

I don't think this is what v4 was built around, but rather what v4 turned into.

CIDR wasn't introduced until 1993. NAT in 1994. Both to handle depleting IP addresses.


v4 and v6 were built around the exact same use cases.

> With v6, everyone is effectively in the same network.

Just like IPv4.

> I have to ask my ISP for a prefix that he will rent me for money even for my LAN.

Just like IPv4, if you need a static address.

> If I want some freedom from said ISP prefix, I am mercifully granted the honor of managing ULA/NAT66 (granted I paid for a fancy router).

Compared with IPv4, where if you want some freedom from said ISP subnet, you are mercifully granted the honor of managing RFC-1918 addresses/NAT (granted you paid for a router that doesn't screw it up).

> Also if I want any kind of privacy, I will have to manage privacy extensions

...which are enabled by default nearly universally

> and the great invention of having to use automatically generated, dynamically routed, essentially multiple random IPs per interface.

Make up your mind. Are rotating, privacy-preserving addresses good or bad? The way it works in real life, not in the strawman version, is that you (automatically!) use the random addresses for outgoing connections and the fixed addresses for incoming.


If you want static addresses on your LAN, you can use link-local addresses for that.


This is exactly why I decided not to enable IPv6 on my colo. When money is involved, the benefits of IPv6 simply do not outweigh the risk, in my estimation. If my side gig eventually earns enough to pay a contractor to handle networking, then sure, that'll be one of the first tasks. But when it's just me managing the entire stack, my number one priority is security, and for now that means keeping things as simple as possible.


If you say that too loud, the “but my brand's unique UI supersedes your functional requirements” people will emerge, screeching, from the woodwork!

I can’t prove it, but I just know they’re the ones who live their lives one NPS score at a time, and who must think that we operate their software thankful for every custom animation they force us to sit through on their otherwise broken and unimportant software.


It sounds silly, but apart from liking the sound, this is why I really like wheels with loud hubs.

I have a pair of Hunt wheels and they work fantastically. Bonus points because they are “always on”: pedestrians are aware of them, but are never surprised.


I hate loud hubs. So disturbing. It also comes across as passive-aggressive.

