ufmace's comments | Hacker News


What I'm wondering: their future cash flow may be massive compared to any conceivable rational outlay, but the market for servers and datacenters seems pretty saturated right now. Maybe, for all their available capital, they simply can't get sufficient compute and storage on a reasonable schedule.

I don't get the "individual" vs "grouped" element thing at all. I guess you could interpret everything as a list, where null is actually the empty list, but that seems like it would be a ton of boilerplate and architecture astronomy, not a universally better way of programming. I don't see any decent explanation of what "grouped-element" programming is or why it's supposedly universally better and more advanced.

If that were really true, you should be able to point to a body of code developed with this supposedly better mindset and demonstrate that it's actually better by some objective measure: speed, memory efficiency, features, security vulnerabilities, etc. I don't see anything like that, though. What are the odds that it really is a better way of programming versus a theory some guy made up? If you're familiar with the "no silver bullet" thing: a ton of things have been promoted by various people in the industry as universal solutions to general problems of software quality, and vanishingly few of them have ever made a real difference.

It seems to me that memory management is one of the few areas where advances promoted over the decades have genuinely made programming better and more reliable. Witness the dominance of garbage-collected languages for virtually everything that can be done with them, and the increasingly clear advantages of moving what can't use GC to languages with more solid non-GC memory management, like Rust. I think that's the best thing we can lean on for software quality, not this grouped-element stuff.


Here's an example. Imagine you're writing a parser that builds an AST in C++.

What 99% of people do is the "individual" style, where every node is allocated with new and the memory comes from the generic OS allocator.

Then when you free the AST, you have to traverse the tree and call delete on every node.

There can be thousands of nodes.

The crucial observation is: the lifetime of all nodes is the same. It's the lifetime of the AST itself.

The "grouped" thing is: you have an allocator dedicated to nodes of the AST tree. All nodes are allocated from this allocator.

This has numerous speed and simplicity benefits.

You need one free() call to release the whole allocator vs. thousands of free() calls, one per node.

You can optimize your allocator for this use case in ways a general OS allocator can't. Typically it's a bump-pointer allocator (i.e. allocation is mostly addr += sizeof(obj)), which is much faster than what even the fastest general-purpose allocators can do.

You don't need the per-allocation metadata a general allocator keeps so it can free each allocation individually, so you use less memory.

There's no fragmentation, because your allocator hands out memory from one contiguous region.

Cache-line utilization is most likely better because memory isn't spread around. The nodes are used together, so it helps that they're close together in memory.

Those are very significant benefits and yet, as the article notes, very few people are aware of this.
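
Here's a minimal sketch of what that allocator can look like in C++ (toy code I'm making up to illustrate; a real arena would grow in chunks and be more careful about alignment and overflow):

    // Toy bump-pointer arena: every AST node comes from one block,
    // and one free() releases the whole tree at once. Nodes must be
    // trivially destructible, since the arena never runs destructors.
    #include <cstddef>
    #include <cstdlib>
    #include <new>

    struct Arena {
        char*  base;
        size_t cap;
        size_t used;

        explicit Arena(size_t n)
            : base((char*)std::malloc(n)), cap(n), used(0) {}
        ~Arena() { std::free(base); }   // one free() instead of one per node

        void* alloc(size_t size, size_t align) {
            size_t at = (used + align - 1) & ~(align - 1);  // bump + align
            if (at + size > cap) return nullptr;            // toy: no growth
            used = at + size;
            return base + at;
        }

        template <typename T>
        T* make(const T& init) {        // placement-new into the arena
            void* mem = alloc(sizeof(T), alignof(T));
            return mem ? new (mem) T(init) : nullptr;
        }
    };

    struct Node { int kind; Node* lhs; Node* rhs; };

    int main() {
        Arena arena(1 << 20);           // 1 MiB holds the whole AST
        Node* a    = arena.make(Node{0, nullptr, nullptr});
        Node* b    = arena.make(Node{0, nullptr, nullptr});
        Node* plus = arena.make(Node{1, a, b});
        (void)plus;
    }   // arena destructor frees every node at once; no tree traversal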

Plus, until very recently (before Rust became popular) the only serious low-level languages were C and C++, and they don't help you program in this style.

Odin, Zig, and Jai do. They make the allocator an explicit, exposed concept and provide language and standard-library support for using different allocators.


In this case the "grouped element mindset" means not just using allocators but also avoiding RAII by flattening the traditional pointer-based OOP AST into arrays accessed by handles.
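
Roughly like this (toy sketch with made-up names): instead of Node* everywhere, the nodes sit in one array and refer to each other by index:

    // Toy handle-based AST: nodes live in one array; "pointers" are indices.
    #include <cstdint>
    #include <vector>

    using NodeHandle = std::uint32_t;        // index into Ast::nodes
    constexpr NodeHandle NIL = 0xFFFFFFFF;   // "null" handle

    struct Node {
        int        kind;
        NodeHandle lhs;
        NodeHandle rhs;
    };

    struct Ast {
        std::vector<Node> nodes;             // one contiguous buffer

        NodeHandle add(Node n) {
            nodes.push_back(n);
            return NodeHandle(nodes.size() - 1);
        }
        Node& get(NodeHandle h) { return nodes[h]; }
    };

    int main() {
        Ast ast;
        NodeHandle a    = ast.add({0, NIL, NIL});
        NodeHandle b    = ast.add({0, NIL, NIL});
        NodeHandle plus = ast.add({1, a, b});
        (void)ast.get(plus);
    }   // no destructors to chase through the tree; the vector frees it all

Handles stay valid when the vector reallocates as it grows, and a 32-bit handle is half the size of a 64-bit pointer, which also helps the cache story.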


Okay, it makes sense that that's what it is. Still, it seems to me that this is an optimization technique applicable to certain specific scenarios, rather than a mindset through which to view all of software development.

I have actually heard of something like that before, as arena allocation, though mostly as a technique in GCed languages to avoid or reduce the impact of GC cycles, again for certain specific scenarios.


I disagree with AI being part of the OS. IMO, a desktop OS should have absolutely nothing to do with AI; it's only a platform for managing other applications and resources. Remote AI stuff should live on websites, available only if I choose to go there and interact, or in apps specifically designed around AI, like Claude Code or Antigravity.

All the nonsense in Windows 11 has me thinking about trying Linux desktops again for the first time in decades.


You've got nothing to lose by trying but time. Time spent learning something new is certainly better than time spent trying to regain control of your OS and workflow after another mandatory "improvement".


Oh, I've run Linux on the desktop before, all right, for years. I eventually decided, at the time, that Windows was drastically better at "just working" for desktop applications and that it wasn't worth the bother.

Now, though, Microsoft's antics with Windows 11 are starting to make me think that might not be the case anymore.


Rust might be worth a look. It gets much closer to the line count and convenience of dynamic languages like Python than Go does, plus it has a somewhat better type system. It also gets fully modern tooling and dependency management. And native code, of course.


I disagree with the overall point of the article.

I guess maybe they're worse for professional phone reviewers, who switch phones all the time, but I'm not one. In my experience, about two-thirds of the time I've gotten a new phone and wanted to switch to it, the SIM card size had changed, so I needed to get a new one anyway, which could only be done by mail order and took a few more days. And about half of the time the same SIM card did physically fit, something else went wrong: wrong APN settings, the carrier not wanting to activate it, RCS failing to work, all of which are virtually impossible to troubleshoot. IMO, the dream of universal SIM card portability has been dead for at least a decade, if not longer, and it started dying long before eSIMs came out.

The eSIM on my current phone Just Worked as far as activation goes. I haven't tried switching to a new phone with it yet, so I guess I'll see how well that goes when it happens.

Clearly there are cases where each is better. eSIMs are nice for switching carriers immediately, getting set up smoothly in a new country you're visiting, and recovering the number from a physically lost phone. Physical SIMs are nice if you want to try out a different phone model, assuming both support the same SIM size and you can find the little tool, and also if your phone is seriously damaged but not physically lost. So not everyone necessarily loves them, but I don't think this is a case of big bad tech companies enshittifying everything.


Most of the issues you described, such as carrier registration issues, are just as likely with eSIM as they were with physical SIM cards. The difference is that you can't physically swap out an eSIM, and swapping cards was a pretty reliable way of getting around misconfiguration. This isn't really an indictment of eSIM as a technology, but the reality is that telcos are incredibly slow and inefficient, and removing a workaround for their incompetence can make the problem worse.


I wonder if it's the opposite actually. When there is a human running a convenience store type of thing, people don't generally spend time trying to convince them of obviously absurd things, particularly if they work for the same company as you. Nobody wants to risk the employee refusing to sell anything to you because you're a time-wasting jerk or maybe their manager telling them to stop wasting time messing with their co-worker.


Having ridden a few of them in the area, I think they're worth it.

Waymos are 100% reliable. If you book it and it says it's coming at X time, it will definitely actually show up at X time. No more drivers cancelling at the last minute because they didn't actually want to drive to destination Y, but Uber etc. gave it to them anyway, and they get dinged if they just cancel instead of claiming they couldn't find the rider. No more drivers getting lost or stopping for food or gas and showing up late.

It also gets to the destination exactly when it says it will. No weird routes because of the driver's whim or driving too fast or too slow. And no chance of bad music, loud conversation in some foreign language, annoying commentary, etc.

And I want them to be profitable to run too, so they have plenty of incentive to expand the program.


> Waymos are 100% reliable.

There is nothing in this world that is 100% reliable. The vehicles are new. Wait until they start clocking more miles.

> it will definitely actually show up at X time.

In what city and at what time of day? A Waymo is just one vehicle in a sea of them. If traffic starts choking the city, I don't see how they're not just as vulnerable as every other vehicle.


> And no chance of ... loud conversation in some foreign language,

Ah!


I've come to think that adding halfway typing to languages designed from the start to be dynamic is mostly not worth the bother. It may help a little sometimes, but there are always going to be holes. If you really want strong typing, IMO it's better to bite the bullet and move to a language designed for it. Let Ruby be Ruby, ditto Python, JavaScript, etc. Pick up some JVM, .NET, Rust, Go, etc. if you really want strong types.


It sounds like the kind of job you really want is going to be a bit of a unicorn. That means those kinds of jobs don't get advertised on job boards or have professional recruiters running around looking for candidates who fit. That in turn means finding such a job is going to take a lot of networking and shoe leather. You'll probably have to go to a bunch of conferences, talk to people, make contacts, and hope you discover a place where your combined skills are worth more to somebody's project than expertise in only one of those fields would be.


Well put. The standard practice is to hire a scientist and a developer, both with deep expertise, and have them work together. For a successful collaboration, it's obviously desirable for them to have some cross-disciplinary skills or experience. Ultimately, you're still primarily doing either development or research.

