There was a recent discussion about how having AI write the validation for your code is a good approach. If you have formal proofs for your code, your QA needs go down.
Yep. Claude today? No way can it achieve this. It can barely write a working C compiler.
I'm looking at the trend line. A few years ago it couldn't make a simple webpage. Now it can make a bad C compiler for thousands of dollars of tokens. What does it look like in another few years? Or another 2 decades?
> It's much riskier to hire the wrong employee now. If a bad hire is made...
There is a thing called employment at will. Particularly in California. Makes firing fast.
In many cases there is nothing particularly risky about firing employees. Two months to fire someone is a joke compared with the four or more months those same companies spend looking for talent.
The main reason is not enough ideas for what to do - the economy covers the basics, and not enough new companies appear to offer good new products.
I have two fundamental problems with Postgres - an excellent piece of technology, no question about that.
First, to use Postgres for all those cases you have to learn the various aspects of Postgres. Postgres isn't a unified tool that can do everything - it's a set of tools under the same umbrella. As a result, you don't save much compared with learning all those different systems separately and using Postgres only as an RDBMS. And if some feature isn't implemented better in Postgres than in a 3rd-party system, it can be easier to replace that 3rd-party system - just one part of the stack - than to switch from Postgres-only to Postgres-and-then-some. In other words, when many technologies are needed, Postgres has little benefit compared with a collection of separate tools. The article notwithstanding.
Second, Postgres was written for HDDs - hard disk drives, with their access patterns and timings. Today we usually work with SSDs, and we'd benefit from SSD-native RDBMSes. They exist, and Postgres may lose to them significantly - both in simplicity and in performance.
Well, take a look at the dates when Postgres was created and when SSDs became available. Better yet, find articles about its internal algorithms - B-trees, timings of operations like seeks, etc. Postgres was initially written with disk operation timings in mind, and the point is that those timings have changed - and I haven't heard of the Postgres architecture changing with them.
That blog post is very light on details and can be condensed to a single line/paragraph: LSM trees are more efficient for SSDs, and modern databases use them.
I don't know enough to comment yet but will go read about it.
This really isn't true. You should use different parameters on an SSD (specifically, you can reduce random_page_cost to a little over 1), but there isn't a really compelling reason to use a completely different DBMS for SSDs.
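For concreteness, here's a minimal sketch of that tuning - the connection string is a placeholder and 1.1 is just the commonly suggested ballpark for SSDs, not a magic number:

```python
# Sketch: lowering random_page_cost on an SSD-backed Postgres instance.
# The DSN is a placeholder; 1.1 is a common SSD starting point, tune to taste.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # hypothetical connection
conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction block
with conn.cursor() as cur:
    cur.execute("ALTER SYSTEM SET random_page_cost = 1.1;")
    cur.execute("SELECT pg_reload_conf();")  # pick up the change without a restart
conn.close()
```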
SSD-native RDBMS sounds good in theory! What does it mean in practice? Which relational databases are simpler and more performant? Point me in their direction!
What kind of problem are you talking about, compared to existing satellites? That is, all existing satellites generate power and need to dissipate that power - most of it ends up as waste heat - and they somehow do that successfully. What is the specific problem you're talking about that can't be solved by the same means?
The numbers matter. The thermal budget of a satellite is a tightly controlled thing. Large modern ones are on the order of a few to a couple of tens of kilowatts, so something like a few to a few dozen modern GPUs' worth of compute power. Even with thousands of yet-to-be-designed-or-launched satellites, it's going to have trouble competing with even a single current DC, plus it is in SPACE for some reason, so everything is more expensive for lots of reasons.
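A rough back-of-the-envelope, assuming ~20 kW for a large bus, ~1 kW per accelerator, and ~100 MW for a big AI datacenter (all round numbers I'm assuming for illustration):

```python
# Back-of-the-envelope: GPUs per satellite vs. a single terrestrial datacenter.
# All figures are assumed round numbers, not quoted specs.
sat_power_kw = 20        # assumed: large modern satellite bus, tens of kW
gpu_power_kw = 1.0       # assumed: one modern accelerator plus overhead
dc_power_mw = 100        # assumed: one large AI datacenter campus

gpus_per_sat = sat_power_kw / gpu_power_kw
sats_to_match_dc = dc_power_mw * 1000 / sat_power_kw

print(f"~{gpus_per_sat:.0f} GPUs per satellite")              # ~20
print(f"~{sats_to_match_dc:.0f} satellites to match one DC")  # ~5,000
```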
> it's going to have trouble competing with even a single current DC
This looks like a valid argument to me, yes. Elon mentioned 1,000,000 satellites - I'm thinking of the 3rd version of Starlink as a typical example: 2 tons each, 60 satellites per Starship launch, about 16,000 Starship launches for the constellation, compared with 160 launches per year of today's Falcon 9...
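As a quick sanity check of that launch count, using the figures above (not my own estimates):

```python
# Launch math for a 1,000,000-satellite constellation, with the figures quoted above.
constellation_size = 1_000_000
sats_per_launch = 60              # Starlink-v3-class satellites per Starship
falcon9_launches_per_year = 160   # today's cadence, for comparison

starship_launches = constellation_size / sats_per_launch
years_at_f9_cadence = starship_launches / falcon9_launches_per_year
print(f"~{starship_launches:,.0f} Starship launches")                   # ~16,667
print(f"~{years_at_f9_cadence:.0f} years at today's Falcon 9 cadence")  # ~104
```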
The argument about the problems of dissipating heat still stands - I don't see a valid counterargument here. Also the "SPACE" problem looks different from the point of view of this project - https://www.50dollarsat.info/ . Basically, our launch costs go way down, and the quality of electronics and related tech on Earth today is high enough to work in LEO.
The ISS's radiators weigh thousands of kilograms to radiate around 70 kW. He's talking about building data centers in space in the GW range.
Assuming he built this in LEO (which doesn't make sense because of atmospheric drag), took the highest estimates for what Starship could one day deliver to LEO (200 metric tons), and needed only 1 metric ton of radiators per 100 kW, that's 50 launches per gigawatt just to carry up the radiators.
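Spelling out the arithmetic behind that 50-launch figure (the radiator mass and Starship payload numbers are the assumptions above, not measured values):

```python
# Radiator mass and launches needed to reject 1 GW, under the assumptions above.
target_heat_gw = 1.0
kw_rejected_per_tonne = 100      # assumed: ~1 t of radiators per 100 kW
starship_payload_tonnes = 200    # assumed: optimistic Starship-to-LEO payload

radiator_tonnes = target_heat_gw * 1_000_000 / kw_rejected_per_tonne
launches = radiator_tonnes / starship_payload_tonnes
print(f"{radiator_tonnes:,.0f} t of radiators -> ~{launches:.0f} launches per GW")
# -> 10,000 t of radiators -> ~50 launches, for the radiators alone
```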
Even the buses for giant communications satellites are still at the single-digit-kilowatt scale. The current state of the art in AI datacenters is 500+ kW per rack.
So you're talking about an entirely different scale of power and needed cooling.
Broadly speaking, whatever energy a satellite receives from its solar panels it needs to send away again - and a lot of it leaves as heat. So the question is how much energy is received in the first place. We currently have about a quarter of a megawatt of solar panels on the ISS, so in principle - in principle - we know how to do this kind of scale per satellite. In practice we will perhaps have more, smaller satellites which together aggregate compute to the necessary level and power to the corresponding level.
> We currently have about a quarter of a megawatt of solar panels on the ISS
Its average output is like half of that, though. So something the size of the space station, a massive thing which is largely solar panels and radiators, can do like 120 kW sustained. Like 1-2 racks of GPUs, assuming you used the entire power budget on GPUs.
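Roughly, under the assumptions in this sub-thread (~240 kW of arrays at peak, ~50% orbit-average output, ~0.7 kW per GPU - all approximate):

```python
# ISS-scale power budget expressed in GPUs, with approximate figures.
iss_peak_kw = 240     # assumed: roughly "a quarter of a megawatt" of arrays
duty_factor = 0.5     # assumed: orbit-average output about half of peak
gpu_kw = 0.7          # assumed: one H100-class GPU, board power only

sustained_kw = iss_peak_kw * duty_factor   # ~120 kW
gpus = sustained_kw / gpu_kw               # ~170 GPUs
print(f"~{sustained_kw:.0f} kW sustained -> ~{gpus:.0f} GPUs, "
      "a couple of dense racks at best")
```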
And we're going to build and launch millions of these.
The reason we don't have a lot of compute in space is the heat issue. We could have greater routing density on communication satellites if we could dissipate more heat. If Starlink had solved this issue they would have something like triple the capacity and could just drop everything back to the US (like their fans think they do), rather than trying to minimise the number of satellites traffic passes through before exiting back to a ground station, usually in the same country as the source. In fact, conspiratorially, I think that's the problem he wants to solve, because wet dreams of an unhindered, unregulated space internet are completely unanswered in the engineering of Starlink.
I have actually argued this from the other side, and I reckon space data centres are sort of feasible in a thought-experiment sense. I think it's a solvable problem eventually. But heat is the major limiting factor, and the back-of-the-napkin math stinks, tbh.
IIRC the size/weight of the satellite is going to grow geometrically as you increase the compute, due to the size of the required cooling system. Then we get into a big argument about how you carry the heat from the components to the cooling system. I think oil, but it's heavy again, and several space-engineering types want to slap me in the face for suggesting it. Some Rube Goldberg copper-heatpipe-network-through-atmosphere system seems to be preferred.
I feel like, best case, it's a Tesla situation: he clears the legislative roadblocks and solves some critical engineering problem by throwing money at it, and then other, better people step in to actually do it. Also, triple the time he says it will take to solve the problem.
And then, ultimately, as parts fail there are diminishing returns on the satellite. And you don't even get to take the old hardware to the secondary market; it gets dropped in the ocean or burnt up on reentry.
I wonder when we'll get hexagonal lanes, triangular and Penrose tilings. Rest assured, there is a practically infinite set of features designers will invent. Language designers would do well to take into account the Scheme idea: a language is good when there is nothing left to remove.
But CSS is not a “programming language” - it’s a negotiation between browser engineers (who need to keep things fast and responsive) and web devs (who need to implement a fashionable design that is still distinctive for their brand).
> Cookie cutter design is what I like. I can compare the companies when they all have the same template for a website.
Any reference?
Also, I do feel like some people prefer animations. Maybe not the Hacker News crowd per se. But I think that having two options (or heck, three - the third being pure HTML, just text, no styling, maybe some simple markdown) is a good thing in my opinion.
Honestly, I do feel like 1-2 animations are okay on a website, but the award-winning websites really overdo it in my opinion.
I think maybe the amount of animation on https://css-tricks.com is about right, given that those guys teach other people about animation themselves and have only 1, maybe 2, animations that I can observe when interacting with their website - and I do feel like that's for good reason (they don't want animations to be too distracting).
I personally don't know - I've never built any such websites, but I recently wanted to and was looking at GSAP tutorials today. One of the frustrations I feel is that these animations sometimes don't respect the browser's preference about animations (scroll animations being the first one) - yet I even watched some designers talk about how important scroll animations are (betting that every award-winning website has scroll animations).
Even https://ycombinator.com has a lot of animations & CSS features, and people on HN did seem to love it from what I could tell. So to me, it does feel as if there is no one-size-fits-all.
That’s a very engineer thing to say. Most people are definitely different from you, and that’s why CSS is increasing in scope.
Also, if everyone is implementing the same Jumbotron design again anyway, why not standardise that and support it right away instead? That’s how we got a bunch of features recently, like dialogs, popovers, or page transitions. And it’s for the better, I think.
A strong reason to use LLMs today is accessing plain-text information without needing to interface with someone else's stupid CSS.
You really think the general sentiment around CSS is: yay, things are improving?
Another strong reason to use LLMs: not needing to write CSS anymore.
I don’t care about the general sentiment when I state my personal opinion. There are definitely people who like CSS and the direction it’s moving in.
And that being said: the ability to express something in a single CSS directive as opposed to a special incantation in JavaScript is an objective improvement, especially with LLMs.
I think the plan going forward is to allow people to implement their own CSS layout primitives using Houdini, but I haven't kept track of how it has evolved or progressed.
Microsoft MS-DOS and Windows supported this in the 90s with DriveSpace, and modern file systems like btrfs and zfs also support transparent compression.