Hacker News | ephemeral-life's comments

If this is what the richest economy in the world looks like, imagine the rest of the world.


Well, the United States ranks 23rd in happiness, so I imagine many of them are quite a bit happier overall!

https://worldhappiness.report/ed/2024/happiness-of-the-young...


Maybe they are more equally poor? Anyone wanna chart that as per capita GDP vs Gini index, as a loose approximation?


Even if Google kills Flutter, I think the project has reached the point where it can live on without them. The framework and ecosystem have been around for years now, and in that time they've been maintained by some seriously smart people. It can live on with a smaller team; new features will just take a little longer.

Obviously it's still sad when people get laid off though.


Google's GWT and its ecosystem are dead even though it was hugely popular - today nobody considers it for a new app. Same with Facebook's Parse - once Meta stopped supporting it, most people moved away, even though it was open sourced and the community seems to still work on it; instead people use Firebase or Supabase.

Flutter is more complex and more ambitious than GWT or Parse - it currently has 12k+ open issues; imagine how many developers you need to resolve all of those. Without Google's support, Flutter would be practically dead - no company would consider it for any serious new project and would instead go with React Native, a PWA, Unity, or JetBrains Compose Multiplatform.


I tried GWT years ago and it seemed clunky. It reminded me of Swing and SavaJe. Learning Flutter now, which seems vastly better. As you point out, it's "more complex and more ambitious", but it also seems much more ready for creating end-user-facing software.


> nobody can use its code on open-source projects any more

This just shows you have no idea what the SSPL entails. It states you can use it for whatever you want, but if you want to provide the product as a service, you need to share all your infrastructure code for providing the service.

It's basically the AGPL with a carve-out for AWS, and that's entirely valid because AWS are vultures. They probably make the most money in the world from Postgres but aren't even in the top 3 contributors[0].

[0] - https://www.enterprisedb.com/blog/importance-of-giving-back-...


In at least some (many?) companies, the AGPL and SSPL are not on the list of allowed licenses (developers aren't allowed to use open-source components unless they carry one of the approved licenses). Even if technically they could use Redis without breaking the SSPL, lawyers often err on the side of caution.


Except that the SSPL has some complex requirements, is untested in any court, and is not compatible with any other open-source license. So you can't, for example, incorporate the new Redis into a GPL/AGPL piece of software, at all.


Operating a reliable Postgres service is an entirely different set of technologies, expertise, and resources than just the software itself. AWS doesn't make money off of the Postgres code. They make money off of providing a reliable and hands-off Postgres hosting system that includes compute, security, scaling, backups, and upgrades. The Postgres code itself is only a small part of the work and resources that go into providing such a service. Your dismissive attitude toward the expertise and resources that go into quality system operation is really depressing.


No, I think it means that you don’t understand the license. Since it is no longer open-source licensed, I can no longer use code from Redis (after this change) in open-source projects, since the licenses are not compatible.

In case it isn't clear: I can't now pick up, say, a module from Redis and use it in a GPL/MIT/BSD-licensed (i.e. open-source) project.

I would suggest making some time to research the effects because it obviously doesn’t work the way you think.


Taxation reform won't be necessary. It's just straight up unobtainable by most countries.


Does anyone know of good online resources to teach yourself law? CS and engineering have great courses online; surely there are some hidden law gems.


By law here, you mean EU style civil law?


I mean any set of resources that after studying it will make you confident in reading material in law. It doesn't necessarily have to be a specific law system, or is that not how things work?


> I mean any set of resources that after studying it will make you confident in reading material in law. It doesn't necessarily have to be a specific law system, or is that not how things work?

That's mostly not how things work, at least if you mean to imply “justifiably” before “confident”.

Law isn’t physics where there is a universal underlying truth; it is a social construct, and each system of law is its own construct.


There's some overlap, but yea it's really not how things work.

If you're in the US, UK, Aus, Canada, etc., the common-law system in use is fundamentally quite different from civil-law systems.


On another HN post someone made a really interesting comment about why the US government attacks its own strategic assets. Both Microsoft and Apple do business worldwide and funnel cash into the US, and these DOJ lawsuits affect how these companies operate globally. The anticompetitive behaviour involved is usually very intricate, and it takes a good understanding of these businesses to spot when they cross the line. Most governments will never understand this and won't regulate these companies, so it's weird that the US government does it for them.

On the other hand, Apple has shown that it is willing to differentiate product lines along geographic boundaries, so maybe this lawsuit will make Apple offer three lines of products: for Europe, the US, and the rest of the world.


30x is the kind of number that, when you see it claimed as a generational improvement, you should ignore as marketing fluff.


From how I understood it, they optimised the entire stack, from CUDA down to the networking interconnects, specifically for data centers, meaning you get 30x more inference per dollar at data-center scale. This is probably not fluff, but it's only relevant for a very, very specific use case, i.e. enterprises with the money to buy a stack to serve thousands of users with LLMs.

It doesn't matter for anyone who isn't Microsoft, AWS, OpenAI, or similar.


It's a weird graph... It's specifically tokens per GPU, but the x-axis is "interactivity per second", so the y-axis folds in Blackwell being twice the size and also the gain from FP8 -> FP4 - note that gain gets counted multiple times, since half as much data needs to go through the network as well.


The 30x they showed was for FP4. Who is using FP4 in practice?


But maybe you should. Once the software stack is ready for it, more people will, since the performance gains are so massive.


It would depend highly on the model though. Some stuff will generalize better to FP4 than others.
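
A toy sketch of why low-bit formats lose precision. This uses uniform int4-style quantization (16 levels) as a simplified stand-in; real FP4 formats (e.g. E2M1) split the bits into exponent and mantissa, and the scale here is chosen by hand rather than calibrated.

```python
# Snap each weight to one of 16 levels (-8..7 times a scale factor),
# then map back to floats. Values outside the range get clamped,
# which is one way small formats hurt some models more than others.

def quantize_4bit(values, scale):
    out = []
    for v in values:
        q = max(-8, min(7, round(v / scale)))  # 16 representable levels
        out.append(q * scale)
    return out

weights = [0.31, -0.07, 0.52, -0.44]  # made-up example weights
scale = 0.1                           # hand-picked per-tensor scale
print(quantize_4bit(weights, scale))  # each value snaps to a multiple of 0.1
```

Whether a model tolerates this depends on how its weight and activation distributions fit the few representable levels, which is why some models generalize to FP4 better than others.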


Definitely a fun read, but the author has an axe to grind with the references to "who gets bitches" and the unnecessary footnotes. Take what you will from it.


There's no literal quote about "who gets bitches" or the word bitch at all in the post though. What you refer to is a paragraph about the distinction between programming and painting (meant as a joke counter-argument to PG's essay about how hacking is essentially like painting), which opens like: "Great paintings, for example, get you laid in a way that great computer programs never do".

The author (who is on HN too, btw) does go into a more substantive difference between the nature of the two endeavours, but the whole post is in a joking tone. And yes, it has "an axe to grind", but it's not about who gets the girl. It's about taking down essays they consider pompous and self-congratulatory.


If you only use it for occasional iOS dev, rather get a Mac mini. As a bonus, when you're done with it, put Asahi Linux on it and it'd make a great home server.


I've considered that and might end up taking that route but the nice thing about a MacBook is I can take it with me on trips and learn iOS when I get bored.

There's a big server rack at home with multiple servers, so the Linux server part isn't a draw in my case.


Memory bandwidth doesn't matter for these types of devices, it is memory latency that matters. But best of all is actually having your application in memory and not having to do disk reads.


Most usual (native) applications run under 100MB of real memory. That is, unless you insist on using Electron apps from developers who don't care about performance...


That's great for you, I guess. When I work, I want VS Code with all the fancy linting and language-server bells and whistles. I want to be able to have a ton of Chrome tabs for browsing and docs, and I want a local version of whatever I'm building to run. I'm gonna make the bold assumption that this is quite common in the dev community. Given that, 8GB is pushing it, no matter how you spin the "but my memory is fast" or "but native apps" arguments.

