Hacker News | uluyol's comments

Not sure early versions of Rust are the best example of refcounting overhead. There are a bunch of tricks you can use to reduce it, and it usually doesn't make sense to invest much time in that sort of thing while there is so much flux in the language.

Swift is probably a better baseline.


Yeah I was thinking the same thing. "10 years ago the Rust compiler couldn't produce a binary without significant size coming from reference counts after spending minimal effort to try and optimize it" doesn't seem like an especially damning indictment of the overall strategy. Rust is a language which is sensitive to binary size, so they probably just saw a lower-effort, higher-reward way to get that size back and made the decision to abandon reference counts instead of sinking time into optimizing them.

It was probably right for that language at that time, but I don't see it as being a generalizable decision.

Swift and ObjC have plenty of optimizations for reference counting that go beyond "Elide unless there's an ownership change".


In case anyone is interested, V8 pre-ignition/TurboFan had different tiers [1]: full-codegen (dumb and fast) and crankshaft (optimizing). It's interesting to see how these things change over time.

[1]: https://v8.dev/blog/ignition-interpreter
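The tier-up idea behind both pipelines (full-codegen -> Crankshaft, Ignition -> TurboFan) is simple to sketch: run everything through a cheap baseline tier, count invocations, and swap in an optimized version once a function gets hot. A toy illustration of that dispatch shape, not V8's actual machinery; the threshold and function names are made up:

```python
HOT_THRESHOLD = 3

def tiered(baseline, optimized):
    """Dispatch to `baseline` until the call count crosses HOT_THRESHOLD,
    then 'tier up' to `optimized` for all later calls."""
    state = {"calls": 0, "impl": baseline}

    def call(*args):
        state["calls"] += 1
        if state["calls"] > HOT_THRESHOLD and state["impl"] is baseline:
            state["impl"] = optimized  # hot: swap in the optimizing tier
        return state["impl"](*args)

    return call

# Baseline: naive sum; "optimized": closed-form formula.
tiers_used = []

def slow_sum(n):
    tiers_used.append("baseline")
    return sum(range(n + 1))

def fast_sum(n):
    tiers_used.append("optimized")
    return n * (n + 1) // 2

f = tiered(slow_sum, fast_sum)
results = [f(10) for _ in range(5)]
assert all(r == 55 for r in results)
assert tiers_used == ["baseline"] * 3 + ["optimized"] * 2
```

Real engines additionally have to deoptimize back to the baseline tier when the optimized code's assumptions are invalidated, which is a large part of why these pipelines keep changing over time.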


You mean the rockets that are intercepted? Which civilians have been hit, compared to the tens of thousands in Gaza in a few weeks? Those interceptor rockets cost Israel (and the US) far more than it costs Hamas to manufacture theirs.

That's economic warfare. It's like a DDoS attack. Do something cheap that costs your enemy a lot.

The goal is to make occupation too expensive to sustain.


Israel has been begging for the US to get more involved.

Like how they are provoking Hezbollah with their "double-tap" strikes.


Israel and Hezbollah have been in a low-grade state of war for over a decade. If Hezbollah hadn't diverted its energy to the Syrian Civil War, they might by now have been in a state of total war. Hezbollah has units dedicated to infiltrating and attacking targets in Israeli Galilee. My guess is, and I could be wrong, that it probably doesn't make sense to attribute a motive of "dragging the US in" to Israeli strikes on Hezbollah. In many ways, Israeli strikes on Hezbollah are far more ordinary than strikes in Gaza are.


If the purpose of keeping emissions low is humanitarian, then there is a much bigger humanitarian concern at the center of all this.


> there is a much bigger humanitarian concern at the center of all this

If you’re talking about Yemen, sure. If you’re talking about Gaza, it’s naïve to think anything there will restore confidence in the Bab al-Mandab.


The Houthi leadership have repeatedly said that they will stop the attacks if the Israeli assault & ethnic cleansing of Palestine ends.

The real naivete is presuming that the "lalala can't hear you, I will do whatever I want" playbook of US/Israeli foreign policy is sustainable in any way.


Lots of people say lots of things. When it comes to geopolitics, actions and capability are what matter. Until someone is willing to underwrite shipping insurance on the Houthis’ word, their leadership’s promises are worthless.


So the Houthi are admitting that they control the pirates in the Red Sea so strongly that they can order them to stop and they will obey?


Negotiating with terrorists is stupid. Giving in to their demands only empowers them to do it again.


I don't think this applies to any of the incidents mentioned in the article.


In fact, if you don't have enough room to react to "unexpected" behavior, you are at fault lol.


When a human driver must emergency-brake for a downed branch, it's fine. When an AI does it, it's unexpected and needs to be hyperanalyzed. I swear, the trolley problem is absurd; it's poisoned all debate. 99% of crashes are people not driving defensively and so not being able to stop in time when called to do so.


>it's unexpected and needs to be hyperanalyzed

There's a good reason for this. It's because the human can be interrogated into what was going through their mind whereas many ML models cannot. That means we can't ascertain if the ML accident is part of a latent issue that may rear its ugly head again (or in a slightly different manner) or just a one-off. That is the original point: a theory-of-mind is important to risk management. That means we will struggle to mitigate the risk if we don't "hyperanalyze" it.


You're missing the context. The AI didn't actually do anything unexpected, unless you expected it to try and drive through a downed branch. The AI behaved exactly as it should. The unexpected part was when the car behind the AI didn't see the branch and, therefore, didn't expect the AI car in front to stop. Unexpected doesn't mean wrong.

Cars can do unexpected things for good reasons, as the AI did in this case.


I'm taking in a larger context. I think just reading the three cited examples is an incorrect approach. For one, Waymo isn't sharing "all" their data; they've already been highlighted for bad practices in only sharing data from incidents their own team decided were bad decisions. That's not necessarily objective, and can also create perverse incentives to obfuscate. So we don't have a great set of data to work with, because the data-sharing requirements have not been well-defined or standardized. Secondly, if you look at reports of other accidents, you can see where AV developers have heinously poor practices as it relates to safety-critical software. Delaying actions as a mitigation for nuisance braking is a really, really bad idea when you are delaying a potentially safety-critical action. I'm not saying Waymo is bad in this regard, but we know other AV developers are and, when you combine that with the lack of confidence in the data and the previous questionable decisions around transparency, it should raise some questions.


Cory Doctorow - Car Wars https://web.archive.org/web/20170301224942/http://this.deaki...

(Linking to the web.archive version because the graphics are better / more understandable when in the context of some of the text)

Chapter 6 is the most relevant here, but it's all a thought provoking story.


Density is one of the main ways to get cost savings. But there are others too, and there's also a lot of hype around them. Chiplets for example. Or CXL for memory.


Yes, but no customer wants to give Nvidia monopoly money forever either. So like it or not they need alternatives.


> but no customer wants to give Nvidia monopoly money forever either.

From a consumer perspective, I agree. From a datacenter, edge and industrial application perspective though, I think those crowds are content funding an effective monopoly. Hell, even after CUDA gets dethroned for AI, it wouldn't surprise me if the demand continued for supporting older CUDA codebases. AI is just one facet of HPC application.

We'll see where things go in the long-run, but unless someone resurrects OpenCL it feels unlikely that we'll be digging CUDA's grave anytime soon. In the world where GPGPU libraries are splintered and proprietary, the largest stack is king.


I wish I could buy ML cards with Monopoly money.


These stats can be misleading for other reasons as well.

The US and EU have a lot of existing infrastructure, construction of which required enormous amounts of CO2 emissions. Countries that are still building out their infrastructure should be expected to emit more CO2 per capita to catch up. (Modern equipment is more efficient, but still.)


> Countries that are still building out their infrastructure should be expected to emit more CO2 per capita to catch up. (Modern equipment is more efficient, but still.)

Well, the UAE is not exactly a "country that is still building", so your comment is rather misleading. Nevertheless, we can also compare the "share of global cumulative CO₂ emissions" for the UAE and the USA: 0.3% and 24% respectively. Prorated to population, (0.3 / 10) / (24 / 330) ≈ 0.4, so per capita the UAE's cumulative emissions are already about 40% of the USA's, even though its development only started 50 years ago.
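Spelling out the arithmetic, with the cumulative-emission shares from the Our World in Data link below and the approximate population figures (~10M UAE, ~330M US) the comparison assumes:

```python
# Share of global cumulative CO2 emissions, in percent (per Our World in Data).
uae_share, usa_share = 0.3, 24.0
# Populations in millions (approximate figures assumed above).
uae_pop, usa_pop = 10.0, 330.0

per_capita_ratio = (uae_share / uae_pop) / (usa_share / usa_pop)
assert round(per_capita_ratio, 2) == 0.41  # UAE at ~40% of the US, per capita, cumulatively
```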

And again, if you consider that of the 10 million people living in the UAE, about 9 million are migrant workers, the vast majority living in dorms, the picture is clear: the UAE's current lifestyle is totally unsustainable.

https://ourworldindata.org/grapher/share-of-cumulative-co2?t...


And they ship/externalize much of their costs to other places. Just a few examples with China: the US and EU send their plastic there for "recycling", and probably the majority of the goods we use are manufactured in China, so China takes the "carbon hit".


China doesn't take the world's plastic anymore; that comment is now 4 years out of date. Since then, some of it was shipped to south-east Asian countries, but they started cutting back soon after.

That being said, plastic is a problem in its own right, not a "carbon" concern as discussed at COP28, so it is off-topic here.

Regarding the "externalize manufacture"/"carbon hit", again, this is a little outdated. It was very true from 2005-2010, but since then China has developed rapidly and is now importing almost as much carbon as it is exporting. See graphs below for "consumption-based CO2 emissions vs. territorial emissions". Except for the EU, it is a matter of ~10% difference.

https://ourworldindata.org/grapher/production-vs-consumption...


Great graphic. If you zoom out over all time, the curves look fairly similar.


Yes, the main difference would be that distributed systems have more failure modes to be concerned with and that dictates a lot.


In theory it's the same if your model allows for the scheduler to preempt a thread and never schedule it again.
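That equivalence can be demonstrated: from the outside, a thread that is never scheduled again is indistinguishable from a crashed remote node, and the only defense in both cases is a timeout. A minimal sketch, where an `Event` that is never set stands in for a thread the scheduler never resumes:

```python
import threading

never_set = threading.Event()  # stands in for "preempted and never rescheduled"
worker = threading.Thread(target=never_set.wait, daemon=True)
worker.start()

# The observer can't distinguish "preempted forever" from "crashed":
# all it can do is bound how long it waits, exactly like a distributed
# health check or heartbeat timeout.
worker.join(timeout=0.2)
suspected_failed = worker.is_alive()  # still "running", but making no progress
assert suspected_failed
```

In practice, of course, OS schedulers do eventually run every runnable thread, which is why single-machine code rarely bothers with this failure mode while distributed code must.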

