
This is very related to a talk I did last year [1]. "Part 2: youtube-dl" starts at 18:21. It dips its toes into an analysis of software that fundamentally depends on ongoing human labor to maintain (as compared to, e.g., zlib, which is effectively "done" for all intents and purposes).

More concretely, the additional Deno dependency is quite problematic for my music player, especially after I did all that work to get a static, embeddable CPython built [2].

Ideally for me, yt-dlp would be packaged into something trivially embeddable and sandboxable, such as WebAssembly, calling into external APIs for things like networking [3]. This would reduce the value delivered by the yt-dlp project to pure DRM-defeating computation, leaving concerns such as CLI/GUI to a separate group of maintainers. A different project could choose to fulfill those dependencies with Deno, or Rust, or, as in my case, built directly into a music player in Zig.
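To illustrate the shape of that split: the wasm module would import its networking from the host, something like the hypothetical declaration below (the module and function names are invented for illustration; this uses clang's wasm import attributes).

    // Hypothetical host-provided networking import for a wasm-packaged core.
    // The embedder (Deno, a Zig music player, ...) supplies the implementation.
    #include <stddef.h>

    __attribute__((import_module("host"), import_name("http_get")))
    extern int host_http_get(const char *url, size_t url_len,
                             char *response_buf, size_t buf_cap);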

Of course I don't expect the yt-dlp maintainers to do that. They're doing something for fun, for free, for pride, for self-respect... in any case their goals aren't perfectly aligned with mine, so if I want to benefit from their much appreciated labor, I have to provide the computational environment that they depend on (CPython [4] and Deno).

But yeah, that's going to be a huge pain in the ass, because now I either have to drop support for yt-dlp in my music player, or additionally embed Deno as well as introduce Rust as a build dependency... neither of which I find acceptable. And don't even get me started on Docker.

[1]: https://www.youtube.com/watch?v=SCLrNqc9jdE

[2]: https://github.com/allyourcodebase/cpython

[3]: https://ziglang.org/news/goodbye-cpp/

[4]: https://github.com/yt-dlp/yt-dlp/issues/9674


> introduce Rust as a build dependency

https://news.ycombinator.com/item?id=45314055

Just like git! This is the present and future. :(


It can be a mixture of both. It's extremely easy to Cover Your Ass while intentionally dragging your feet when a bug works in your favor. The manager simply has to decide that other tasks are higher priority.

Why would any manager prioritize this when it's going to blow over in less than a day, as evidenced by other commenters saying the site is already back up?

Right. I mean, ideally, because regulations have sufficient teeth that the company's existence is jeopardized by having shady business practices. When "it's a bug" is no longer an excuse, they could have avoided such a risk by having customers buy punch cards rather than saving their credit cards, for instance.

This administration is not going to apply said regulations, especially when said regulations would punish what it favors.

Call it HN’s rule: Never attribute to incompetence what can be attributed to malice

The problem with "Hanlon's Razor" is that everything can be explained by incompetence by making suitable assumptions. It outright denies the possibility of malice and pretends as if malice is rare. Basically, a call to always give the benefit of the doubt to every person or participant's moral character without any analysis whatsoever of their track record.

Robert Hanlon himself doesn't seem to be notable in any area of rationalist or scientific philosophy. The most I could find about him online is that he allegedly wrote a joke book related to Murphy's laws. Over time, it appears, this obscure statement from that book had "Razor" appended to it and gained respectability as some kind of rationalist axiom. Nowhere is it explained why this Razor needs to be an axiom. It doesn't encourage the need to reason, examine any evidence, or examine any probabilities. Bayesian reasoning? Priors? What the hell are those? Just say "Hanlon's Razor" and nothing more needs to be said. Nothing needs to be examined.

The FS blog also leans on this lazy shortcut, saying:

> The default is to assume no malice and forgive everything. But if malice is confirmed, be ruthless.

No conditions. No examination of data. Just an absolute assumption of no malice. How can malice ever be confirmed in most cases? Malicious people don't explain all their deeds so we can "be ruthless."

We live in a probabilistic world but this Razor blindly says always assume the probability of malice is zero, until using some magical leap of reasoning that must not involve assuming any malice whatsoever anywhere in the chain of reasoning (because Hanlon's Razor!), this probability of malice magically jumps to one, after which we must "become ruthless." I find it all quite silly.

https://simple.wikipedia.org/wiki/Hanlon%27s_razor

https://fs.blog/mental-model-hanlons-razor/


Assuming incompetence instead of malice is how you remain collegial and cordial with others.

Assuming malice from people you interact with means dividing your community into smaller and smaller groups, each suspicious of the other.

Assuming malice from an out group who have regularly demonstrated their willingness to cause harm doesn’t have that problem.


From the parent's comment:

> It doesn't encourage the need to reason, examine any evidence, or examine any probabilities

Parent isn't advocating for assuming malice, or assuming anything really, but to reason about the causes. Basically, that we'd have better discourse if no axiom was used in the first place.


I agree. It seems to be an all too common example of both: 1. lack of nuance in thought (i.e. either assume good intentions or assume malice, rather than some probability of either, or a scale of malice); 2. the overwhelming prevalence of bad-faith arguments, most commonly reading the worst possible argument into someone's words.

In this case, instead of the possibility of it being a small, unpremeditated act of opportunity (like the foot-dragging mentioned above), alternatives are never mentioned; it's just assumed that folks are talking about some higher-up conspiracy, and on top of that, that this must be what these people are always doing.

Anyway, thank you for your point, it is an interesting read :)


It doesn’t say don’t think about malice as a possibility, it says that if you aren’t going to think about it, you should ignore malice as a possibility.

Yep, "Hanlon's Razor" is pseudo-intellectual nonsense. It sets up a false dichotomy between two characteristics, neither of which is usually sufficient to explain a bad action.

IMHO you're taking it a bit too literally and seriously; I suggest interpreting it more loosely, ie "err on the side of assuming incompetence [given incompetence is rampant] and not malice [which is much rarer]." As a rule of thumb, it's a good one.

To me the more problematic part is anchoring the discussion in rejecting a specific extreme (malice), when a lot of behavior will be either milder, or neither incompetence nor malice. For instance, is greed, opportunism, or apathy malice?

Good point. Basic self-interest is also as likely as incompetence. (shrug)

Why not both?

That's because actual malice IS rare. Corporations are not filled with evil people, but people make perfectly rational, normal decisions based on their incentives that result in the emergent phenomenon of perceived malicious actions.

Even Hitler's actions can be traced through a perfectly understandable, although not morally condone-able, chain of events. I truly believe that he did not want to just kill people and commit evil, he truly wanted to better Germany and the human race, but on his journey he drove right off the road, so to speak. To quote CS Lewis, "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience."


The "malice" part of the razor is bait. People typically act out of self-interest, not malice. That's why anyone who parrots Hanlon's Razor has already lost; they fell for the false dichotomy between malice and incompetence, when self-interest isn't even offered as an explanation.

That's why scapegoating and demonizing people is so bad, it's a way of telling folks that violence can make the world better instead of worse.

What is rare? How is this measured?

Why do incentives result in perceived malicious actions rather than just malicious actions or minor malicious actions?

On top of this no one has said corporations are filled with evil people.



> Corporations are not filled with evil people, but people make perfectly rational, normal decisions based on their incentives that result in the emergent phenomenon of perceived malicious actions.

This rationalization is cope. That all US corporations are making "normal" decisions all the time isn't at all obvious. I would say that wherever there is an opportunity to exploit the customer, they usually do, at different levels of sophistication. This may mistakenly seem like fair play to someone who thinks a good UI is a fair trade for allocated advertisement space, when it's literally social engineering.

Corporations make decisions that more frequently benefit them at the cost of some customer resource. Pair that with decisions rarely being rolled back (without financial incentive), and you get a least-fair optimization over time. This is not normal by any stretch, as people expect a somewhat fair value proposition. Corporations aren't geared for that.


Agreed that actual malice is relatively rare (at least, relative to incompetence!). But I feel your take on Hitler is questionable. The question of evil is a tricky one, but I don't think there's a good case to be made that he was only trying to do the right thing. He was completely insane. But leaving aside moral culpability or metaphysical notions of judgment, for any definition of "malice", he embodied it to the absolute maximum degree.

> That's because actual malice IS rare. Corporations are not filled with evil people,

Corporations don't have to be filled with evil people for malice to be rampant. All it takes is for one person in a position of power or influence who is highly motivated to screw over other human beings to create a whole lot of malice. We can all think of examples of public officials or powerful individuals who have made it their business to spread misery to countless others. Give them a few like-minded deputies and the havoc they wreak can be incalculable.

As for Hitler, if we can't even agree that orchestrating and facilitating the death of millions of innocent people is malicious, then malice has no meaning.

C. S. Lewis has written a great many excellent things, but his quote there strikes me as self-satisfied sophistry. Ask people being carpet bombed or blockaded and starved if they're grateful that at least their adversary isn't trying to help them.


Ferret7446:

> Even Hitler's actions can be traced through a perfectly understandable, although not morally condone-able, chain of events. I truly believe that he did not want to just kill people and commit evil, he truly wanted to better Germany and the human race, but on his journey he drove right off the road, so to speak.

Disgusting take. Don't simp for hitler. How am I having to type this in 2025?


I recently heard a podcast where one of the guests recounted what his father used to say about the employees making cash-handling mistakes in the small store he owned. It was something like, "if it was merely incompetence, you'd think half of the errors would be in my favor."

It probably is a glitch in this case, but it's hard not to see the dark patterns once you've learned about them.


If you shortchange a customer, they will demand correct change; if you overpay, a customer won't complain. The cases of customers giving back extra money come out neutral.

His father's theory didn't take this into account.


Incompetence, filtered by customers biased to complain when cheated and to ignore mistakes in their favour?

Hanlon's Razor is for a situation where good faith can be assumed, or the benefit of the doubt given.

When the actors involved have shown themselves to be self-interested, bad-faith, or otherwise undeserving of the benefit of the doubt, it can be abandoned, and malice assumed where it has been clearly present before.


my first manager told me as i started my first oncall "we dont think anybody actually cares about this thing, so if it breaks, dont fix it too quickly, so we can see who notices"

I’m amazed at the prevalence of conspiracy theories on HN in recent years. Even for simple topics like a website crashing under load we get claims that it’s actually a deliberate conspiracy, even though the crashes have turned this from a quiet event into a social media and news phenomenon, likely accelerating the number of cancellations.

COVID years really messed some people up.

You mean like all the people that died? The caretakers in the years after? The medical staff who never got a break? You're right about that.

My comment was not about COVID.

Your comment was:

> COVID years really messed some people up.

You seem to think that you said something different than you did.

If you don't see where your communication broke down, look closely at the first word of the quote above. That's you, in case you forgot.


No, what I said was the COVID years. People became dramatically more prone to conspiracy theories and significantly more polarized in the 2020-2025 period. A lot more happened to people than just exposure to COVID, which was of course part of it. I'm not talking about the people who died or the healthcare workers. There was a meaningful step change in the way we interact with each other and what is acceptable. There was a huge impact to the social fabric and cohesion of society.

Using my eyes, I looked back at the text on my screen next to your username.

Your comment was 7 words, one of which is literally "COVID". Then you said you weren't actually talking about COVID, but you actually meant something about how you think others are now prone to dramatic conspiracy theories.

It seems like you're experiencing some of this yourself, or are stuck in some sort of race condition where if someone else doesn't agree with you, it's clearly a them issue. They're the conspiracist.

While explaining that you intended me to get a whole different message from your initial 7 words, you go on to say, while discussing the "COVID years", that...

> I'm not talking about the people who died or the healthcare workers.

Why aren't you focusing on these things? They seem much more important than whatever you are spinning on about the social fabric and cohesion of society as you type into a webform to a stranger about how everyone has conspiracies now.


You see this in video games. Game-breaking bugs? Next week. People can't buy or use skins for a weapon? Less than 24 hr fix.

That's true, but it's seldom going to be the case that the account cancellation portion of the app is all on its own. It's going to be built into the rest of the application, including the parts your happy customers are actually paying for. You're taking down a lot of the site.

And I don't know about others, but the one thing that's sure to make me cancel and never return is when a business tries to be a jerk to subscribers. I know one subscription service that, when you try to cancel, will instead ask you to pause. Except when you pause, the site makes the buttons to complete a sale start out disabled. Then 10 to 15 seconds later, the button enables. It only does this so that they can show you a request to resume your subscription. Nope. I immediately went and fully cancelled, and I haven't been back. I only intended to pause for a short time because I was unable to use the service at all for several weeks. Instead, because they wanted to grasp onto every customer too tightly, they lost me for good. They didn't respect me, so I don't want their product anymore.


If you think you need libxml2, think again. XML is a complex beast. Do you really need all those features? Maybe a much smaller, more easily maintained library would suit your needs while performing better at the same time!

For instance, consuming XML and creating it are two very different use cases. Zooming into consuming it, perhaps your input data has more guarantees than libxml2 assumes, such as the nonexistence of meta definition tags.


> Do you really need all those features?

"You" probably do not.

But different "yous" need different features, and so they all get glommed together into one big thing. So no one needs "all" of libxml2/XML's features; each individual needs a different subset.


It's the same as the old joke about Microsoft Word: people only use 10% of Word's functionality, but the problem is each person uses a different 10%.

Of course this is an oversimplification, and there will no doubt be some sort of long tail, but it expresses the challenge well. I'd imagine the same is true for many other reasonably complex libraries, frameworks, or applications.


XML without DTDs is a very reasonable subset that eliminates significant complexity (no need for an HTTP client!) and security risks (no custom character entities that are infinitely recursive or read /etc/passwd!) and would probably still work for >80% of users.

(I wrote such an XML parser a long time ago.)


Why throw out numbers when we all know you haven't actually measured that it's >80%?

In any case, the tooling around XML (DTDs, XPath, XSLT, etc.) is the reason to use it. I would go so far as to say the (supposed) >80% not using those features shouldn't have used XML in the first place.


I agree, which is part of why I generally dislike using XML for most things.

Not to mention that libxml2 underlies things like nokogiri (the commonly used html parsing gem for ruby), beautifulsoup (python's equivalent), etc.

Pretty sure beautifulsoup uses python’s builtin html.parser but can optionally use html5lib or lxml if installed, and it is lxml, not beautifulsoup, that actually depends on libxml2.

You’re right about nokogiri, though.


Ah, you're right, in the codebase I'm familiar with lxml is used for performance, though it's not the default.

I kinda want something which just treats XML as a dumb tree definition language... give me elements with attributes as string key/value pairs, and children as an array of elements. And have a serialiser in there as well, it shouldn't hurt.

Basically something that behaves like your typical JSON parser and serialiser, but for XML.
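Something with roughly this shape, as a sketch (hypothetical type names, not any existing library's API):

    // A "dumb tree" view of XML: names, string attributes, ordered children.
    #include <stddef.h>

    typedef struct Attr { const char *key, *value; } Attr;

    typedef struct Element {
        const char *name;
        Attr *attrs;               // attributes as string key/value pairs
        size_t attr_count;
        struct Element *children;  // child elements, in document order
        size_t child_count;
        const char *text;          // concatenated character data, if any
    } Element;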

To my knowledge, this is what TinyXML2 does, and I've used TinyXML2 for this before to great effect.


That's what you call a DOM parser - the problem with them is that, since they deserialize all the elements into objects, bigger XML files tend to eat up all of your RAM. And this is where SAX2 parsers come into play, where you define callbacks on the tree events and process the data as it streams by.
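For example, a minimal SAX-style parse with expat (mentioned elsewhere in this thread) looks something like this; the document and handler names are invented for illustration:

    #include <stdio.h>
    #include <expat.h>

    // Called at every opening tag; the user-data pointer carries nesting depth.
    static void XMLCALL on_start(void *ud, const XML_Char *name,
                                 const XML_Char **attrs) {
        int *depth = ud;
        printf("%*s<%s>\n", *depth * 2, "", name);
        ++*depth;
        (void)attrs;
    }

    static void XMLCALL on_end(void *ud, const XML_Char *name) {
        --*(int *)ud;
        (void)name;
    }

    int main(void) {
        const char doc[] = "<library><book><title>Example</title></book></library>";
        int depth = 0;
        XML_Parser p = XML_ParserCreate(NULL);
        XML_SetUserData(p, &depth);
        XML_SetElementHandler(p, on_start, on_end);
        // A real streaming setup would feed XML_Parse one buffer at a time.
        if (XML_Parse(p, doc, (int)(sizeof doc - 1), 1) == XML_STATUS_ERROR)
            fprintf(stderr, "parse error: %s\n",
                    XML_ErrorString(XML_GetErrorCode(p)));
        XML_ParserFree(p);
        return 0;
    }

Memory use stays proportional to nesting depth rather than document size, which is the whole point for the multi-gigabyte dumps discussed below.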

The solution is simple: don't have XML files that are many gigabytes in size.

A lot of telco stuff dumps multi-GB XML hourly, per BTS. Processing a few TB of XML files on one server daily.

It's doable, just use the right tools and hacks :)

Processing schema-less or broken schema stuff is always hilarious.

Good times.


Lol I love the upbeat tone here. Helps me deal with my PTSD after working with XML files.

Depending on the XML structure and the server's RAM - it can already happen as you approach 80-100 MB file sizes. And to be fair, in the Enterprise context, you are quite often not in a position to decide how big the export of another system is. But yes, back in 2010 we built preprocessing systems that checked XMLs and split them up into smaller chunks if they exceeded a certain size.

Tell that to Wikimedia; I've used libxml's SAX parser in the past to parse 80GB+ XML dumps.

Some formats are this and they are historical formats.

This process usually goes:

1. "This XML library is way bigger than what I need, I'll write something more minimal for my use case"

2. write a library for whatever minimal subset you need

3. crash report comes in, realise you missed off some feature x. Add support for some feature x.

4. Bob likes your library. So small, so elegant. He'd love to use it, if only you supported feature y, so you add support for feature y.

...

The end result is N+1 big, complex XML libraries.

Obviously I'm being a bit obtuse here, because you might be able to guarantee some subset of it in whatever your specific circumstances are, but I think it's hard to do over a long period of time. If people think you're speaking XML, then at some point they'll say "why don't we use this nice XML feature to add this new functionality".


If you want to read some XML quickly, there's always RapidXML and PugiXML, but if you need a big gun, there's libXML.

The former are blazingly fast; in the real world they can parse practically instantly. So alternatives do exist for different use cases.


> Obviously Im being a bit obtuse here

No. This is the first good explanation for the library hell in Linux these days.


XML is used in countless standards. You can't just not use it if you interact with the outside world. Every XML feature is still in the many XML libraries because someone has a need for it, even things like external entities.

Maybe you don't need libxml2 specifically (good luck finding an alternative to parse XML in C and other such languages though), but "I don't like the complex side of XML so let's pretend it doesn't exist" doesn't solve the problem most people pick libxml2 for. It's the de-facto standard because it supports everything you could possibly need.


Exactly. For example if you need to integrate SAML, you have to support a significant subset of several XML specs. It may be possible to write a SAML-only library that supports less, but it's not clear it would be any simpler.

It's common for both the producer of XML and the consumer of XML for any given application to be using a dramatically smaller subset of the standard. Well-engineered software is intentional about this and documents those limitations. Under these conditions it's perfectly valid to use a library that only supports this subset.

Furthermore, those subsets have natural "fault lines", influenced by the burden:utility ratio. This makes consumers and producers naturally coordinate on a subset. It's not, as another commenter here said, that everyone needs different features.

My argument is therefore that there is value in having different libraries for different subsets - with the smallest subset being much simpler than libxml2.


You shouldn't be downvoted; it's just the truth, no matter how unfortunate.

There is always libexpat, which works very well, also for the streaming case.

Expat is suffering from similar problems: https://github.com/libexpat/libexpat/blob/7643f96bd5b9f5d3b2...

> <blink>Expat is UNDERSTAFFED and WITHOUT FUNDING.</blink>
>
> The following topics need additional skilled C developers to progress in a timely manner or at all (loosely ordered by descending priority):


Yep, another case of XKCD 2347, unfortunately.

Gratuitous use of XML does sometimes smell like a "now you have two problems" kind of affair.

Combined with time travel it's mind-blowing.

Stumble upon some corrupted memory? Just put a watchpoint on it, and run the program backwards. Two weeks just turned into two minutes.


What time traveling debugger(s) do you recommend? I'm a regular GDB user and keep meaning to try rr.


Since you're already a GDB user, rr will probably feel comfortably familiar.

The home page at https://rr-project.org/ already has enough example usage to get you going.
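For the corrupted-memory scenario mentioned upthread, a typical session is just a few commands (standard rr/GDB usage; the program and variable names here are placeholders):

    $ rr record ./myprog     # run once; rr records the execution deterministically
    $ rr replay              # replays the recording under gdb
    (rr) continue            # run forward until the corruption is visible
    (rr) watch -l buf[17]    # hardware watchpoint on the corrupted location
    (rr) reverse-continue    # run backwards to the exact write that clobbered it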


rr is excellent. the only problem is it doesn't work on Windows


It's a little odd to use GDB on Windows. Have you considered either the Visual Studio debugger, or WinDbg?

WinDbg has time travel debugging, and is arguably 'more excellent': https://learn.microsoft.com/en-gb/windows-hardware/drivers/d...


Super cool project. Sorry if you explained this already, I don't know what "Dijkstra accurate" means. How does it know if an object is truly available to be reclaimed, given that pointers can be converted to integers?


> given that pointers can be converted to integers?

Because if they get converted to integers and then stored to the heap, they lose their capability. So accesses through them will trap, and the GC doesn't need to care about them.

Also, it's not "Dijkstra accurate". It's a Dijkstra collector in the sense that it uses a Dijkstra barrier. And it's an accurate collector. But these are orthogonal things.


Hmm, I'm still not understanding the bit of information that I'm trying to ask about.

Let's say I malloc(42) then print the address to stdout, and then do not otherwise do anything with the pointer. Ten minutes later I prompt the user for an integer, they type back the same address, and then I try to write 42 bytes to that address.

What happens?

Edit: ok I read up on GC literature briefly and I believe I understand the situation.

"conservative" means the garbage collector does not have access to language type system information and is just guessing that every pointer sized thing in the stack is probably a pointer.

"accurate" means the compiler tells the GC about pointer types, so it knows about all the pointers the type system knows about.

Neither of these are capable of correctly modeling the C language semantics, which allows ptrtoint / inttoptr. So if there are any tricks being used like xor linked lists, storing extra data inside unused pointer alignment bits, or a memory allocator implementation, these will be incompatible even with an "accurate" garbage collector such as this.

I should add, this is not a criticism, I'm just trying to understand the design space. It's a pretty compelling trade offer: give up ptrtoint, receive GC.


I think the answer in your example is that when you cast the int into a pointer, it won’t have any capabilities (the other big Fil-C feature?) and therefore you can’t access memory through it.


Yes!


To expand on the capabilities thing: https://fil-c.org/invisicaps_by_example

In particular, check out the sections called "Laundering Pointers As Integers" and "Laundering Integers As Pointers".
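The laundering case boils down to something like this sketch (the behavior comments follow the capability explanation upthread):

    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        char *p = malloc(42);
        uintptr_t addr = (uintptr_t)p; // the integer keeps the address, not the capability
        char *q = (char *)addr;        // under Fil-C, q carries no capability
        q[0] = 'x';                    // Fil-C traps here; ordinary C would scribble on the heap
        free(p);
        return 0;
    }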


Thanks!

> This is because the capability is not stored at any addresses that are accessible to the Fil-C program.

How are they stored? Is the GC running in a different process?


Out of curiosity, does this idiom work in fil-c?

https://github.com/protocolbuffers/protobuf/blob/cb873c8987d...

      // This somewhat silly looking add-and-subtract behavior provides provenance
      // from the original input buffer's pointer. After optimization it produces
      // the same assembly as just casting `(uintptr_t)ptr+input_delta`
      // https://godbolt.org/z/zosG88oPn
      size_t position =
          (uintptr_t)ptr + e->input_delta - (uintptr_t)e->buffer_start;
      return e->buffer_start + position;
It does use the implementation-defined behavior that a char pointer + 1, cast to uintptr_t, is the same as casting to uintptr_t and then adding 1.


Yeah that should just work

Code that strives to preserve provenance works in Fil-C


Very cool. Hardware ASan did not catch the pointer provenance bug in the previous implementation of that code because it relies on tag bits, and the produced pointer was bit-identical to the intended one. It sounds like Fil-C would have caught it because the pointer capabilities are stored elsewhere.


What hardware do you need for hardware ASan? I'm so out of the loop that I haven't heard of it before.



Thanks!


Pointers are always integers, which can be interpreted as pointers.


Baseless accusation. Do you by chance have affiliation with a "competing language"?

checks profile

there it is


> "competing language"

Just a consumer. Hope that it is ok to choose or like something else. Please don't be angry. If it counts for anything, fine with things going well for you.

> "Baseless accusation"

No sir, not baseless. At one time, the pocket-watching of competitors was real, and them getting just $927/month (figure from your .me site and many other places) from their happy fans and loyal supporters seemed too much to bear.

Now that one's personal pocket is fat with $12,500/month, perhaps we can hope to see goodwill and grace extended to others besides one's self. The world is big enough for creators to be professionally respectful of each other and to allow consumers to choose what they like.


Can we have the source?


Made an effort to improve that this morning. How's it looking for you now?


For comparison, in the same year Rust Foundation spent $567,000 on this category - more than ZSF's entire expenses for everything. That's 38x more money.

Source: https://rustfoundation.org/wp-content/uploads/2025/01/Annual...


The report says that includes two full-time infrastructure engineers. Which isn’t crazy given Rust infrastructure’s userbase and traffic.

$15k seems pretty lean to me for Zig since it includes hardware purchases.


Hi Andrew - from a PR perspective, I think now that Zig has enough attention, it may be better to stop doing comparisons with, or even mentioning, Rust.

Rust was hated not because of Zig or any other language, but because of their Rust Evangelism Strike Force. Someday these comparisons may backfire. Zig can stand on its own now, and is already quite widely known. May be best to have peace rather than war.

Just my (maybe useless) 2 cents.


Rust is not hated. Rust is a widely loved and successful project, growing more popular every year.

There's no war here, only facts that help an ignorant person gain perspective about how much things cost.


Agree with the sentiment, as Zig is continually involved in many other wars (to various degrees) with languages like Vlang, Dlang, C3, Jai, etc...

Of course comparisons are inevitable, and can be helpful, but then let consumers choose what they like and find to be useful. Leadership should not be seen by everyone as at the forefront of throwing gas on the flames, displaying unprofessional behavior, or allowing themselves to become known as the face of toxicity.


Agreed. The leaders of Zig should stop bringing up competitors unless specifically relevant.

The RESF became unbearable because Rust leaders quietly encouraged language wars online, and especially offline. Zig should avoid that fate.


Also agree with this. I don't see Rust and Zig as that similar: the people building them, the governance behind them, the use cases, and just the overall vibe.

I don't find myself choosing between Rust and Zig after using them both a decent amount.


>That's 38x more money.

Rust gets at least a 1000x more usage than Zig, so their infrastructure costs are not as bad in comparison.


> Rust gets at least a 1000x more usage than Zig

1. I highly doubt your ballpark estimate.

2. I don't think CIs care that much how many users a language has, they care about the number of computations they need to run for each commit/merge.


I don't think that ballpark estimate is that far-fetched. Usage isn't a reflection of the merits of the two languages; Rust is simply older. It reached 1.0 10 years ago, and it is further along the adoption curve. Zig is yet to reach 1.0 and has mostly early adopters like Bun, TigerBeetle, and Ghostty. I have no doubt that usage will substantially increase once Zig reaches 1.0.

To give you a sense of Rust’s growth, check out this proxy for usage (https://lib.rs/stats). Usage roughly doubled each year for 10 years. 2^10 = 1,024. It’s possible Zig could manage a similar adoption rate after reaching 1.0, but right now it’s probably where Rust was in 2015.

> CIs don’t scale with the number of users

Each Rust release involves a crater run, where they try to compile every open source Rust repo to check for regressions. This costs money and scales with the number of repos out there. But it is true, this only happens once in 6 weeks.

But I think the factor that makes a bigger difference is that Rust's code bases are larger and CI takes longer to run on each commit.


> is that Rust's code bases are larger

And Rust compilations are much slower too.


crater runs are constantly running [1]. Every time there is a PR with any danger of causing a regression, a crater run can be requested.

[1]: https://crater.rust-lang.org/


My mistake, sorry!


1000x seems low to me.

Rust is used in production by many companies out there.


In every Zig thread, someone needs to mention Rust /s.


Hello, I am the author of the post.

The expenses listed here account for 100% of the expenses paid by the organization. If you go fetch the 990 from the IRS and look at the totals, it will match dollar-for-dollar, cent-for-cent. So if I deleted taxes from this report, you would hopefully all be wondering: where did that $13,089.07 go?

Happy to answer any other questions.

Edit: I see the question is about income tax vs payroll tax categorization. As this isn't my area of expertise and it's getting late, I'll wait until tomorrow to check carefully and make any necessary clarifications.


i think the question is more of "is that payroll/employment tax"? the way it's written uses the word "income tax" carefully noting the distinction. you may want to edit it to say "payroll tax", which makes more sense.


I think I understand from the other comments. I never considered that it is technically an expense to withhold the income taxes of employees and then pay them to the IRS.


I'm going to go out on a limb and suggest that number is not actually employee income tax, even though the report seems to suggest the same. Employee income tax is an expense of the employee, not the employer. If it is income tax withholding, it's way too small for $150k+ of employee comp, which is another reason I don't think it's that. Instead, I expect this tax line item to be primarily the employer share of FICA tax, which is typically considered a payroll tax instead of an income tax.
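Back-of-the-envelope, assuming the standard 7.65% employer share of FICA (6.2% Social Security + 1.45% Medicare):

    7.65% of $150,000   ≈ $11,475
    $13,089.07 / 0.0765 ≈ $171,100 of covered wages

So the figure is in the right range for employer FICA on that compensation, while withholding on $150k+ would be far larger.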


there is still payroll tax on top of that though, and c3s are not exempt


Apple.com advertising a Mac Mini:

> Built for Apple Intelligence.

> 16-core Neural Engine

These Xcode release notes:

> Claude in Xcode is now available in the Intelligence settings panel, allowing users to seamlessly add their existing paid Claude account to Xcode and start using Claude Sonnet 4

All that dedicated silicon taking up space on their SoC and yet you still have to input your credit card in order to use their IDE. Come on...


To run a model locally, they would need to release the weights to the public and their competitors. Those are flagship models.

They would also need to shrink them way down to even fit. And even then, generating tokens on an apple neural chip would be waaaaaay slower than an HTTP request to a monster GPU in the sky. Local llms in my experience are either painfully dumb or painfully slow.


Hence the "come on".


Not if they knew how terrible it would be.


"Apple Intelligence", at least the part that's available to developers via the Foundation Models framework is a tiny ~3B model [0] with very limited context window. It's mainly good for simple things like tagging/classification and small snippets of text.

[0] https://github.com/fguzman82/apple-foundation-model-analysis


Yes, but the Foundation Model framework can seamlessly use Apple's much larger models via Private Cloud Compute or switch to ChatGPT.

When macOS 26 is officially announced on September 9, I expect Apple to announce support for Anthropic and Google models.


I bet Apple are working on it, it’s just not ready yet and they want to see how much people actually use it.

It's the Apple way to screw the 3rd party and replace them with their own thing once the ROI is proven (not a criticism; this is a good approach for any business where the capex is large…)


Local models and any OpenAI-compatible APIs are available to the Xcode Beta assistant. This is just a dedicated “sign in with x” rather than manual configuration.


Trust me, you wouldn’t want to use a model for agentic code editing that could fit on a Mac mini at this stage.


A 128GB Mac Mini M5 would be sweet.

