windward's comments

Seems shortsighted (I'm not saying you're wrong, I can imagine Intel being shortsighted). Surely the advantage of artificial segmentation is that it's artificial: you don't double up the R&D costs.

Maybe they thought they would just freeze x86 architecturally going forward and Itanium would be nearly all future R&D. Not a bet I would have taken but Intel probably felt pretty unstoppable back then.

Bjarne's just a guy; he doesn't control how the C++ committee votes and doesn't remotely control how you or I make decisions about style.

And boiling these guidelines down to style guides is just incorrect. I've never had a 'nit: cyclomatic complexity, and uses dynamic allocation'.


A lot of words for a 'might'. We don't know what caused the downtime.

Not this time; but the rewrite was certainly implicated in the previous one. They actually had two versions deployed; in response to unexpected configuration file size, the old version degraded gracefully, while the new version failed catastrophically.

Both versions were taken off guard by the defective configuration they fetched; it was not a case of a fought-and-eliminated bug reappearing, like in the blog post you quoted.

Those aren't isomorphic. The C spec says `is_divisible_by_6` short-circuits. You don't want the compiler optimising away null checks.

https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf

6.5.13, semantics


So you claim that the compiler "knows about this but doesn't optimize because of some safety measures"? As far as I remember, compilers don't optimize math expressions / brackets, probably because the order of operations might affect the precision of ints/floats, and also because of complexity.

But my example is trivial (x % 2 == 0 && x % 3 == 0 is exactly the same as x % 6 == 0 for all C/C++ int), yet the compiler produced different outputs (and most likely is_divisible_by_6 is slower). Also, what null checks (do you mean 0?) are you talking about? The denominator is not null/0. Regardless, my point about not over-relying on compiler optimization (especially for macro algorithms (O notation) and math expressions) remains valid.


> the order of operations might affect the precision of ints/floats

That's only the problem of floats, with ints this issue doesn't exist.

Why do you write (x % 2 == 0 && x % 3 == 0) instead of (x % 2 == 0 & x % 3 == 0), when the latter is what you think you mean?

Are you sure that dividing by 6 is actually faster than dividing by 2 and 3? A division operation is quite costly compared to other arithmetic, and 2 and 3 are likely to have special optimizations (2 is a bitshift), which isn't necessarily the case for 6.


> That's only the problem of floats, with ints this issue doesn't exist.

With ints the results can be dramatically different (often even worse than floats) even though in pure mathematics the order doesn't matter:

  1 * 2 * 3 * 4 / 8 --> 3
  3 * 4 / 8 * 1 * 2 --> 2
This is a trivial example, but it shows why it's extremely hard for compilers to optimize expressions and why they usually leave this task to humans.

But x % 2 == 0 && x % 3 == 0 isn't such a case: swapping the operands of && has no side effects, nor does swapping the operands of each ==.

> Are you sure, that dividing by 6 is actually faster

Compilers usually transform divisions into multiplications when the denominator is a constant.

I wrote another example in other comment but I'll write again.

I also tried this

  bool is_divisible_by_15(int x) {
      return x % 3 == 0 && x % 5 == 0;
  }

  bool is_divisible_by_15_optimal(int x) {
      return x % 15 == 0;
  }
is_divisible_by_15 still has a branch, while is_divisible_by_15_optimal does not

  is_divisible_by_15(int):
        imul    eax, edi, -1431655765
        add     eax, 715827882
        cmp     eax, 1431655764
        jbe     .LBB0_2
        xor     eax, eax
        ret
  .LBB0_2:
        imul    eax, edi, -858993459
        add     eax, 429496729
        cmp     eax, 858993459
        setb    al
        ret

  is_divisible_by_15_optimal(int):
        imul    eax, edi, -286331153
        add     eax, 143165576
        cmp     eax, 286331153
        setb    al
        ret
My point is that the compiler still doesn't notice that the 2 functions are equivalent. Even when choosing 3 and 5 (to eliminate the questionable bit-check trick for 2), the 1st function appears less optimal (more code + a branch).

> in pure mathematics the order doesn't matter

I don't perceive that as an ordering issue. "Pure mathematics" has multiple definitions of division; what we see here is the one you learn in class 1: integer division. The issue here is not associativity, it is that the inverse of an integer division is NOT integer multiplication; the inverse of division is the combination of multiplication and the modulo. Integer division is an information-destroying operation.

> I wrote another example in other comment but I'll write again. [...]

Yes, this is because optimizing compilers are not optimizers in the mathematical sense, but heuristics and collections of folk wisdom. This doesn't make them any less impressive.


x % 3 == 0 is an expression without side effects (the only cases that trap on a % operator are x % 0 and INT_MIN % -1), and thus the compiler is free to speculate the expression, allowing the comparison to be converted to (x % 2 == 0) & (x % 3 == 0).

Yes, compilers will tend to convert && and || to non-short-circuiting operations when able, so as to avoid control flow.


Any number divisible by 6 will also be divisible by both 2 and 3 since 6 is divisible by 2 and 3, so the short-circuiting is inconsequential. They're bare ints, not pointers, so null isn't an issue.

So how are they not isomorphic?


That only matters for things with side effects, and changing the `&&` to `&` doesn't get it to optimize anyway.

You can check - copy the LLVM IR from https://godbolt.org/z/EMPr4Yc84 into https://alive2.llvm.org/ce/ and it'll tell you that it is a valid refinement as far as compiler optimization goes.


A user naively snitching on the project between their 2nd and 3rd posts is a really great bit.

Snitching? Talk about making a tiny email a big deal. Atari already knowing about OpenRCT2 since before the email makes the forcibly induced drama even more cringy.

>we are in a very weak economy especially outside of the leading AI firms

Isn't that part of the cause? It sucks up so much investment, there's nothing left for anything else. Or at least nothing without such perceived upside.

Either they pull it off and you're replaced by AGI, or they fail to pull it off and you lose your job to the resulting economic implosion.


> Isn't that part of the cause?

Probably not significantly, IMO.

> It sucks up so much investment, there's nothing left for anything else.

Tariff-inflated input costs combined with weak consumer demand are the reason the rest of the economy is slow, and the reason there aren’t places with strong and near-term upsides for investment dollars to go. AI being the only thing attracting investment is the effect, not the cause.


My sense is that AI is the one area where boards cannot justify cutting back on investment. If there were no AI boom the rest of the economy would still be getting hammered.

There is still a lot of tech investment, deal making, and hiring going on. It has just left the USA.


You're zooming out and comparing this negative sentiment with similar times in the past. I think that's wise. I think we should keep zooming out to other industries. Imagine you're an engineer for GM in Detroit in the 70s - would you consider the mean to be your contemporary middle-class lifestyle, or what it is in 2025? Similar for steel and semiconductors.

It goes for other places, too. Is the US's financial strength of today its mean, or is it where the UK was pre-Suez Crisis? Where Japan was in the 80s?


Let’s hypothetically say we’re all doomed. Say our jobs are going the way of manufacturing in the 70s-80s. What’s the play then?

If I was a new college grad I’d stay away from programming, but that’s been true for a while regardless of offshoring; the job market is just too soft until managers figure out they are killing their senior engineer pipeline and go back to investing in people.

What about the people already in industry? What’s our play?

Live under your means and save as much as possible? Already doing it.

Learn a new trade? Does not feel realistic while working a demanding full time job already, but if things get bad enough, sure.

Use the political apparatus to protect my employment? The system is built to prevent me from doing that. Fighting the system very well could put my employment at risk, which defeats the whole “get what I can while I still can” plan, if I assume doom and gloom on the horizon. I’m also unlikely to actually change anything by taking that risk, so the ROI is horrible.

Is there some other outcome or plan of action here I’m not seeing?


Good question. I've gone with:

>Live under your means and save as much as possible?

which, while obvious, isn't being done by all of us.

A part of me gets angry that collective action was so unpopular thanks to the view that it dragged down those who could excel individually. Every time I see software people act powerless in front of these steamrolling, enormous tech giants that control every facet of our lives, I think about how much power we had - and are on the verge of giving up.

I also try to confront the future, rather than turn a blind eye to it. Can I be happy and find self-actualisation without this identity and financial status? That's a question everyone should think about regardless of what happens.


I love the idea of FIRE as a life goal and driving financial strategy. The core principle is that you save up enough money that the dividends from your investment returns (the FI part of FIRE) are enough to live off forever.

If you hit FIRE, awesome, you’re free from ever caring about offshoring or RTO or AI or whatever again.

If you don’t hit it, you’re sitting on a pile of money when a rainy day comes.


You've put that command in quotation marks in three comments on this topic. I don't think it's as prevalent as you're making out.


We've known that since the first assembler.


BSD is given as an example of the cathedral in the book.


indeed, iirc, the 1998 usenix presentation by mckusick et al seems to be an earlier record (than the book) of these s/w development models.

fwiw, both the gnu project and freebsd champion this (cathedral style of) development model.

however, i don't think linux or bsd is *purely* either approach.

w.r.t `user-facing software`, which seems to be the central thesis of gp, both the alternates (bsd/linux) offer an almost identical choice.

