
The tragic part is how brief the period of time was between “ascii and a mess of code pages” and the problem actually getting solved with Unicode 2.0 and UTF-8.

Unicode 1.0 was in 1991, UTF-8 happened a year later, and Unicode 2.0 (where more than 65,536 characters became “official”, and UTF-8 was the recommended choice) was in 1996.

That means if you were green-fielding a new bit of tech in 1991, you likely decided 16 bits per character was the correct approach. But in 1992 it started to become clear that maybe a variable-width encoding (with 8 bits as the base character size) was on the horizon. And by 1996 it was clear that fixed 16-bit characters were a mistake.

But that 5-year window was an extremely critical time in computing history: Windows NT was invented, so was Java, JavaScript, and a bunch of other things. So, too late, huge swaths of what would become today’s technical landscape had set the problem in stone.

UNIXes only ended up with the “right” technical choice because it was already too hard to move from ASCII to 16-bit characters… but that laziness in moving off of ASCII ultimately paid off, as it became clear that 16 bits per character was the wrong choice in the first place. Otherwise UNIX would have had the same fate.
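A minimal C sketch of why the base unit matters: a character inside the old 16-bit range takes two 8-bit units in UTF-8, while one outside it takes four (and would need a surrogate pair in UTF-16). The byte escapes are just the standard UTF-8 encodings.

```c
#include <stdio.h>

int main(void) {
    /* U+00E9 ("é") fits in one 16-bit unit; U+1F600 does not, so a fixed
       16-bit encoding can't hold it in a single unit (UTF-16 needs a
       surrogate pair), while UTF-8 just spends more 8-bit units. */
    const char e_acute[]  = "\xC3\xA9";              /* UTF-8: C3 A9 */
    const char grinning[] = "\xF0\x9F\x98\x80";      /* UTF-8: F0 9F 98 80 */

    printf("U+00E9  -> %zu bytes in UTF-8\n", sizeof e_acute - 1);   /* 2 */
    printf("U+1F600 -> %zu bytes in UTF-8\n", sizeof grinning - 1);  /* 4 */
    return 0;
}
```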


For a while, the brain-dead UTF-32 encoding was popular in the Unix/Linux world.

Exactly: Long live "wchar_t".

I’ll take a stab: because twitter isn’t reality, it’s a microcosm. A tempest in a teapot. It’s something that if you step outside of, you realize it’s not the real world.

Leaving social media can be thought of as emerging from the cave: you interact with people near you who actually have a shared experience with yours (if only geographically) and you get a feel for what real-world conversation is like: full of nuance and tailored to the individual you’re talking to. Not blasted out to everyone to pick apart simultaneously. You start to realize it was just a website and the people on it are just like the shadows on the wall: they certainly look real and can be mesmerizing, but they have no effect on anything outside of the cave.


Reddit even more so, that's why you see these 'touch grass' comments littered around.

> Is this a US phenomenon?

The answer to this question is always “no”. Regardless of the subject. Basically 100% of the time.

At my local grocery store everyone returns their carts. In the other place in the US I lived 10 years ago, there were loose carts everywhere.

The US is a very, very big country. Really more like 50 big countries. With huge variation in culture, income, background, etc. There’s barely anything you can say that applies to the whole country, regardless of the subject.


someone clever ought to do some kind of statistical analysis and figure out what hidden variables are causing these differences.

> what hidden variables are causing these differences

Just one variable: the Jerk Scale.


what are you hinting towards?

what are you hinting towards??

It was never said that it was a phenomenon across the whole country.

It is a US phenomenon, yes. When it exists in other countries, it's because of Hollywood exporting American culture.


Is that really your position? Do you ascribe all bad behavior globally to US cultural exports?

not all bad behavior. why this generalizing frenzy?? lmao

Just seems wild to blame the USA for exporting cart culture? What's the mechanism with Hollywood? I missed the Marvel extended shopping cart universe.

Where did the USA import cart culture from? Mexico?

It seems likely to me that many countries are capable of domestically manufacturing selfish assholes.


I'm trying to think back to the last time I saw a shopping cart in a movie. I think it was probably Terms of Endearment, but I don't think that the cart made it outside.

Because it doesn’t solve the problem, it merely works around it. Solving the problem would mean coming up with a battery chemistry that doesn’t suffer in the cold. Instead the answer is “just don’t have it get cold”.

It’s not to say a hack/workaround isn’t useful, and I would say that it’s perfectly acceptable to simply use a battery heater in the winter. But calling it “solved” confuses solutions and workarounds, and that’s an intellectually dishonest thing to do.


From the perspective of the user, the battery being cold does not matter. How it's made not to matter, I don't care about; for the user, it is solved.

Now, we can go into the weeds as to what constitutes solved and we might agree or disagree.


> you just have to do maintenance through manual find-and-replace now

Do you? It doesn't seem even remotely like an apples-to-apples comparison to me.

If you're the author of a library, you have to cover every possible way in which your code might be used. Most of the "maintenance" ends up being due to some bug report coming from a user who is not doing things in the way you anticipated, and you have to adjust your library (possibly causing more bugs) to accommodate, etc.

If you instead imagine the same functionality being just another private thing within your application, you only need to make sure that functionality works in the one single way you're using it. You don't have to make it arbitrarily general purpose. You can do error handling elsewhere in your app. You can test it only against the range of inputs you've already ensured are the case in your app, etc. The amount of "maintenance" is tiny by comparison to what a library maintainer would have to be doing.

It seems obvious to me that "maintenance" means a much more limited thing when talking about some functionality that the rest of your app is using (and which you can test against the way you're using it), versus a public library that everyone is using and needs to work for everyone's usage of it.


> If you're the author of a library, you have to cover every possible way in which your code might be used.

You don't actually. You write the library for how you use it, and you accept pull requests that extend it if you feel it has merit.

If you don't, people are free to fork it and pull in your improvements periodically. Or their fork gets more popular, and you get to swap in a library that is now better-maintained by the community.

As long as you pin your package, you're better off. Replicating code pretty quickly stops making sense.


It's a rare developer (or human for that matter) who can just shrug and say "fork off" when asked for help with their library.

It really depends. If it's the occasional request and I can bang out a solution in 30 minutes, I'll help. But I'll also weigh how much maintenance burden it'll be going forward. And if I won't do it myself, I'd always give some quick pointers.

Maintenance demands (your library X doesn't work with Python Y, please maintain another version for me) I'd shrug off. Wait for me, pay me, or fix it yourself.


It would be healthy if it became more common; in fact, the privately-owned public garden model of the Valetudo project [1] is the sanest way for FOSS maintainers to look at their projects.

[1]: https://github.com/Hypfer/Valetudo#valetudo-is-a-garden


The DMCA is from 1998. I don’t think Larry and Sergei were taking a break from inventing google so they could lobby congress from their Stanford dorm room.


My question: is there a concise theory of game design that properly explains why cutscenes are fucking stupid?

There are a lot of AAA games out there that very clearly seem like the developers wish they were directing a movie instead. Sure, there’s loads of cutscenes to show off some cool visuals. But then they seem to think “ok well we need to actually let the player play now”, but it’s still basically a cutscene, with extra steps: Cyberpunk 2077 had this part where you press a button repeatedly to make your character crawl along the floor and then take their pills. It’s just a cutscene, but where you essentially advance frames by pressing the X button.

Then there’s quick time events, which are essentially “we have a cutscene we want you to watch, but you can die if you don’t press a random button at a random time”, and they call it a game.

If it’s not that, it’s breaks in play where they take control away from you to show you some cool thing, utterly taking you out of the experience for something that is purely visual. I usually shout “can I play now? Is it my turn?” at the screen when this happens.

But I digress… I essentially hate games nowadays because this or similar experience seems to dominate the very definition of AAA games at this point. None of them respect your time, and they seem to think “this is just like a movie” is a form of praise, when it’s exactly the opposite of why I play games.


I worked on an AAA game where the cinematics group worked in a different location from the main development team. We met a few times very early during preproduction, then cue about three years of work. We got the completed videos pretty deep into development (nothing major was going to change in either the cinematics or the gameplay), and after viewing them we were wondering what the cinematics had to do with the game we made. To be fair, the cinematics looked very good for the time, but I just plugged them into the game’s framework to play at the appropriate points as one of my milestones. All these videos were skippable after one viewing, and I only viewed them completely just to QA the rest of the game when I was ahead of schedule.

I don't think it's a modern thing. I tried playing the original Kingdom Hearts on my PS2 but gave up because there are so many mandatory, unskippable videos during combat. Not going back as far, the Bayonetta series has a ton of quick-time sequences that I hate: have to beat an enemy, die due to slow reflexes and an unexpected quick-time event, repeat and hopefully get the timing right on the button press, which is in sharp contrast to the otherwise fluid combat in Bayonetta.

There was also at one point in ancient history a very big deal made about having cinematics integrate seamlessly into gameplay, using the same engine for both instead of prerecorded video sequences. So then games did that just as a point of pride, and having the cinematics in the game engine made it possible for non-specialists to add cinematics into a game's flow (or storyboard them and leave the final result to specialists).


I think different people value different things in entertainment. For you, the "cinematic" aspects of the media are worthless - but for others, the whole "interactive cinematic spectacle" is worth it even if it comes at the expense of interactivity or the ability to execute skills. Take the COD campaigns for example - notoriously, some of the turret-vehicle-chase sequences don't actually require any user input to succeed at, but a certain class of player still enjoys them because they're in it for different things than you.


Sounds like you're still bitter over Dragon's Lair and other LaserDisc games.

But like AAA has never been an adjective that meant good or fun. Just that the budget is big.

Cut scenes are an opportunity for a change of pace and to tell the story in a different way, or a way to emphasize a game action. When you get a touchdown in Tecmo Bowl, you have a little cut scene, which is nice (but gets repetitive). The cut scenes in a Katamari game give you some sort of connection to the world, but you can always skip them.

I think I've managed to skip most big budget games for most of my gaming life. That's fine, lots of other customers for those, I'll stick to the games I like.


Cut scenes can also be a valuable tool for giving information to the player:

- a camera flight to give an overview of the map

- show the location of the final boss

- hint at future missions

- provide a clue for solving the puzzle

- etc.


> is there a concise theory of game design that properly explains why cutscenes are fucking stupid?

Yes. In general it's because they're made by a different team, with different incentives, working to a different schedule.

They're often made using an earlier version of the game lore and story. Due to the massive effort required to make changes and render frames, they often don't match up with late-breaking changes made by the game team.

But sometimes you get lucky and the cinematics team excels. I worked with Blizzard's cinematics team in the '90s, and those spectacular folks produced an amazing body of work.


Half-Life got it right. The cutscene plays but you can still run around and do whatever you want (including not listening).


This isn't the first time I've seen this opinion, and while I share the disdain for quicktime events, and I agree many cutscenes in the most popular games don't work, I don't understand being against the whole concept of cutscenes.

What exactly is the right way to tell a good story through a game? The only other ways I've seen are:

1) Text boxes or Bethesda-style dialogue trees

2) Dark-souls style slow-drip storytelling.

Although they can both work, I don't think I prefer either one over cutscenes. (1) especially is more like something I'll forgive rather than like because I know cutscenes are difficult for smaller teams and limiting for games that emphasize player choice.

It's one of the reasons I liked Baldur's Gate 3 so much -- suddenly the cinematic cutscenes don't feel like a tradeoff for sacrificing choice.


You should play more indie games. Not only are they more gameplay focused, there is an overabundance of great games at bargain prices.

I just picked up Prodeus, if you like games like old Doom and Quake you’ll probably love it.

Also, From Software games (Dark Souls, Elden Ring, Sekiro, Armored Core) are basically all gameplay. Cutscenes are kept to a minimum and the gameplay is tight AF.


> But I digress… I essentially hate games nowadays

This is not exactly a new phenomenon. The final cutscene in Metal Gear Solid 4 (2008) is 71 minutes long (Guinness world record). The total cutscenes add up to around 9 hours according to a Reddit user. Maybe more games are doing this now compared to 15 years ago, but I wouldn't bet on it.


Bad take IMO. Cutscenes are fine. Many are beloved, even.

Taking agency away from the player is usually a bad thing, so it's not something you want to do when the player has other goals to work on. They are a fine tool to break up the action, and games are also about the story and world building, so expositional sections are a natural thing.

It's important not to mess with the game's pacing, though.

After a heavy boss fight where the player doesn't even know what their next goal is anyway? Perfectly fine time for some exposition.

Running past an NPC on the way to do something? That's a horrible time to whip around the camera and tell the player something.

AAAs have huge momentum so you'll often see plot points and exposition that needs to be shoehorned in to fix some writing issue or what have you. Of course, you also just have game directors making bad decisions.


Agreed. Cutscenes are perfectly fine things to have in a game. Ninkendo is writing like a personal preference (not liking cutscenes) is a universal law of game design, but that is not at all the case.


As a game designer I’ve struggled with the topic of cutscenes and have landed on the side that they are not inherently bad design. Advancement of a story is a form of progression (THE form of progression in a narrative game) and the release of new story beats, or any new content in general, can be used to reward the player. That’s not to say that they can’t be done badly - many are.

The thing about cutscenes, as with most aspects of AAA games, is that they test well in their target market. Cutscenes aren’t exactly cheap to make, especially if acted. They wouldn’t do them if they weren’t popular.

But it’s perfectly fine that you, like many (and me), don’t like cutscenes. Embrace that and accept that perhaps those games aren’t for you, because there is so much choice out there that you will certainly be able to find things more to your tastes.


Back in the day, I loved the cutscenes Privateer II (starring a very young Clive Owen from Children of Men (I believe) as aforementioned privateer, bless) included, not the ones with any people acting very badly in them, but the rendered cutscenes that played the first time you arrived at a new planet or spaceport, that showed you, hey, this place is a different place.

I played that in my teens, and 30 years later, I can still remember the name of the peaceful agricultural planet that had blimps as their main form of transportation - Bex.

Why? Because the cutscene played and I was like "Wow, look at this place, this is nothing like New Detroit".

And it didn't make you (IIRC) watch the cutscenes. Every. Damn. Time you landed thereafter.


> My question: is there a concise theory of game design that properly explains why cutscenes are fucking stupid?

Two things to consider regarding cut scenes. First, sometimes they are mandated by the game's story writers and backed up by artists wanting to show off. Second, and more importantly from a game developer's perspective, they are a useful tool for hiding scene-loading I/O so that the customer does not notice a nontrivial delay.


> First, sometimes they are mandated by the game story writers and backed up by artists wanting to show off

How is that possibly an excuse? It reads like you’re agreeing with me. I could give a shit less about the feelings of some artist that wants to “show off” to me when they’re getting in the way of me enjoying the maybe 20 minutes of time I have to try and play a game between other obligations.

Games like those seem to be designed for teenagers who have hours or even days on end to sink into a game. I’m a 40+ year old dad with negative time on my hands. The gaming industry has basically left me behind.


I know exactly what you mean. Lots of video games really do feel more like movies these days. Cyberpunk drove me absolutely crazy with all the cut scenes


I think you summed it up yourself, because cutscenes are trying to turn this medium into that of movies.


They're not stupid -- they're feedback. You get them as a reward for having done something, usually.

But they are also not gameplay, obviously.

https://www.raphkoster.com/2012/01/20/narrative-is-not-a-gam... https://www.raphkoster.com/2012/01/26/narrative-isnt-usually...

...and maybe https://www.raphkoster.com/2013/03/13/why-are-qtes-so-popula... since you dislike QTE's. :)


I would be fine if cutscenes were feedback, or reward for gameplay.

But that doesn’t explain games where as soon as you start it up for the first time, there’s a minimum of 20 minutes of (often unskippable) cutscenes before you can even control a character. Or cutscenes at the beginning of a level/mission where you kinda have to watch it to know what’s going on at all, but they’re like 10 minutes long, so you’re gonna be there a while. Sometimes even those ones are unskippable. I remember playing Jedi Fallen Order and I just left the couch and cleaned the kitchen for a while because I could not have given a shit less about the story they were pushing on me, and I came down and it was still going.

Games need to respect my time. You turn the NES on, press start, and there’s Mario on the left side of the screen. You’re playing now. You turn on Forza Horizon 6 or whatever and it’s 20 minutes before you can control a car, at minimum. And that’s a fucking racing game, with no story I would ever possibly give a shit about.


This goes back to the motivations thing. For those who are motivated by narrative stuff, that opening works well. It sets up uncertainties and ambiguities that engage curiosity and prediction.

But you don't like those sorts of problems as much (or don't want them in that moment). Which is fine. No game works for everyone the same way.

(There is also an offhand remark in the article about gamemakers being failed moviemakers... ;) )


> But that doesn’t explain games where as soon as you start it up for the first time, there’s a minimum of 20 minutes of (often unskippable) cutscenes before you can even control a character

I honestly wonder if this is done to reduce returns. Steam, for example, has a <2 hrs refund policy.

Put 30+ minutes of cut scene in, 60 minutes of intro/tutorial, and you’re past 2 hours of game launched time before discovering the game itself just isn’t fun for you (too predictable? Grindy? Too easy? Too hard?)


Cutscenes add to the sense of immersion playing the game. I like them, but I also like to skip them if I've already played the game before.


my theory is that there are two camps of "games" (really more of a spectrum from the projection of two axes, "play" and "art"):

- proper games ("play"): if you remove all the lore, cinematics, dialogs, etc, the gameplay can stand on its own and the user finds it fun. (ex: Elden Ring, Pokemon. you can play a version with the cut-scenes ripped out, in a language you don't understand, and still enjoy both; chess and other abstract games are the extreme end of this category)

- interactive DVD menus ("media arts"): it's a movie but sometimes you get to interact with it. in this category you also have visual novels with branching trees/DAGs. they are more than a movie but still ultimately fail the most important test: they can't stand alone without the story/lore.

I enjoy both, but I wish games and Steam pages were more up front about which camp they are in, before I even buy them.

my ultimate sin is games that think they are in category 1 but give you unskippable cut scenes.


All the AAA games will be inherently fucking stupid almost by design. And this is unavoidable - massive hundreds of millions if not billions in budget -> even if you alienate the bottom 10%, you lose 10% of sales. Bottom 20%, 20% of sales. Not gonna happen.

So you have Legend of Zelda games where pretty much all puzzles are so simple you can instantly tell what the solution is the very moment you see them, ie. downright retarded with few rare exceptions. This also applies to difficulty, etc.

As a result, AAA games can only be appreciated or enjoyed for not much else but production values. The soundtrack, the setpieces, the massive worlds and how much money must have gone into them, etc.


Or God of War. The puzzles almost solve themselves.

Interestingly, Elden Ring (2022) is AAA but very difficult, though not because of the puzzles. Perhaps puzzles test more for IQ (which can't be changed) than for gaming skill.


Don't be like Kid Rock.


I have never disagreed more with a comment on this site.


> The human element is not recorded in the final translation output, but it is important to people that they know something was processed by a human who had heart and the right intentions

Not that I entirely disagree with the conclusion here, but…

It feels like that same sentiment can be used to justify all sorts of shitty translation output, like a dialog saying cutesy “let’s get you signed in”, or having dialogs with “got it” on the button label. Sure, it’s so “human” and has “heart”, but also enrages me to my very core and makes me want to find whoever wrote it and punch them in the face as hard as I can.

I would like much less “human” in my software translations, to be honest. Give me dry, clear, unambiguous descriptions of what’s happening please. If an LLM can do that and strike a consistent tone, I don’t really care much at all about the human element going into it.


Oh I wasn't really referring to tone or language like that, I also don't particularly like it and prefer concise clear language. While LLMs can totally achieve that, I want to know a human decided to do it that way. At some point this mindset is going to look very silly, and perhaps even more so for software. But ultimately it's a human feeling to want that and humans are also not deterministic or logical.


Also, I’m way too lazy to look it up right now, but I’m quite certain I’ve heard of dishwashers that run the hot water for a little bit before letting it fill the basin. Like, I’m pretty sure this sort of thing is commonplace.

It’s not like the engineers for heaterless dishwashers are just too stupid to realize there’s an obvious workaround for having to purge the line before filling the basin. Especially when the performance is so much measurably better when you do it.

Like I said though, it’s a guess. It’s also possible efficiency certifications ding you for the excess water use.


> show that most packages show a slight (around 1%) performance improvement

This takes me back to arguing with Gentoo users 20 years ago who insisted that compiling everything from source for their machine made everything faster.

The consensus at the time was basically "theoretically, it's possible, but in practice, gcc isn't really doing much with the extra instructions anyway".

Then there's stuff like glibc which has custom assembly versions of things like memcpy/etc, and selects from them at startup. I'm not really sure if that was common 20 years ago but it is now.

It's cool that after 20 years we can finally start using the newer instructions in binary packages, but it definitely seems to not matter all that much, still.
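For the curious, the mechanism behind that startup selection is GNU indirect functions (ifunc): a resolver runs when the dynamic linker binds the symbol and returns the implementation to use. A rough sketch of the idea, assuming GCC on an ELF target; the names and the "AVX2" variant here are made-up stand-ins, not glibc's actual code:

```c
#include <stddef.h>
#include <string.h>

/* Two stand-in implementations; in glibc these are hand-tuned assembly
   routines for different CPU generations. */
static void *my_memcpy_generic(void *dst, const void *src, size_t n) {
    return memcpy(dst, src, n);
}
static void *my_memcpy_avx2(void *dst, const void *src, size_t n) {
    return memcpy(dst, src, n);   /* pretend this is the AVX2 version */
}

/* The resolver runs once, at symbol-binding time, and picks a variant. */
static void *(*resolve_my_memcpy(void))(void *, const void *, size_t) {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? my_memcpy_avx2
                                          : my_memcpy_generic;
}

/* Callers just call my_memcpy(); the chosen variant is wired in at load time. */
void *my_memcpy(void *dst, const void *src, size_t n)
    __attribute__((ifunc("resolve_my_memcpy")));
```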


It's also because around 20 years ago there was a "reset" when we switched from x86 to x86_64. When AMD introduced x86_64, it made a bunch of the previously optional extensions (SSE up to a certain version, etc.) a mandatory part of x86_64. Gentoo systems could already be optimized on x86 using those instructions, but now (2004ish) every system using x86_64 was automatically always taking full advantage of all of these instructions*.

Since then we've slowly started accumulating optional extensions again; newer SSE versions, AVX, encryption and virtualization extensions, probably some more newfangled AI stuff I'm not on top of. So very slowly it might have started again to make sense for an approach like Gentoo to exist**.

* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.

** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly more use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also, a lot of the stuff happening in hardware acceleration has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.


The llama.cpp package on Debian and Ubuntu is also rather clever in that it's built for x86-64-v1, x86-64-v2, x86-64-v3, and x86-64-v4. It benefits quite dramatically from using the newest instructions, but the library doesn't have dynamic instruction selection itself. Instead, ld.so decides which version of libggml.so to load depending on your hardware capabilities.


> llama.cpp package on Debian and Ubuntu is also rather clever … ld.so decides which version of libggml.so to load depending on your hardware capabilities

Why is this "clever"? This is pretty much how "fat" binaries are supposed to work, no? At least, such packaging is the norm for Android.


> AVX, encryption and virtualization

I would guess that these are domain-specific enough that they can also mostly be enabled by the relevant libraries employing function multiversioning.


You would guess wrong.


Isn’t the whole thrust of this thread that most normal algorithms see little to no speedup from things like AVX, and that therefore multiversioning the things that do makes more sense than compiling the whole OS for a newer set of CPU features?
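For reference, that kind of function multiversioning is roughly one attribute's worth of work with GCC/Clang's target_clones: the compiler emits a clone per listed target plus a dispatcher, so only the hot function needs the newer instructions. A sketch, with the dot product as an illustrative stand-in:

```c
#include <stddef.h>

/* One clone per listed target, plus an ifunc-style dispatcher chosen at
   load time; the rest of the binary stays baseline x86-64. */
__attribute__((target_clones("default", "avx2", "avx512f")))
double dot(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}
```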


FWIW the cool thing about gentoo was the "use-flags", to enable/disable compile-time features in various packages. Build some apps with GTK or with just the command-line version, with libao or pulse-audio, etc. Nowadays some distro packages have "optional dependencies" and variants like foobar-cli and foobar-gui, but not nearly as comprehensive as Gentoo of course. Learning about some minor custom CFLAGS was just part of the fun (and yeah some "funroll-loops" site was making fun of "gentoo ricers" way back then already).

I used Gentoo a lot, jeez, between 20 and 15 years ago, and the install guide guiding me through partitioning disks, formatting disks, unpacking tarballs, editing config files, and running grub-install etc, was so incredibly valuable to me that I have trouble expressing it.


I still use Gentoo for that reason, and I wish some of those principles around handling of optional dependencies were more popular in other Linux distros and package ecosystems.

There's lots of software applications out there whose official Docker images or pip wheels or whatever bundle everything under the sun to account for all the optional integrations the application has, and it's difficult to figure out which packages can be easily removed if we're not using the feature and which ones are load-bearing.


I started with Debian on CDs, but used Gentoo for years after that. Eventually I admitted that just Ubuntu suited my needs and used up less time keeping it up to date. I do sometimes still pull in a package that brings a million dependencies for stuff I don't want and miss USE flags, though.

I'd agree that the manual Gentoo install process, and those tinkering years in general, gave me experience and familiarity that's come in handy plenty of times when dealing with other distros, troubleshooting, working on servers, and so on.


Someone has set up an archive of that site; I visit it once in a while for a few nostalgic chuckles:

https://www.shlomifish.org/humour/by-others/funroll-loops/Ge...


Nixpkgs exposes a lot of options like that. You can override both options and dependencies and supply your own cflags if you really want.


This should build a lot more incentive for compiler devs to try and use the newer instructions. When everyone uses binaries compiled without support for optional instruction sets, why bother putting much effort into developing for them? It’ll be interesting to see if we start to see more of a delta moving forward.


And application developers to optimize with them in mind?


According to this[0] study of the Ubuntu 16.04 package repos, 89% of all x86 instructions were just 12 instructions (mov, add, call, lea, je, test, jmp, nop, cmp, jne, xor, and -- in that order).

The extra issue here is that SIMD (the main optimization) simply sucks to use. Auto-vectorization has been mostly a pipe dream for decades now as the sufficiently-smart compiler simply hasn't materialized yet (and maybe for the same reason the EPIC/Itanium compiler failed -- deterministically deciding execution order at compile time isn't possible in the abstract and getting heuristics that aren't deceived by even tiny changes to the code is massively hard).

Doing SIMD means delving into x86 assembly and all its nastiness/weirdness/complexity. It's no wonder that devs won't touch it unless absolutely necessary (which is why the speedups are coming from a small handful of super-optimized math libraries). ARM vector code is also rather Byzantine for a normal dev to learn and use.

We need a more simple assembly option that normal programmers can easily learn and use. Maybe it's way less efficient than the current options, but some slightly slower SIMD is still going to generally beat no SIMD at all.

[0] https://oscarlab.github.io/papers/instrpop-systor19.pdf
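One middle ground that already exists is the GCC/Clang generic vector extension: SIMD-ish code without per-ISA intrinsics or assembly, which the compiler lowers to SSE/AVX/NEON or plain scalar code. A sketch, with the scale function as an illustrative example:

```c
#include <stddef.h>
#include <string.h>

/* A 4-wide float vector; GCC/Clang map operations on it to whatever
   SIMD the target offers, or fall back to scalar code. */
typedef float v4sf __attribute__((vector_size(16)));

void scale(float *restrict out, const float *restrict in, float k, size_t n) {
    v4sf kv = {k, k, k, k};                 /* broadcast the scalar */
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        v4sf v;
        memcpy(&v, in + i, sizeof v);       /* unaligned-safe load  */
        v = v * kv;                         /* one vector multiply  */
        memcpy(out + i, &v, sizeof v);
    }
    for (; i < n; i++)                      /* scalar tail */
        out[i] = in[i] * k;
}
```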


Agner Fog's libraries make it pretty trivial for C++ programmers at least. https://www.agner.org/optimize/


The Highway library is exactly the kind of simpler option for using SIMD. Less efficient than hand-written assembler, but you can easily write good-enough SIMD for multiple different architectures.


The sufficiently smart vectoriser has been here for decades. CUDA is one. It uses all the vector units just fine, though it may struggle to use the scalar units.


I somehow have the memory that there was an extremely narrow time window where the speedup was tangible and quantifiable for Gentoo, as they were the first distro to ship some very early gcc optimisation. However it's open source software so every other distro soon caught up and became just as fast as Gentoo.


Would it make a difference if you compile the whole system vs. just the programs you want optimized?

As in, are there any common libraries or parts of the system that typically slow things down, or was this more about a time when hardware was more limited, so improving everything would have made things feel faster in general?

