
It always cracks me up when people use the word "stupid" to insult others' intelligence. What a pathetically low-effort word to use.


When you’re responsible for supporting people who refuse to receive patches like this one [1], and those same people have the power to page your phone at 11pm on the weekend… you quickly learn to call a spade a spade.

[1]: https://patchwork.ozlabs.org/project/ubuntu-kernel/patch/202...


There is undoubtedly a better word than stupid. They're very likely not stupid. Careless, maybe. Inept, maybe. Irresponsible, maybe. Stubborn, maybe. More generously: overworked. Just probably not stupid.


What is the material difference here between inept and stupid?


A dictionary is an easy way to find out, but in the interest of good faith: stupid is a lack of intelligence; inept is a lack of skill.

To the point: I'd argue ineptitude is both more damning and more accurate than stupidity in this particular case.


That wasn't what he used the word for. I understood his point perfectly: there are AI teams that are not knowledgeable or skilled enough to modify and enhance the Docker images or toolkits that train/run the models. It takes medium-to-advanced skills to get drivers working properly. He used the shorthand "too stupid to" instead of spelling all that out.


Still, it adds an air of arrogance to the whole post. For a while, the only PyTorch code we had that worked on newly released Hopper GPUs was the NVIDIA NGC container, not PyTorch nightly. The upstream ecosystem hadn't caught up yet, and NVIDIA were adding their special sauce in their image. Perhaps not stupidity, but a lack of docs from NVIDIA.


> For a while, the only PyTorch code we had that worked on newly released Hopper GPUs was the NVIDIA NGC container, not PyTorch nightly. The upstream ecosystem hadn't caught up yet, and NVIDIA were adding their special sauce in their image.

I'm sorry to come across as arrogant, but it's really just frustration: being surrounded by this kind of cargo-culting "special sauce" talk, even from so-called principal engineers, is what drove me to burnout and out of the industry into the northwoods. Furthermore, you're completely wrong. There is no special sauce; you just didn't look at the list of ingredients. There never has been any special sauce.

NVIDIA builds their NGC base containers from open-source scripts available on GitLab: https://gitlab.com/nvidia/container-images/cuda

The build scripts for the base container are incredibly straightforward: they add the apt/yum repo and then install packages from it.
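In pseudo-shell, the whole pattern amounts to something like this. This is a sketch of the steps for the Ubuntu flavor, not a copy of the actual scripts; the keyring URL and package name follow NVIDIA's publicly documented repo setup:

    # Add NVIDIA's apt repo via their keyring package, then install from it.
    apt-get update && apt-get install -y curl ca-certificates
    curl -fsSLO https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
    dpkg -i cuda-keyring_1.1-1_all.deb
    apt-get update && apt-get install -y --no-install-recommends cuda-libraries-12-4

That's the trick in its entirety: packages from a public repo, no private patches, no closed toolchain.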

The PyTorch containers are constructed atop these base containers. The specific PyTorch commit used in each NGC PyTorch container is linked directly in the release notes for that container: https://docs.nvidia.com/deeplearning/frameworks/pytorch-rele...

That is:

25.08: https://github.com/pytorch/pytorch/commit/5228986c395dc79f90...

25.06: https://github.com/pytorch/pytorch/commit/5228986c395dc79f90...

25.05: https://github.com/pytorch/pytorch/commit/5228986c395dc79f90...

25.04: https://github.com/pytorch/pytorch/commit/79aa17489c3fc5ed6d...

25.03: https://github.com/pytorch/pytorch/commit/7c8ec84dab7dc10d4e...

25.02: https://github.com/pytorch/pytorch/commit/6c54963f75e9dfdae3...

25.01: https://github.com/pytorch/pytorch/commit/ecf3bae40a6f2f0f3b...

24.12: https://github.com/pytorch/pytorch/commit/df5bbc09d191fff3bd...

24.11: https://github.com/pytorch/pytorch/commit/df5bbc09d191fff3bd...

24.10: https://github.com/pytorch/pytorch/commit/e000cf0ad980e5d140...

24.09: https://github.com/pytorch/pytorch/commit/b465a5843b92f33fe3...

24.08: https://github.com/pytorch/pytorch/commit/872d972e41596a9ac9...

24.07: https://github.com/pytorch/pytorch/commit/3bcc3cddb580bf0f0f...

24.06: https://github.com/pytorch/pytorch/commit/f70bd71a4883c4d624...

24.05: https://github.com/pytorch/pytorch/commit/07cecf4168503a5b3d...

24.04: https://github.com/pytorch/pytorch/commit/6ddf5cf85e3c27c596...

24.03: https://github.com/pytorch/pytorch/commit/40ec155e58ee1a1921...

24.02: https://github.com/pytorch/pytorch/commit/ebedce24ab578036dd...

24.01: https://github.com/pytorch/pytorch/commit/81ea7a489a85d6f6de...

23.12: https://github.com/pytorch/pytorch/commit/81ea7a489a85d6f6de...

23.11: https://github.com/pytorch/pytorch/commit/6a974bec5d779ec10f...

23.10: https://github.com/pytorch/pytorch/commit/32f93b1c689954aa55...

23.09: https://github.com/pytorch/pytorch/commit/32f93b1c689954aa55...

23.08: https://github.com/pytorch/pytorch/commit/29c30b1db8129b5716...

Do I need to keep going? Every single one of these commits is on pytorch/pytorch@main. So when you say:

> For a while, the only PyTorch code we had that worked on newly released Hopper GPUs was the NVIDIA NGC container, not PyTorch nightly.

That's provably false. The commit hashes would not match unless upstream PyTorch had continually rebased (i.e., force-pushed, breaking the worktree of every PyTorch developer) atop unmerged code from NVIDIA. Meaning all of these commits were merged into pytorch/pytorch@main, and were available in PyTorch nightlies, before those NGC PyTorch containers were released. No secret sauce, no man behind the curtain, just pure cargo culting and superstition.
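Anyone can reproduce this check themselves. A sketch, assuming Docker and a local clone of github.com/pytorch/pytorch; the image tag just follows NGC's YY.MM-py3 naming, and you need the full hash since the links above are truncated:

    # Print the upstream commit the container's torch build came from:
    docker run --rm nvcr.io/nvidia/pytorch:24.05-py3 \
      python -c "import torch; print(torch.version.git_version)"
    # In the pytorch clone, confirm that commit is reachable from
    # upstream main (exit code 0 means it is):
    git fetch origin main
    git merge-base --is-ancestor <FULL_HASH_FROM_ABOVE> origin/main && echo "on main"

If the container really carried unmerged NVIDIA code, that last check would fail.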


I fully understand. My issue is not with the point; my issue is with being too lazy to articulate the point and just saying "stupid" instead.

Address the behavior, not the people.



