
Welcome to every single Python ML project - dependency hell will quickly kill any enthusiasm you might have for trying out projects. It really feels archaic to have these issues with such cutting-edge technology.


You can blame CUDA quite a bit for that. It's proprietary, you need to sort out which driver version you need, and you need an Nvidia GPU on top of that...

I tried compiling PyTorch with Vulkan support, but there are a few LDFLAGS that are wrong. I'll try to sort that out some time later.

One piece of advice: use distribution packages! Arch provides pytorch-cuda, and has PKGBUILDs as well.

For reproducibility, I wish we were all on Nix/Guix, but that's not the case (and the CUDA + hardware dependency would make it complicated).


CUDA is not the problem. The problem is crappy code being released on GitHub where basic things like a requirements.txt are missing, never mind an earnest attempt to document the environment the code was actually run on. That's on top of code full of hard-coded references to files and directories, plus Python libraries that break compatibility with each other on point releases.
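
Both of those are cheap to fix, too. A rough sketch (the version numbers and script name below are made up, just to illustrate the pattern): pin the versions you actually tested against, and take paths as arguments instead of hard-coding them.

    # requirements.txt -- pin the exact versions the code was tested with
    torch==2.1.2
    numpy==1.26.4
    opencv-python==4.9.0.80

    # train.py (hypothetical) -- paths come from the command line, not hard-coded strings
    import argparse
    from pathlib import Path

    parser = argparse.ArgumentParser()
    parser.add_argument("--data-dir", type=Path, required=True)
    parser.add_argument("--out-dir", type=Path, default=Path("runs"))
    args = parser.parse_args()
    print(f"reading from {args.data_dir}, writing to {args.out_dir}")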

I can't find the source now, but I remember reading some code where the maintainer had to rewrite a huge chunk of it because a point release of a dependency flipped either the height/width ordering or the BGR channel order (I can't remember which, but it was preposterous) between versions 2.5.4 and 2.5.5. There is no reason for a change like that; it breaks everything just for grins and giggles.
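
Whichever library it was, the only real defense on the consumer side is to make the assumption explicit at your own boundary instead of trusting a dependency's implicit convention. Something like this sketch with OpenCV's BGR ordering (the dummy array is only there so the snippet runs without an image file):

    import numpy as np
    import cv2  # pip install opencv-python

    # dummy 4x4 "image" so the snippet runs without an actual file
    img_bgr = np.zeros((4, 4, 3), dtype=np.uint8)
    img_bgr[..., 0] = 255  # channel 0 is blue in OpenCV's BGR convention

    # convert explicitly rather than assuming whatever order the library hands back
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    assert img_rgb[..., 2].max() == 255  # blue now sits in the last channel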

Python itself is also a problem, but that's a rant for another day. Ah, how I wish Ruby had become the de facto language of choice for ML/deep learning!



