Welcome to every single Python ML project: dependency hell quickly kills whatever enthusiasm you had for trying the thing out. It feels archaic to be fighting these issues with such cutting-edge technology.
CUDA is not the problem; the problem is crappy code being released on GitHub where basics like a requirements.txt are missing, never mind an earnest attempt to document the environment the code was actually run in. That's on top of hard-coded references to files and directories, and Python libraries breaking compatibility with each other on point releases.
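Even a minimal pinned requirements.txt would go a long way. A sketch of what that looks like (the package names and versions here are purely illustrative, not taken from any particular project):

    # requirements.txt -- exact pins so the environment can be reproduced
    # (versions below are illustrative, not a recommendation)
    numpy==1.26.4
    opencv-python==4.9.0.80
    torch==2.2.2
    # better yet, generate it from the environment that actually worked:
    #   pip freeze > requirements.txt

It costs the author one command and saves everyone downstream hours of guessing.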
I can't find the source now, but I remember reading code where the maintainer had to rewrite a huge chunk because a point release of a dependency flipped either how it handled height/width ordering or BGR channel ordering (I can't remember which, but it was preposterous) going from 2.5.4 to 2.5.5. There is no reason for doing that; it breaks everything just for grins and giggles.
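Since the library isn't named, here's just a generic defensive pattern: make the channel order and dimensions explicit in your own code instead of relying on a library's (changeable) defaults. This assumes OpenCV-style BGR uint8 arrays and is only a sketch:

    # Defensive image loading: assert the layout you expect and convert
    # channel order explicitly rather than trusting library defaults.
    import cv2
    import numpy as np

    def load_rgb(path: str) -> np.ndarray:
        img = cv2.imread(path)  # OpenCV returns HxWxC in BGR order (or None on failure)
        if img is None:
            raise FileNotFoundError(path)
        assert img.ndim == 3 and img.shape[2] == 3, f"unexpected shape {img.shape}"
        return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # explicit, not implicit

It doesn't stop upstream from changing semantics, but at least the breakage shows up as a loud assertion instead of silently wrong outputs.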
Python itself is also a problem, but that's a rant for another day. Ah, how I wish Ruby had become the de facto language of choice for ML/deep learning!