
My version of PyTorch didn't have CUDA support. I had to install conda to get it, and it's installing now.

Whatever default version `pip install git+https://github.com/openai/whisper.git` grabbed didn't include CUDA support.
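In case it helps anyone else: you can check which build pip actually gave you from Python (just a diagnostic, run in the same environment Whisper uses):

```python
# Check whether the installed PyTorch build was compiled with CUDA support.
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False without a working NVIDIA GPU + driver
```

If the last line prints False even after reinstalling, Whisper will silently fall back to the CPU.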



I installed Whisper (and, I thought, all the needed dependencies) and had it running on my M1 Max MacBook Pro with 64 GB of RAM, but it ran TERRIBLY slowly... taking an hour to transcribe a couple of minutes of audio...

I found this thread and wondered whether Whisper was actually using all the cores or the GPU, so I've spent a couple of hours trying to get Whisper onto the GPU, following the points made in this thread and googling how to install the various components via brew.

Long story short, I keep getting an error message:

"RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."

or, when I set --device to gpu, I get the error: "RuntimeError: don't know how to restore data location of torch.storage._UntypedStorage (tagged with gpu)"
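I think that second error is because "gpu" isn't a device name PyTorch recognises; it expects "cpu", "cuda", or (on Apple Silicon, with PyTorch 1.12+) "mps". A small sketch of the fallback logic, with the availability checks passed in as plain booleans so it's easy to see each case:

```python
# Sketch: pick a valid torch device string. "gpu" is not one of them,
# which is what triggers the "tagged with gpu" RuntimeError above.
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Return the best available device name given availability flags."""
    if cuda_ok:
        return "cuda"   # NVIDIA GPU
    if mps_ok:
        return "mps"    # Apple Silicon GPU (Metal backend)
    return "cpu"

# On an M1 Mac there is never CUDA; with MPS also unavailable
# (or unsupported by the model), you fall back to the CPU.
print(pick_device(cuda_ok=False, mps_ok=False))  # -> cpu
```

In real code the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.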

It's been a looong time since I wrote any code (remember BASIC?), so I realise I may be missing a lot here!!

Does anyone have any pointers?

Thanks!

edit: I'm now running it one more time, after trying to set the CPU using this line:

map_location=torch.device('gpu')

and I get this message as Whisper begins: "~/opt/anaconda3/lib/python3.9/site-packages/whisper/transcribe.py:78: UserWarning: FP16 is not supported on CPU; using FP32 instead warnings.warn("FP16 is not supported on CPU; using FP32 instead")"

then I wait for Whisper to do its magic... though it looks like it will remain very slow...
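For anyone hitting the same warning: it's harmless. Whisper just falls back to FP32, since half precision isn't supported on the CPU. Assuming the standard whisper command-line tool, you can make the CPU choice explicit (the file name here is just a placeholder):

```shell
# Run Whisper explicitly on the CPU in FP32; note that "gpu" is not a
# valid --device value, only names torch understands (cpu, cuda, ...).
whisper audio.mp3 --model base --device cpu --fp16 False
```

It won't make anything faster, but it avoids the invalid-device errors and silences the FP16 warning.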



