
If you wish to implement the Torch API from scratch, you must first invent the universe. (Apologies to Sagan)


"from scratch" in this context does not make any sense - Pytorch is already implemented "from scratch" using Python, C++, and CUDA. Implementing Pytorch using Numpy is the opposite of "from scratch".


I didn't dig too deeply, but on the pytorch installation page it says "Please ensure that you have met the prerequisites below (e.g., numpy)"

Does pytorch already require numpy?


It does, because it can convert a numpy array to a pytorch tensor and back, but I don’t think numpy is used for anything else.
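A minimal sketch of that round trip (assuming a CPU tensor; both directions are zero-copy, so the objects share memory):

    import numpy as np
    import torch

    # NumPy array -> PyTorch tensor (no copy, shares memory)
    a = np.arange(6, dtype=np.float32).reshape(2, 3)
    t = torch.from_numpy(a)

    # PyTorch tensor -> NumPy array (also zero-copy for CPU tensors)
    b = t.numpy()

    # Writes through the tensor are visible in both arrays
    t[0, 0] = 42.0
    assert a[0, 0] == 42.0 and b[0, 0] == 42.0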


They must be doing something else with numpy, as Python has a generic API that allows direct access to the data of numpy array-like structures: the Buffer Protocol https://docs.python.org/3/c-api/buffer.html
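For illustration, this is what the buffer protocol looks like from plain Python - nothing PyTorch-specific, just a sketch of consuming and producing buffers:

    import numpy as np

    a = np.arange(4, dtype=np.int32)

    # Any object implementing the buffer protocol exposes its raw memory
    # via memoryview; the consumer of the buffer doesn't need to know about NumPy.
    view = memoryview(a)
    print(view.format, view.shape, view.itemsize)

    # The reverse direction: wrap an existing buffer in an ndarray.
    raw = bytearray(16)
    b = np.frombuffer(raw, dtype=np.int32)
    print(b)   # four zeros, backed by the bytearray's memory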


It’s not just about access to the underlying data - users might want to import from, or export to, a numpy array (e.g. for visualization, etc).


The buffer protocol means PyTorch can have a torch.tensor method that can create a tensor from anything implementing this protocol (which NumPy arrays do), and likewise a numpy.array can be constructed out of a PyTorch tensor because PyTorch's Tensor implements this protocol. I.e. they don't need to know about each other (so no dependency), only about the protocol.
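A quick sketch of that protocol-style interop (whether these particular calls go through the buffer protocol or NumPy's other interchange hooks such as __array__ is an internal detail I haven't checked - the point is just that neither library needs an explicit conversion call from the other):

    import numpy as np
    import torch

    a = np.ones((2, 2), dtype=np.float32)

    # torch.tensor accepts array-likes such as NumPy arrays (this constructor copies)
    t = torch.tensor(a)

    # and NumPy can build an array straight from a CPU tensor
    b = np.asarray(t)

    assert b.shape == (2, 2)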



