At the moment crowd-sourced ML training doesn't work very well because updates to the model's weights have to be synchronized across all of the nodes in the computation, typically at every optimization step. This is why giant models tend to be trained on clusters with very fast interconnects.
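To make the bottleneck concrete, here's a minimal sketch of that synchronization step in synchronous data-parallel training, using PyTorch's `torch.distributed`. The `all_reduce` collective is the real API that frameworks run under the hood; the `sync_gradients` helper and the surrounding setup are illustrative, and a real process group would already have been initialized with `dist.init_process_group`.

```python
import torch
import torch.distributed as dist

def sync_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all nodes after a local backward pass.

    Every parameter's gradient tensor crosses the network once per step,
    so total traffic scales with model size -- this is the step that
    demands a fast interconnect.
    """
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            # Sum this gradient tensor across all nodes in the group...
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            # ...then divide to get the average gradient everywhere.
            p.grad /= world_size
```

As a back-of-envelope: a model with one billion fp32 parameters has roughly 4 GB of gradients, and in the naive scheme above each node moves that much data every step. That's routine over NVLink or InfiniBand inside a cluster, but hopeless over residential internet connections, which is the core problem for crowd-sourced training.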