I know computers were slow back then, and Moore's law is what eventually made really big networks computationally feasible.
Still, the decisive algorithmic breakthrough beyond the Perceptron was applying backpropagation (BP) to multilayer perceptrons (MLPs). Without multiple layers you can't solve problems that aren't linearly separable (XOR being the classic example), and without backpropagation of the error you can't train multilayer networks.
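To make this concrete, here is a minimal sketch (using only numpy; all names and hyperparameters are illustrative choices, not from any particular source) of a tiny two-layer MLP trained with backpropagation on XOR, the canonical problem a single-layer perceptron cannot solve:

```python
import numpy as np

# XOR: not linearly separable, so no single-layer perceptron can solve it.
# A 2-layer MLP trained with backpropagation typically can.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 4 units (arbitrary small choice)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

loss0 = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error from the output layer
    # back through the hidden layer (the "BP" step)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

loss = np.mean((out - y) ** 2)
print(loss0, "->", loss)  # training loss drops as the hidden layer learns XOR
```

Deleting the hidden layer (feeding `X` straight into a single sigmoid unit) makes the loss plateau, since no linear decision boundary separates XOR's classes; that is exactly the limitation the MLP-plus-backpropagation combination removes.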