So we want a network that is neuromorphic in structure, yet somehow doesn't take 20 or more years to train, the way a brain does?
Secondly, how do we get to claim that a particular thing is neuromorphic when we have such a rudimentary understanding of how a biological brain works, or of how it generates things like a model of the world, a sense of self, and so on?
Something to consider is that it really could take 20+ years to train like a brain.
But once you’ve trained it, you can replicate it at ~0 cost, unlike a brain.
Yes, but the underlying point is that in this case you can train the AI in parallel, and there's a decent chance this or something like it will be true for future AI architectures too. What does it matter that the AI needs to be trained on 20 years of experiences if all of those 20 years can be experienced in 6 months given the right hardware?
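To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python (the 40-environment figure is hypothetical, chosen to match the 20-years-in-6-months example, not a claim about any real system):

    # Wall-clock time to accumulate a "lifetime" of experience when
    # independent training environments run in parallel.
    YEARS_OF_EXPERIENCE = 20   # total experience the agent needs
    PARALLEL_ENVS = 40         # hypothetical: independent envs, each at real time

    wall_clock_years = YEARS_OF_EXPERIENCE / PARALLEL_ENVS
    print(f"{wall_clock_years:.2f} years")  # 0.50 years, i.e. about 6 months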
I think we're talking at cross-purposes here. I understand you, but what if the type of learning that leads to intelligence is inherently serial in some important way and can't simply be parallelized? What if the fact that it takes a certain amount of chronological time is itself important?
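To put the worry in concrete terms, here's a toy sketch (purely hypothetical, not a claim about real brains or real training runs) of a computation where each step consumes the output of the previous one, so adding more machines doesn't shorten the chain:

    def fold_in(state: float, experience: float) -> float:
        # Each step folds new experience into the state left by the last step.
        return 0.9 * state + 0.1 * experience

    state = 0.0
    for t in range(20):  # 20 dependent steps; step t needs step t-1's output
        state = fold_in(state, float(t))
    print(state)  # the chain's length is fixed no matter how much hardware you add

If learning were like this, 20 years of experience would take 20 years of wall-clock time regardless of parallel hardware.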
What I'm trying to express is that we seem to want to cherry-pick certain features from nature and ignore others that are inconvenient. That's understandable, but because our knowledge of biological systems is so incomplete, we really don't know which of these features (if any) give rise to intelligence. For all we know, we could be training in a seemingly efficient way that completely precludes intelligence from ever emerging.