there seems to be a large group of CS folks who are very out of touch with the fact that we're currently able to build computational systems that replicate the function of regions of cortex (visual cortex, auditory cortex). It's not general intelligence or superintelligence; really dumb animals can see and hear. And there's a lot more to cognition than simply converting external signals into meaningful representations. But I think it's pretty arguable that deep neural nets (or maybe networks of DNNs) DO represent an architecture sufficiently powerful for strong/general AI to develop. And if you can venture far enough to imagine computational systems on non-von Neumann architectures, you might conclude that the level of abstraction Bostrom is operating at to describe computation isn't total nonsense.
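
For concreteness, here's a minimal sketch (my own illustration in PyTorch, with made-up names and layer sizes) of the kind of architecture people have in mind when they say DNNs model early visual processing: stacked local filters, nonlinearities, and pooling, loosely analogous to the simple/complex-cell hierarchy in V1. It's not a claim that this *is* a visual cortex, just the shape of the analogy.

```python
# Illustrative only: a tiny convolutional stack of the kind people point to
# when they say DNNs approximate early visual cortex -- local oriented
# filters + nonlinearity + pooling, loosely like V1 simple/complex cells.
import torch
import torch.nn as nn

class TinyVisualModel(nn.Module):  # hypothetical name, for illustration
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),   # learned local filters ("simple cells")
            nn.ReLU(),
            nn.MaxPool2d(2),                              # local invariance ("complex cells")
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # filters over filter responses
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                 # external signal -> hierarchical representation
        return self.classifier(h.flatten(1)) # representation -> task-relevant output

# e.g. one 32x32 RGB "image"
logits = TinyVisualModel()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```

The point of the sketch is just that "converting external signals into meaningful representations" is already something we can build and train end to end, which is exactly the part of the argument that doesn't require general intelligence.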