For me, it's something that executes instructions and has its own instruction set. It can even be built on a breadboard, and you could eventually invent your own instruction set.
Even low-powered microcontrollers have a CPU (microcontrollers are small, low-powered computers), and they come in many sizes[1].
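To make "invent your own instruction set" concrete, here's a rough sketch (the opcodes and program are made up for illustration, not anything real): a handful of toy instructions plus the fetch-decode-execute loop that runs them, which is the essence of what any CPU does.

    // Toy, invented instruction set: just enough to show "execute instructions".
    public class ToyCpu {
        static final int LOAD = 0, ADD = 1, PRINT = 2, HALT = 3;

        public static void main(String[] args) {
            // Each instruction is {opcode, operand}.
            int[][] program = {
                {LOAD, 2},    // acc = 2
                {ADD, 40},    // acc += 40
                {PRINT, 0},   // print acc
                {HALT, 0}
            };

            int acc = 0;                       // single accumulator register
            int pc = 0;                        // program counter
            while (true) {                     // fetch-decode-execute loop
                int[] instr = program[pc++];
                switch (instr[0]) {
                    case LOAD:  acc = instr[1];            break;
                    case ADD:   acc += instr[1];           break;
                    case PRINT: System.out.println(acc);   break;
                    case HALT:  return;
                }
            }
        }
    }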
Your point could be that it's just a head-mounted display on top of an ASIC, which I doubt is the case.
Right now I think Glass is just a display for smartphones and a way to use Google services, which I think is quite limited (you said it's useless without the net, and I agree). Right now we don't even have the tech to run sophisticated speech recognition on a smartphone without a couple of servers crunching statistical models, so why would you expect it to be different with a low-powered device?
EDIT: Basically, my last paragraph is saying that I agree with you, just without being too harsh in the comments. This could be the beginning of the wearable computing revolution, along with an iWatch.
Android has offline speech recognition (introduced in Jelly Bean). From my limited testing, it works really well, so I don't think an external server is as necessary as some people say.
It's insanely fast, transcribing what I say in near real time, which feels like black magic compared to Siri on my iPhone, which has to record an audio clip in its entirety, send it up to Apple's servers, process it, and then send a response back.
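For anyone curious, the hook for this is the standard SpeechRecognizer API; roughly something like the sketch below. Whether the transcription actually happens on-device depends on having the offline language pack installed (and, on newer Android versions, hints like EXTRA_PREFER_OFFLINE), so treat this as a rough outline rather than a guaranteed-offline recipe. It also needs the RECORD_AUDIO permission.

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognitionListener;
    import android.speech.RecognizerIntent;
    import android.speech.SpeechRecognizer;

    import java.util.ArrayList;

    // Requires the RECORD_AUDIO permission in the manifest.
    public class DictationActivity extends Activity {

        private SpeechRecognizer recognizer;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            recognizer = SpeechRecognizer.createSpeechRecognizer(this);
            recognizer.setRecognitionListener(new RecognitionListener() {
                @Override
                public void onResults(Bundle results) {
                    // Transcriptions arrive as a ranked list of candidate strings.
                    ArrayList<String> candidates =
                            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                    if (candidates != null && !candidates.isEmpty()) {
                        android.util.Log.d("Dictation", "Heard: " + candidates.get(0));
                    }
                }

                // Remaining callbacks left empty for brevity.
                @Override public void onReadyForSpeech(Bundle params) {}
                @Override public void onBeginningOfSpeech() {}
                @Override public void onRmsChanged(float rmsdB) {}
                @Override public void onBufferReceived(byte[] buffer) {}
                @Override public void onEndOfSpeech() {}
                @Override public void onError(int error) {}
                @Override public void onPartialResults(Bundle partialResults) {}
                @Override public void onEvent(int eventType, Bundle params) {}
            });

            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
            recognizer.startListening(intent);
        }

        @Override
        protected void onDestroy() {
            recognizer.destroy();
            super.onDestroy();
        }
    }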
Glass could use the Android handset as a remote server: it shouldn't matter whether the phone crunches the voice data on its own or over a data connection; all that matters is that Glass gets a reply from the API call.
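Something like this hypothetical handset-side service is all Glass would need to see. The port, the one-line protocol, and the transcribe() stub below are invented for illustration; the point is simply that Glass sends a request and gets text back, with no idea where the transcription actually happened.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Hypothetical handset-side "recognition service": Glass sends a request,
    // the phone answers with a transcript. Whether the phone produced that text
    // on-device or via Google's servers is invisible to Glass.
    public class HandsetRecognitionService {

        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9099)) {   // port is arbitrary
                while (true) {
                    try (Socket glass = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(glass.getInputStream()));
                         PrintWriter out = new PrintWriter(glass.getOutputStream(), true)) {

                        String request = in.readLine();            // e.g. "RECOGNIZE"
                        if ("RECOGNIZE".equals(request)) {
                            out.println(transcribe());             // reply with plain text
                        }
                    }
                }
            }
        }

        // Placeholder: a real app would call the phone's speech engine here
        // (offline or online) and return whatever text it produced.
        private static String transcribe() {
            return "ok glass, take a picture";
        }
    }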
[1]: http://www.wired.com/design/2013/02/freescales-tiny-arm-chip...