
A human is exactly the same. The difference is, once an AI is trained you can make copies.

My kid literally just got mad at me because I assumed he knew how to put more paper in the printer. He’s 17 and has printed tons of reports for school. Turns out he’s never had to change the printer paper.

People know about hiding in cardboard boxes because we all hid in cardboard boxes when we were kids. Not because we genetically inherited some knowledge.



We inherently know that cardboard boxes don't move on their own. In fact, any inanimate object moving in an irregular fashion will automatically draw our attention. These are instincts that even mice have.


Yep, and humans will make good guesses about the likely cause of the moving box. These guesses factor in other variables, such as the context in which the event is taking place. We might be in a children's play room, so the likely activity here is play, or the box is likely part of the play equipment found in large quantities in the room, etc.

"AI" is not very intelligent if it needs separate training specifically about boxes used potentially for games and play. If AI were truly AI, it would figure that out on its own.


We also make bad guesses, for instance seeing faces in the dark.


Yes, and when humans make bad guesses it's often seen as funny or nothing out of the ordinary. When AI makes bad guesses, it will be seen as a failure of some standard, but with very few people understanding how to fix it. I'm not sure how "allowable" mistakes in the interest of AI learning will be tolerated for AI services used for real-world purposes.

"This Bot is only 6 months old, give him a break". But will people give the Bot a break? Either way, blaming AI will be a popular way to pass the buck.


>We inherently know that cardboard boxes don't move on their own.

No. We don’t. We learn that. We learn that boxes belong in the class “doesn’t move on its own”. In fact, later when you encounter cars, you relearn that these boxes do move on their own. We have to teach kids “don’t run out between the non-moving boxes because a moving one might hit you”. We learn when things seem out of place because we’ve learned what their place is.


Your kid's printer dilemma isn't the same. For starters, he knew it ran out of paper - he identified the problem. The AI robot might conclude the printer is broken. It would give up without anxiety, declaring "I have no data about this printer".

Your kid got angry, which is fuel for human scrutiny and problem solving. If you weren't there to guide him, he would have tried different approaches and most likely worked it out.

For you to say your kid is exactly the same as data-driven AI is perplexing to me. Humans don't need to have hidden in a box themselves to understand "hiding in things for the purposes of play". Whether it's a box or a special, one-of-a-kind plastic tub, humans don't need training about hiding in plastic tubs. AI needs to be told that plastic tubs might be something people hide in.


The distinction is that, currently, AI has a training phase and an execution phase, while a human does both all the time. I don’t think the distinction is meaningful now, and it certainly won’t be when these two phases are combined.

You are just a neural net. You are not special.


> "You are just a neural net. You are not special".

"Just" a neural net? Compared to these bots following a recipe of instructions at rapid rates, we are indeed special.

We barely even know why people yawn, or dream, or any number of other things. Don't pretend it's all figured out. Don't pretend all we need to do is "tweak the execution phases" to unleash true artificial intelligence. You're reducing human intelligence far below where it actually is.

Another example: The box is painted bright green - unusual for a box. A small child will notice the colour, but not give that fact more weight than it deserves. In other words, the child concludes the box is still a box being used for play, with someone hiding inside.

The AI Bot, on the other hand, has only been taught about normal brown cardboard boxes. It reaches a different conclusion about the purpose of the green box because it gave the colour too much priority. Humans are special not because of training and execution in parallel, but because of our unique ability to "relax" and move ahead when not all factors are known. We push through, go with the flow, "wing it" with varying degrees of success. We take leaps of faith, including micro-leaps in everyday situations, far more often than any Bot should be allowed to do. That's the special difference, and it's why I'm honestly wondering where the ethics debate is while companies rub their hands together thinking about AI profits.



