No Brainer (2015) (rifters.com)
85 points by Tomte on June 2, 2019 | 14 comments



The scans are actually of an intellectually disabled person with hydrocephalus; they are not scans of the near-mythical, never-verified Lorber anecdote (https://www.gwern.net/docs/iq/1980-lewin.pdf) about the 126-IQ guy (Hawks gives a couple of reasons why that case might not have been as impressive as it seems: http://johnhawks.net/weblog/reviews/brain/development/ten_pe... ).

So the best example doesn't have hard fMRI scan evidence, and the documented cases of hydrocephalus going along with an IQ of 70 or whatever are much less impressive - if you think the brain is doing something, you would expect profound deficits like that!

More interesting is a recent paper: "Miniature spiders (with miniature brains) forget sooner", Kilmer & Rodriguez 2019 https://www.gwern.net/docs/psychology/2019-kilmer.pdf


Regarding John Hawks's evolutionary argument - "if humans with smaller brains could manage, lesser energy requirements would select for this":

I wonder whether having larger brains brought extra survival advantages precisely thanks to the available "spare capacity".

From modern-day sports (boxing, American football, soccer) we know how bad head injuries can be. If brain damage was a common risk in our evolutionary history, a larger brain could have been the difference between "brain damaged but still functional" and "brain damaged and dead".

An interesting side effect would then be that this extra capacity is also available for other functions in individuals who managed to escape brain damage.


I don't know, the mere fact that someone with brain damage that severe could even be conscious and interact with the world is pretty damned impressive to me. An AI with that kind of systemic damage would present a Blue Screen. The difference between vegetative and able to take an IQ test is far, far higher than the difference between an IQ of 70 and one of 130.


Dropout is a regularization technique in deep learning that works a little like that. Randomly selected neurons are ignored during training, typically with probability 0.5, which means that on any given pass half of the neurons don't do anything.

In other words, the network is trained with 50% 'brain damage' to make it learn better and become more robust.
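For the curious, here is a minimal sketch of (inverted) dropout on a single activation vector, assuming plain NumPy; the function name and the 0.5 rate are just illustrative:

    import numpy as np

    def dropout(activations, p=0.5, training=True):
        # Inverted dropout: zero each unit with probability p during training.
        if not training:
            return activations  # at test time every neuron participates
        mask = np.random.rand(*activations.shape) >= p  # keep with probability 1 - p
        return activations * mask / (1.0 - p)           # rescale so the expected value is unchanged

    h = np.random.randn(8)
    print(dropout(h))  # roughly half the entries are zeroed on this pass

Note that the mask is resampled on every forward pass, which is what the reply below means by the dropped neurons coming back in the next run.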


Dropout is not deletion, though. The neurons turned off on one pass can fire again on the next, and all of them make it into the final model (brain).


There are networks that can be “damaged” (in the sense of having neurons ripped out) and still function reasonably well.


(If anyone is wondering, yes, I did tell Watts about this years ago but he apparently hasn't gotten around to updating it.)


Does this remind anyone else of the lottery ticket hypothesis deep learning paper that was published recently? [0]

I think the gist was that neural networks are usually much larger and more connected than necessary. The paper provided an algorithm that iteratively prunes the connections with the lowest-magnitude weights, retraining the surviving subnetwork from its original initialization after each round.

Here's a quote from the paper:

"The Lottery Ticket Hypothesis: A randomly-initialized, dense neural network contains a subnetwork that is initialized such that—when trained in isolation—it can match the test accuracy of the original network after training for at most the same number of iterations."

[0] https://arxiv.org/abs/1803.03635
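For what it's worth, here's a minimal sketch of the iterative magnitude pruning described above, assuming plain NumPy; the function name and the 20%-per-round rate are just illustrative, and the paper's actual procedure retrains between rounds with the surviving weights rewound to their initial values:

    import numpy as np

    def prune_lowest_weights(weights, mask, prune_frac=0.2):
        # Zero out the smallest-magnitude surviving weights and update the mask.
        surviving = np.abs(weights[mask])               # magnitudes of links still in play
        threshold = np.quantile(surviving, prune_frac)  # cutoff below which links are dropped
        mask = mask & (np.abs(weights) > threshold)
        return weights * mask, mask

    w = np.random.randn(256, 128)       # a stand-in for one layer's weight matrix
    mask = np.ones_like(w, dtype=bool)
    for _ in range(5):                  # prune, retrain, prune again, ...
        w, mask = prune_lowest_weights(w, mask)
    print(mask.mean())                  # fraction of connections that survive (~0.33 here)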


Or EfficientNet, SOTA performance with a tenth of the compute: http://ai.googleblog.com/2019/05/efficientnet-improving-accu...

You would hope the human brain would be pretty efficient, given the selection pressure, but apparently not.


I mean, it's pretty incredible and coincidental that we evolved brains this sophisticated in the first place, out of a totally random process with selective pressure. After 13 billion years I think that's reasonable progress, but I wouldn't expect SOTA efficiency. Once you have intelligence, the evolutionary benefits are huge, and all of a sudden there's not as much pressure to be efficient (well, there was, for something like 100,000 years, but then we invented fire and cooking, which made getting energy/nutrition much more efficient; at this point I'm just paraphrasing the book Sapiens).


I love Peter Watts's writing. If you haven't read Blindsight, I totally recommend it. https://news.ycombinator.com/item?id=18378221


Seconded. I honestly don’t see the world exactly the same after I read it.


> Right now, right here in the real world, the cognitive function of brain tissue can be boosted— without engineering, without augmentation— by literal orders of magnitude. All it takes, apparently, is the right kind of stress.

The implied benefits assume the effect would be cumulative... but perhaps sheer brain mass is not the limiting factor in a body, which would explain why these cases aren't as detrimental as expected: a smaller mass of brain tissue does the best it can to fill the potential available to it.

As a crude and purely fictitious example: what if something in the brain stem determines how much useful connectedness can be exploited? Similar to how a north bridge determines, to some degree, how much of a CPU's "capacity" is actually useful - more is still better, but for many tasks there are diminishing returns past a certain point because of basic bandwidth limitations.

In this analogy these people would still have a fast bus but a rather small CPU - and you can still do a lot with a small CPU and a fast bus if you use them the right way and are forced to.




