"...in an environment punctuated by slow, progressive changes followed by cataclistic changes in the opposite direction, individuals that track the enviromment better will overfit (and die). More inaccurate individuals will be the ones surviving long term."
Reading this made me think of the numerous financial firms in history that became very efficient at making money in a particular type of market environment, which then inevitably changed suddenly in unexpected ways, causing those firms to blow up and sometimes even start a financial crisis:

* https://en.wikipedia.org/wiki/List_of_stock_market_crashes_a...

* https://en.wikipedia.org/wiki/List_of_banking_crises

Society might be better off with financial firms that are "dumber!"
'When Genius Failed' is a terrific book about the ironically named, efficiency-focused hedge fund Long-Term Capital Management, which had to be bailed out in the 1998 crisis.
I don't think humans evolve to be inaccurate - I think a better description is that they evolve to not all be the same. If you look at two species, the one that is very uniform is more likely to die out over the long term than the one that has more variation. That variation can be physical, or in the mind and how it makes choices. But this variation makes the species more adaptable when conditions change drastically.
If you look at the long term, in the world the article describes (slow, gradual change with rare cataclysms), wrong decision-makers will still die at a higher rate than correct decision-makers. It's just in those rare situations that they're the ones who survive.
That's a stretch, because it implies intent toward diversity. Robustness against mass extinction is definitely something natural selection selects for, but the inverse is true where too much genetic variability leads to less reproduction, and a drift towards uniformity over generations.
The bigger assumption in this article is in what natural selection selects for in humans. An individual of most species typically needs only to live long enough to produce viable offspring. With humans, we've evolved intelligence that has led to a tribal culture.

That means natural selection doesn't apply to the individual, but to the species at large. The best individuals of the entire human species are enough to hold natural selection at bay for everyone else (i.e. vaccines, engineering feats, etc.). That doesn't mean we've evolved to make bad decisions; it just means that the collective knowledge of our species is now what's being subjected to natural selection.
"but the inverse is true where too much genetic variability leads to less reproduction"
Two ant colonies, A and B. Sugar is abundant in the area, and all ants in colony B prefer sugar. 95% of the ants in colony A prefer sugar, while 5% prefer peanut butter. The ants that like peanut butter run a higher risk of getting killed because peanut butter is scarce and they must travel further; they also use more energy getting food. One day a truck spills poisoned sugar near the colonies. Colony B is wiped out. Colony A survives because of the 5% of ants that prefer peanut butter. The queen may die, but the surviving ants reproduce.
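A tiny simulation makes the point concrete (every number here - colony size, death rates, the year the poison arrives - is invented for illustration):

```python
def simulate_colony(pb_fraction, years=50, capacity=1000, poison_year=20):
    """Toy model: sugar foragers are efficient (low yearly death rate) but a
    one-off poisoned-sugar event kills them all; peanut-butter foragers pay
    a higher baseline death rate for their longer, riskier trips."""
    sugar = round(capacity * (1 - pb_fraction))
    pb = capacity - sugar
    for year in range(years):
        if year == poison_year:
            sugar = 0                  # the poisoned truckload arrives
        sugar = int(sugar * 0.98)      # 2% yearly deaths among sugar ants
        pb = int(pb * 0.95)            # 5% yearly deaths among PB ants
        total = sugar + pb
        if total == 0:
            return 0                   # colony extinct
        # Survivors reproduce back toward capacity, preserving proportions.
        scale = min(capacity, int(total * 1.5)) / total
        sugar, pb = int(sugar * scale), int(pb * scale)
    return sugar + pb

print("uniform colony B:", simulate_colony(pb_fraction=0.00))  # wiped out
print("diverse colony A:", simulate_colony(pb_fraction=0.05))  # recovers
```

Both colonies face the same catastrophe; only the one carrying the "inefficient" minority preference comes out the other side.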
We all know someone who hates a particular food that most people love. We all know someone who loves a food we think is disgusting. Why don't we all like the same healthiest foods?
I'm not an expert on genes but it's possible that in one species, taste or some other variable is determined by 1 or 2 genes. In another species it may be determined by 8 or 9 genes. This complexity in taste determination may cause more variation in how it manifests. Maybe that complexity causes odd variations to occur over generations. Even as other factors select for sugar in one species, individuals keep popping up that like peanut butter.
The problem with evolutionary behavior hypotheticals is that it's too easy to craft them to fit a narrative - even with yours the inverse is still true.
The next generation of the species is all PB-seeking, high-risk ants, which expend more energy and die off faster, reproducing less. Diversity shrinks over the generations as natural selection picks off most of the high-risk individuals, leading to uniformity.
This still doesn't apply to humans, because even food preferences weren't really subject to natural selection (outside food neophobia in children), because pre-agricultural humans ate whatever they could get their hands on.
I'm not sure which you are using to form your hypothesis, but I find the notion that intelligence is unnatural, or that the application of intelligence is unnatural, or that the products of intelligence are unnatural to be rather unsatisfying. I take it as a capitulation to marketing forces more than any sort of scientific perspective.
Evolution does not understand and does not optimize for the survival and flourishing of a species. There are three reasons why members of a species vary - either the variance is the result of a beneficial gene, or it's the result of random changes to irrelevant genes, or the species is partway through fixing a new beneficial mutation through the entire population.
So the more complicated explanation as to why humans vary is that humans have a specific set of adaptations for learning and filling their place in the social environment. This is the entire point of childhood - trying things, seeing what works well, doing more of it, and winding up a person who does the sorts of things that work well for them. This means that if you happen to be gifted with bad eyesight and good verbal processing, you'll get early successes at storytelling that lead you to that sort of role as an adult.
That, in a nutshell, is why humans have such variance. It's the result of childhood, which is a specific algorithm that genes use to find and exploit the things the hosts happen to be better at.
And this built-in ability to vary oneself is helpful both for the strong and the weak, the pretty and the ugly, the clever and the dull, and so forth. It means instead of over-fitting for behaviors that work for the strong, the trait enacts a strategy that does strong-person behavior in strong people and weak-person behavior in weak people.
Related to this topic: someone (Derbasti) once recommended to me [1] "Thinking, Fast and Slow" by the economics Nobel laureate Daniel Kahneman. It's a book about the dozens of ways humans fail to reason properly, and a huge part of it addresses the heuristics we use to estimate risk, probability, costs... Absolutely a must-read.
You have some system that tries to balance bias, variance, and utility. Sometimes the best way to do that is to have some bias. The classic example is that you may jump when you see something that looks like a predator but then turns out not to be. The asymmetric cost of being wrong (seeing nothing when there's a predator vs. seeing a predator when there's nothing) means it's optimal to sometimes see danger that isn't there.
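A minimal sketch of that asymmetry in signal-detection terms (the prior, the costs, and the Gaussian evidence model are all assumed here for illustration): when a miss costs 1000x a false alarm, the cost-minimizing threshold sits far below the "neutral" midpoint, i.e. the optimal detector jumps at shadows.

```python
import math

def norm_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Assumed numbers, for illustration only:
P_PREDATOR = 0.01        # predators are rare
COST_MISS = 1000.0       # failing to jump at a real predator
COST_FALSE_ALARM = 1.0   # jumping at a shadow
# Noisy evidence score: N(0,1) when nothing, N(1,1) when predator.

def expected_cost(threshold):
    p_false_alarm = 1 - norm_cdf(threshold)   # nothing there, but we jump
    p_miss = norm_cdf(threshold - 1)          # predator there, we don't jump
    return ((1 - P_PREDATOR) * COST_FALSE_ALARM * p_false_alarm
            + P_PREDATOR * COST_MISS * p_miss)

best_cost, best_t = min((expected_cost(t / 100), t / 100)
                        for t in range(-300, 500))
print(best_t)  # about -1.8: far below the cost-neutral midpoint of 0.5
```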
That's the argument for racial profiling. Not making any argument for or against, just pointing out something interesting. I didn't think about the topic in this context before, as to whether or not it's rational. It's always only been a question of ethics to me. So the natural follow-up question: are various ethics irrational? Again, only a theoretical exercise, not putting my personal views into this.
Well, it's an argument for racial profiling if certain races actually are more dangerous.
It's hard to answer your question, since both ethics and rationality are fuzzy, slippery concepts that need to be nailed down firmly to reason meaningfully about.
>Well, it's an argument for racial profiling if certain races actually are more dangerous.
When you say "more dangerous" do you mean "inherently more dangerous" or "more likely to commit crimes per capita"? Because if it's the latter, then depending on how you define "race" some of these statistics are already known.
Are various ethics irrational? Quite probably.
Being more specific on which ethics you mean might give more to chew on.
Humans are notoriously great at pattern matching, and seeing things that aren't there simply based on a prior mental model. Stereotypes of all kinds are a result of this.
Ethics are a competing patchwork of conflicting drives.
Evolution tends to select for short-term survival over long-term viability. It's absolutely a mistake to believe the two are aligned.
The reality is that humans are (probably) the first Earth species to be capable of abstract non-physical modelling of the future. But we have a huge amount of behavioural baggage which guarantees that we tend to ignore long-term warning signals in pursuit of short-term gains motivated by evolutionary heuristics.
The heuristics work fine until they don't. Species discover this the hard way all the time.
It would be nice to think we're not one of those species, but the jury is still out on that.
We're certainly capable of creating the most complex models. What's interesting about this very human tendency is that the models begin to take on a reality of their own, replacing our actual observations.

We can see this most clearly in our politicians' insistence on their particular model of the economy as their priority, rather than the actual concerns of the citizens they are responsible to.
If that were so, "left" and "right" would reflect similar inclinations across polities and evolutionarily-short time periods.
They don't.
Or at least, I'd be highly amused to see someone attempt to square the Trump platform with Burkean thought, and that's a relatively smallish difference, both evolutionarily and intellectually speaking, compared to others.
If we compete for resources, sometimes my tribe and yours aren't exactly friendly. Perhaps seeing you (alone or in groups) wasn't exactly a pleasant experience, so I might not like your hair, skin color, eye shape, and so on... Not liking you and not being liked by you might have happened in the past.

And if my group was more aggressive, there's some probability that yours didn't even survive while mine did. Or if I was more "racist", perhaps my genes would spread better. So some gene that makes me dislike "different" people might have played a role.

And if our genes didn't evolve away from that over a few thousand years, simply because there wasn't enough environmental/social/whatever pressure in that direction... here we are.
Actually, it explains why everyone does innate racial profiling, even members of the targeted group. It's also why, even if a policy against it is adopted, it is immensely hard to actually change.
It's also the case that violence is so unlikely in any given encounter that it isn't rational to worry about it (in the US there are billions of stranger-stranger encounters in a given year, and only millions of incidents of violence).
I guess someone might argue that arranging for encounters that are 99.9% likely to be violence-free, rather than 99.8% likely, is worthwhile - but their glances probably aren't powerful enough to actually create that difference.
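Rough arithmetic with figures of that order (both numbers assumed, not sourced):

```python
encounters_per_year = 2e9  # assumed: stranger-stranger encounters, US, per year
violent_incidents = 2e6    # assumed: violent incidents among those encounters

print(f"{violent_incidents / encounters_per_year:.3%}")  # ~0.1% per encounter

# The 99.9% vs 99.8% comparison amounts to one extra violent encounter
# per thousand:
print((1 - 0.998) - (1 - 0.999))  # 0.001
```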
The argument seems essentially similar to theories on the evolution of aging that suggest we age because senescence improves evolutionary fitness when the environment changes on a comparatively short timescale [1]. The world changes, therefore a race to the bottom arises for ways to improve fitness that also happen to make life miserable and short for individuals. Miserable and short outcompetes hydra-like immortality or naked-mole-rat-like negligible senescence in the vast majority of niches.
I do not think decision making is a good model for this concept. Culture is. In particular:
Traditionalism vs. Progressivism
as cultural influences: the former acts to not overfit current conditions, while the latter tries to fit current conditions. Both influences have been critical for human survival.
Control theory is a better model for Traditionalism vs Progressivism, in my opinion. Traditionalism works to add damping to the system of cultural expectations, preventing overshoot and oscillation, while Progressivism provides the proportional response necessary to track the cultural ideal.
Crucially, this shows the difference between Traditionalism and just having a lot of cultural context. Culture itself is just weight - more of it is harder to change, but it's the same difficulty no matter how fast you go. Traditionalists, on the other hand, will fight you harder the more quickly you're changing cultural facts.
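As a sketch of the analogy (the gains and dynamics are arbitrary illustrative choices): model cultural norms as a mass driven toward an ideal by a proportional "Progressivism" gain and resisted by a velocity-dependent "Traditionalism" damping term, with accumulated culture as the inert mass.

```python
def simulate(kp, kd, mass=1.0, target=1.0, steps=200, dt=0.1):
    """Second-order system: x = current norms, target = the cultural ideal.
    kp = 'Progressivism' (force proportional to distance from the ideal),
    kd = 'Traditionalism' (force against the *rate* of change),
    mass = the sheer weight of accumulated cultural context."""
    x, v = 0.0, 0.0
    history = []
    for _ in range(steps):
        force = kp * (target - x) - kd * v
        v += (force / mass) * dt   # semi-implicit Euler integration
        x += v * dt
        history.append(x)
    return history

underdamped = simulate(kp=2.0, kd=0.1)  # oscillates, overshoots the ideal
well_damped = simulate(kp=2.0, kd=2.0)  # settles smoothly onto the ideal
print(max(underdamped), max(well_damped))  # ~1.9 overshoot vs ~1.0
```

Increasing the mass alone slows both runs equally; only the damping term pushes back harder the faster x is changing, which is exactly the distinction drawn above.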
Good observation. It makes me think also about the tendency of computers and the internet to actually slow us down in our work!

Their general-purpose nature means our minds take on the additional complexity of context switching as we use our tools for multiple simultaneous tasks; that same generality, of course, means they can be quickly adapted to new contexts. Contrast this with specialist tools which, once learned, provide significant increases in efficiency in the specific context to which they have been adapted (including the benefit of increased concentration owing to the lack of distraction!) but cannot always be re-engineered easily to suit new contexts.
To be honest, that's the fault of current UX trends: making everything so dumb that people can be "proficient" in a program within the first few seconds of seeing it for the first time. A general-purpose computer is perfectly able to switch between many highly specialized tools on the fly. It's just that powerful, efficient tools are rarely built nowadays - they've been replaced by pretty-looking toys that sell fast.
Interesting argument presented in the paper, though I wouldn't frame it as "we've evolved to be inaccurate": it's really that the world can be so suddenly unpredictable that setting up strong, working paradigms of decision making in the short term can be worse in the long run than just winging it.
It's worth considering, especially in light of the authors' suggestion that we use computer/human decision-making systems to improve performance, as the world is still unpredictable, and can still break our paradigms. The biggest danger of setting up a good system to improve knowledge is that you'll think you've got a perfect one--we could improve our rationality and decision-making with computers for a long time, before an unexpected case cracks the system, and we're left floundering.
Maybe a hybrid computer/human solution will fare better.
This is the purpose of technology. To enhance our skills. From fire to machine learning, tools are built to make our lives easier and help us make decisions better.
In the end we're better off with more empirical computing in our decision loops. Eventually, hopefully, we totally replace ourselves with better, more consistently optimized decision-making systems.
While they do make our lives easier, machines have also brought down our physical abilities. In modern countries, people are usually not as strong, have less stamina, and their immune systems are less potent.

We also see a decrease in the ability to focus as people use lots of devices with screens.

I'm quite concerned about what AI is doing to our brain skills: pattern recognition, memory, data processing and summarization... All of that, left untrained, could regress.

We already see a lot of people going to the gym, doing artificial exercise to keep their bodies in shape. And now we have those popular "train your mind" games on phones and consoles.

It's an issue IMO, but not an easily solved one. Who doesn't want innovations that bring comfort, productivity, and increased lifespan?
A lot of those problems are created by technology because it's designed (as in, on purpose) to create them! There are strong commercial interests to keep you glued to various devices with screens on them. If we can somehow overcome that problem, I'm confident that the distraction/productivity loss issue will simply disappear.
The problem is, what I described doesn't require coordination! It's in the best short-term interest of each company involved to glue you to their product. Therefore, it's the default outcome. Coordination is required to achieve something else.
Technology has no purpose. It is a natural phenomenon that arises without any in-built ethics or direction. Technology can manifest as a tool (to amplify human potential) or a machine (to replace human labour). The effect of a technology can be influenced broadly, for example by who owns and promotes it (open-source vs. proprietary models) and which groups in society it is put to use to benefit.
> Technology can manifest as a tool (to amplify human potential) or a machine (to replace human labour).
I don't see a material distinction in here. A machine is simply a more effective tool - effective enough to do the work mostly by itself. The "replace human labour" part is a consequence of the economic systems we have.
Yeah, actually it does: the purpose is whatever its users and builders use it for. A stick has any purpose its user can think up for it - starting a fire, hitting an animal to kill it, supporting a mud wall, and so on...

Technology arises out of a sense of purpose in its user. This is a pretty common understanding in the philosophy of science.
I think what you say is basically equivalent - technology has no inherent purpose in it besides the one we give to it; that purpose itself is a feature of the human user, not the feature of any given technology (i.e. it doesn't "stick" to an object).
Personally, I see technology as the way to extend the power of our (individual and collective) will to make something happen.
The takeaways:
- The path to success is through NOT trying to succeed
- To achieve our highest goals we must be willing to abandon them
- It is in your interest that others DO NOT follow the path you think is right
What we perceive as massive gains does not necessarily fit the evolutionary model of massive gains. Many types of spiders can catch a hummingbird in their web, but they can't eat it.
Put another way, animals have upper bounds on the positive value they receive from risks. In human terms the first billion is worth vastly more than the second.
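In textbook expected-utility terms (this framing is mine, not the article's): with a concave utility such as log wealth, a gamble can be positive in expected dollars yet negative in expected utility - one way to formalize "the first billion is worth vastly more than the second":

```python
import math

wealth = 1e9
# A coin flip: double your wealth, or lose 90% of it (invented numbers).
outcomes = [(0.5, 2.0 * wealth), (0.5, 0.1 * wealth)]

expected_value = sum(p * w for p, w in outcomes)
expected_utility = sum(p * math.log(w) for p, w in outcomes)

print(expected_value > wealth)              # True: +EV in raw dollars
print(expected_utility > math.log(wealth))  # False: a log-utility agent declines
```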
There are so many factors and so much chance influencing the result of every decision, and you can only ever have so much information and perspective - so you can only do the best you can.
The map is not the territory. We work with models of how the world works when deciding things, not the actual world, so it's bound to not be 100% accurate.
Computers do the same; although they can crunch a lot more data than we can, they still work with models of the world, not the world itself.
These address the question of whether we are inaccurate. The paper behind the article (http://journals.plos.org/plosone/article?id=10.1371/journal....) addresses the question of why: because evolution favors under-fitting in a world with punctuated equilibria.
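A toy version of that thesis (all numbers invented): the agent that tracks the slow drift perfectly is, at the moment of reversal, the one standing furthest from the new state of the world, while the agent that "inaccurately" keeps predicting the long-run mean never makes a lethal-sized error.

```python
def environment(steps=300, drift=0.03, crash_every=100):
    """Slow upward drift, punctuated by sudden reversals of sign."""
    e, series = 0.0, []
    for t in range(steps):
        e = -e if (t > 0 and t % crash_every == 0) else e + drift
        series.append(e)
    return series

LETHAL_ERROR = 4.0  # a single-step error this large means death

def survives(predict, env):
    previous = 0.0
    for actual in env:
        if abs(predict(previous) - actual) > LETHAL_ERROR:
            return False
        previous = actual
    return True

env = environment()
tracker = lambda prev: prev + 0.03  # accurate: extrapolates the drift
hedger = lambda prev: 0.0           # inaccurate: always guesses the mean

print("tracker survives:", survives(tracker, env))  # False: dies at the crash
print("hedger survives:", survives(hedger, env))    # True: never far enough wrong
```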
For survival reasons, the brain needed to evolve to make both quick decisions and well-thought-out decisions.

If a predator appears in front of you, you might not be able to give much thought to the decision of what to do.

If you're a nomad during the ice age and you need to collect food and prepare a shelter, or track prey over long distances, you probably need to give it some thought.
I don't know if we're really that inaccurate, or if the complexity of the problem is vastly underestimated. If making accurate decisions were so simple, why haven't we built AI yet?