The Difference Between Rationality and Intelligence (nytimes.com)
136 points by frostmatthew on Sept 17, 2016 | 96 comments



I have known many clearly intelligent people who held irrational opinions quite strongly. For example, huge believers in specific, concrete stock market predictions based on some Fibonacci-sequence nonsense, to the point of actively losing six figures through multiple failed predictions over a decade+ and still believing.

So clearly, they're distinct.

But the question about the bank teller doesn't seem like the strongest indicator. It seems more like a parlor trick that relies on our "system 1" brain short circuiting before our "system 2" kicks in.

Better indicators would be ones that explore in-tribe vs. out-tribe perception of facts, or that attempted to quantify one's awareness of one's own blind spots.


> the question about the bank teller doesn't seem like the strongest indicator. It seems more like a parlor trick that relies on our "system 1" brain short circuiting before our "system 2" kicks in.

The question about the bank teller is a parlor trick that exploits normal language processing, specifically the Gricean Maxim of Relevance ( https://en.wikipedia.org/wiki/Cooperative_principle#Maxim_of... ). The maxim specifies that if somebody takes the trouble to point something out to you (say, that someone you've never met used to be a very politically-active left-winger), what they pointed out must be relevant to the conversation.

It's basically another exercise in https://xkcd.com/169/ -- the experimenters aren't grasping the lesson that "communicating badly and then acting smug when you're misunderstood is not cleverness".


> if somebody takes the trouble to point something out to you (say, that someone you've never met used to be a very politically-active left-winger), what they pointed out must be relevant to the conversation.

Even if the brain believes that all things that are pointed out are relevant, that still doesn't explain this problem with bad communication.

There are two statements, P and Q, and you have a question of what's more probable: statement P or the conjunction P&Q. The experimenter (communicating badly) makes you believe that Q has a very high probability. However, it is still logically impossible for P&Q to be more probable than P.
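
To put numbers on it, here's a minimal sketch in Python (the 0.05 and 0.99 are made-up probabilities, and independence is assumed only to keep the arithmetic simple; the inequality holds regardless):

    # Conjunction rule: Pr(P and Q) <= Pr(P), no matter how likely Q is.
    p = 0.05             # hypothetical Pr(P), e.g. "Linda is a bank teller"
    q = 0.99             # hypothetical Pr(Q), e.g. "Linda is a feminist"
    p_and_q = p * q      # assuming independence, for simplicity
    assert p_and_q <= p  # always holds, since q <= 1
    # Without independence it still holds: Pr(P and Q) = Pr(P) * Pr(Q|P) <= Pr(P)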


I relate the following discussion from memory:

"Long ago", a psychologist did the following experiment on children of different ages: he'd lay out one line of M&Ms with broadly uniform spacing, and another line, longer, with the same spacing, and ask the child "which line has more M&Ms?" After they correctly indicated that the longer line had more M&Ms, he'd adjust the spacing between the M&Ms in the shorter line such that it was now longer. Then he'd ask again, "which line has more M&Ms?" He found that, below a certain age (I hope this finding was approximate), children would indicate the longer (and sparser) line as now having more M&Ms, and above the threshold age they would point to the correct, denser line.

His published papers were viewed as establishing a major threshold in child development, where children started to be capable of assessing reality independently. "Not so long ago", someone decided to replicate this landmark finding.

The twist was, in the replication, instead of asking the children "which line has more M&Ms now?", they said "OK, now you can take one of the lines and eat all of the M&Ms that were in it".

And the threshold effect disappeared completely. Children of all ages chose the line with numerically more M&Ms, regardless of linear extent.

When the problem is your experimental design, you're not warranted in drawing conclusions. You've already demonstrated that you're not qualified to speculate on them.

> that still doesn't explain this problem with bad communication.

> There are two statements, P and Q, and you have a question of what's more probable: statement P or the conjunction P&Q. The experimenter (communicating badly) makes you believe that Q has a very high probability. However, it is still logically impossible for P&Q to be more probable than P.

Your objection only makes sense if (1) human-to-human communication consists of messages that precisely specify their intended meaning, and (2) people believe that (1) is true. (1) and (2) are both false.


FWIW, all human communication is this messy, so the ability to find rational answers even when confronted by this messiness has quite a bit of value.

And if we can be trained to find rationality in the face of inexact human communication, all the better.


This. I think the Linda problem is all based on the fact that when you tell people "which is more likely true?" they hear "which one is more likely to be the exact description of reality?"

It's not irrationality, just miscommunication.


The problem is that even if people hear it like this, P is still a more likely exact description of reality than P&Q.


Nope! When your goal is an exact description, omitting information is just as bad as being wrong.


>For example, huge believers in specific, concrete stock market predictions based on some Fibonacci-sequence nonsense, to the point of actively losing six figures through multiple failed predictions over a decade+ and still believing.

And there's the problem. We don't know what rationality or intelligence really are, but there's an assumption that they have something to do with modelling our current and future environment accurately and reliably.

The subject is a mess of competing notions that aren't necessarily related. Does rationality mean being able to understand and manipulate abstract concepts and symbols using consistent rule sets ("market predictions", "Fibonacci") or does it mean making reliable winning predictions from limited information?

Even the original feminist bank teller example is suspect. If you assume that the set of feminist bank tellers is smaller than the set of bank tellers, then the "rational" answer makes sense.

But it's actually reasonable to read the question as "Given that one particular person is a bank teller - because both choices tell you she is - and has a history of activism, is she also likely to be a feminist?"

One reading is purely mathematical, but the other makes perfect sense in a social context if you assume that you're talking about a real bank teller with a real history.

If you were introduced to someone at a party and given that background information, which interpretation would be more likely to have predictive power?


Yep. Experiments in "rationality" and "irrationality" usually seem to rely on the experimenter rigging up an experimentally normative behavior (a "right answer") which differs, often dramatically, from the ecologically normative behavior (the correct way to act in the subjects' everyday lives). The experimenter then gets to publish an "interesting" paper showing that very clever people behave "wrongly" because they didn't leap to what were actually quite improbable interpretations of events (if you hadn't seen the experiment before).

It makes it sound a lot like the Hollow-Face Illusion (https://en.wikipedia.org/wiki/Hollow-Face_illusion), which we are subject to precisely because our expectations accurately reflect normal life.

We need more experiments on rationality in which the "irrational" behavior is guaranteed not to be Bayes-optimal given the priors of a normal human life.


> people tend to make decisions based on intuition rather than reason.

I think these types of articles have too narrow a definition of rationality. I think that emotions, feelings and intuition are subconscious processes but still part of the faculty of reason. The mistake is to think that rationality = conscious logical deduction, but anyone who has solved some complex problem knows the experience of working on a problem for days, weeks, months or even years and then one day the solution just hits you, usually when you aren't actually working on the problem.


Are you suggesting that reason include intuition?

I don't believe that would ever be a helpful definition of the word.

---

Reason is conscious; intuition is subconscious.

Of course, they're related: reason can train intuition, and intuition can catalyze reason. But I disagree that a conversational distinction is inappropriate.


I'm not suggesting it, I outright stated it in my comment. My point is that potentially logical connections can (or must) be made subconsciously and that that is an aspect of how the faculty of reason works.

Here is an example you experience every single day and even right now if you type a reply to my comment. If you want to express an idea in words you don't consciously and methodically select words from a dictionary, no. The words come automatically with nary a thought, you just issue the command to summon the words to express your idea.

Without this ability you (and more specifically reason) could not function at all, ergo, intuition and subconscious processes must be a component of reason, reason being our faculty for processing information.


I would think no rational being would argue that conscious psychological phenomena are disconnected from subconscious phenomena. It is eminently rational to note that we do not have a complete grasp of the inner workings of human consciousness and mind.

> Here is an example you experience every single day and even right now if you type a reply to my comment. If you want to express an idea in words you don't consciously and methodically select words from a dictionary, no. The words come automatically with nary a thought, you just issue the command to summon the words to express your idea.

> Without this ability you (and more specifically reason) could not function at all, ergo, intuition and subconscious processes must be a component of reason, reason being our faculty for processing information.

To claim an equivalence between reason and intuition based on that reasoning would necessarily require that you also include irrational conscious behavior. Which I hope will show you that your position is, in effect, an empty proposition that sheds no light on the matter.

Intuitive thought processes and rational thought processes, in my mind, have the same relationship as music and mathematics. Clearly there is a common ground, but they are distinct.

Less whimsically, one may arrive at a decision based on intuition that is subsequently validated by events. However, this arrival does not permit a retracing of steps to guide future decisions. Of course it is possible to extract a general law or principle from the experience, but that is not a given.

Reason, on the other hand, is the application of extant, and formulation of new, systematized understanding. However, that should not be taken as a measure of the superiority of reason to intuition, for it is equally true that intuition can at times give us wings to go where reason cannot take us.


I think the words we use to think about how the mind works help limit our understanding.

For example, we refer to that part of thought, or intuition, or whatever, that we don't do consciously (whatever that even means) as occurring subconsciously. If I understand correctly, when people say 'subconsciously' they are referring to what happens outside of our awareness; as in, like you say, we don't sit there with a dictionary or a bunch of idea reference books and flick between the pages in order to form a thought.

But why the term subconscious?

sub-

1. a prefix occurring originally in loanwords from Latin ( subject; subtract; subvert; subsidy); on this model, freely attached to elements of any origin and used with the meaning “under,” “below,” “beneath” ( subalpine; substratum), “slightly,” “imperfectly,” “nearly” ( subcolumnar; subtropical), “secondary,” “subordinate” ( subcommittee; subplot).

I conjecture that using the word subconscious causes a linguistic trap that results in us undervaluing what goes on in our minds.

Also, I think what most people think of when they think of thinking is merely the subvocalisation[1] or internal dialogue we run in our minds. Here I agree with what you've written: actually, all logical connections are done with intuitive reasoning.

NB I'm a complete armchair amateur at philosophy / psychology / neurology. Of course everyone's got a consciousness so we're all experts. Right.

1. https://en.wikipedia.org/wiki/Subvocalization


It's likely that Descartes would've disagreed with you on this point.

http://plato.stanford.edu/entries/rationalism-empiricism/#1....

    Intuition is a form of rational insight. ....

    The Intuition/Deduction Thesis: Some propositions in a particular subject area, S, are knowable by us by intuition alone;
    still others are knowable by being deduced from intuited propositions.


Good. That makes me even more confident that I am right. I disagree with some of his math so it's a draw anyway.


Solid reasoning.


It's Bayesian, but the rhetorical trick is that I didn't tell you how much it increased my confidence. ;-)


I agree that those are useful words to distinguish. I also think it's worthwhile to have a thing called "rationality" which maybe means "doing the best possible thing with your brain given your goals," which has to include both of what we're calling reason and intuition [1].

But I wanted to say some words here to flesh out the subject in a little more detail.

"Reason" and "Intuition" may be separate, incomparable types.

I think of Reason sort of like a technical spec -- a description of how a system should work, which leaves the implementation details as an exercise for the reader. I think most people use the word Intuition, meanwhile, as a catch-all to mean "anything that's not consciously executed logical reasoning (and probably not sensory processing either)."

Underlying all these are the implementation details I mentioned above--the actual "algorithms" our brain is executing. How you actually implement "reason" can make it better or worse, faster or slower, and I've found that people can make significant gains by changing the details of what they are doing with their minds.

More broadly, I think people are capable of having significantly more access to their "unconscious" than most people realize (hence the scare quotes around the word ;)), and further that those nonlogical processes can also be different, and yield better or worse, faster or slower results.

(To justify "significantly more access" a little: consider that 200,000 years ago, no one knew how to do formal logic, but all the required machinery was there between our ears. We did stuff back then that was a primitive type of reasoning just because we had no basis or training in doing better. Today we have decades of training and a culture that supports such things. If you've been educated at all, you've had years of training in reasoning. How much training have you had in introspecting on what we're calling here "unconscious" processing? Probably none or not much! I claim that training in this area can allow a person to have much richer and higher quality "intuitions.")

Anyway, many people sense that the processes they are executing aren't reliable, and some of those come to believe they must rely on what they think of as reason, to the exclusion of their other faculties like emotions. That makes sense, but I think the stronger move is to understand, in detail, what your mind is actually doing, learn what those processes are good for, and where they are weak, and finally change and improve the processes where appropriate.

There's a lot of interesting work going on in this field, and we're just getting started!

[1] I'm the managing director of the Center for Applied Rationality, a nonprofit in the bay area, so I spend an unusual amount of time thinking about this stuff.


Thank you for your reply.

> I agree that those are useful words to distinguish.

Not "words" but "concepts". Words are just the form of our identifications. You could be writing the same concepts (or more generally; ideas) in French (different words) but the meaning remains the same. The precision with which you use your words is a sign of the precision with which you use your concepts.

> More broadly, I think people are capable of having significantly more access to their "unconscious" than most people realize (hence the scare quotes around the word ;)), and further that those non-logical processes can also be different, and yield better or worse, faster or slower results.

A perfect example of my previous comment. The concept "unconscious" should be reserved for boxers knocked out in the ring or a man in surgery under general anesthesia. I like to use "subconscious" for mental processes that I know are going on but I am not consciously aware of. Some of these mental processes can be brought into conscious awareness and some cannot -- I think this is an important aspect of reason. You keep having to use scare quotes because you are groping for this distinction; the subconscious is alive and active and real whereas the unconscious is inactive, dead.

> How you actually implement "reason" can make it better or worse, faster or slower, and I've found that people can make significant gains by changing the details of what they are doing with their minds.

The most creative and powerful ideas occur subconsciously. I inadvertently discovered the interrupted nap trick.[1] I'd fall asleep with my laptop and when it slipped off my belly I'd wake up with new ideas to attack any problems that I was working on.

[1] https://www.fastcompany.com/3023078/leadership-now/how-dali-...


Reason is mostly subconscious, like all our mental faculties. That is not a good distinction. That it has good reasoning behind it is the only good distinction.


Emotions are stored logic loops (sometimes they need to be debugged).


"Reason is conscious; intuition is subconscious."

Conscious or not, it can still be reason.

'Hunger', 'fear', 'lust', 'anger' are all part of the reasonable human calculation.

If you set your mind to just 'conscious' decisions, we'd have been dead a long time ago.

Consider autonomous functions: are you 'reasoning' your heart rate and breathing right now?

And yet, somewhere, somehow, you're 'deciding' to keep your heart at a certain rate while it's warm, and another when you are in need of more oxygen, and another when it's cold.

So autonomous functions are perfectly reasonable.


How would you go about showing that emotions, feelings and intuition produce good results without proving that very idea rationally?

While intuition may be useful to get good results (perhaps it is the subconscious quickly applying some pattern that has worked in the past) there's no way to know a particular use of intuition worked unless you go back later and rationally analyze the results.

And more importantly, there's no reason to believe intuition works at all unless we either 1) understand it on something like a neurological/psychological level (I would not hold my breath waiting for this) or 2) do some statistical modeling of its reliability.


> And more importantly, there's no reason to believe intuition works...

Yes, there is. Humans are alive; they survived evolution, and they used intuition much more frequently than rationality. Logic was invented by the Greeks less than 3,000 years ago, and before that there was no way for rationality to exist. Intuition does work in most cases. That is an apparent fact.


Very sensible point.

But I would emphasize that in order to justify the value of intuition in this case, you had to resort to reason as well as anthropology and evolutionary biology. That last one especially is the product of rational empiricism.

So again it appears we can only trust intuition if we can rationally prove that trust is justified.


Forget reason, rely only on intuition, and there will be no 'working' or 'not working'. Stop thinking and end your problems. (Tao Te Ching, chapter 20) Isn't that what we are all after, having no problems?

Is there any point in time where in reality you are more than sensory signals entering your consciousness? Is everything not a mere illusion of your mind? Letting intuition figure everything out is effortless, so why dwell on illusions?

Thus there is no use for reason.


Reason will give you the math to build the pyramids.

Irrationality will make you think it is a good idea.


Ahh yes, but it is intuition that gave you these words. :)


Intuition and the ability to reason are components of intelligence. But they are distinct. Intuition is certainly not reason (though it could be reasonable) and reason is not intuition (though to some people it comes intuitively).


I agree that thinking of these things in isolation is not that useful; understanding how they interrelate is as important, if not more important, than understanding them in isolation. I also agree that rationality and intuition are not the opposites the article describes them as; the opposite of rational has always been emotional. Intuition plays a part in both rational and emotional decisions; it is orthogonal to that dualism.


Intuition can be tuned by the tools of rationality.

Professional poker players infrequently calculate odds consciously, but they have a very strong intuitive sense of probability and game theory. Study and practice allows them to instinctively recognise the expected value of a scenario. Physicists and engineers develop an intuitive understanding of some very complex and counter-intuitive phenomena.
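
As a rough sketch of the kind of expected-value arithmetic that gets internalized (the pot size, bet size, and flush-draw odds below are hypothetical numbers, not from any source):

    # Should you call a bet while holding a flush draw?
    # 9 outs among 47 unseen cards, one card to come.
    pot, bet = 100.0, 20.0
    p_win = 9 / 47                                   # ~0.19
    ev_call = p_win * (pot + bet) - (1 - p_win) * bet
    print(f"EV of calling: {ev_call:+.2f}")          # ~ +6.81, so calling pays

A pro rarely runs this arithmetic at the table; the point is that study trains the gut to approximate it.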


Intelligence is raw processing power. Rationality is correct software. A fast CPU can run buggy programs just as quickly as it can run correct programs.

Unless you're just trying to show off your Passmark scores, you're better off running a correct program on a Raspberry Pi than a buggy program on a quad-Xeon rig.

Many intelligent people waste their brain cells trying to argue that the moon landing was fake, vaccines cause autism, etc. just as often as powerful computers waste electricity running malware. That doesn't make them any less intelligent. They're just really good at executing poorly written software as quickly as possible, which often helps disguise the fact that their software is a pile of crap in the first place.


I would argue that the test was irrational:

"(A) Linda is a bank teller or (B) Linda is a bank teller and is active in the feminist movement. Eighty-five percent of the subjects chose B".

Many people could have assumed that the presence of choice (B) automatically means set (A) = NOT set (B).

Set (B) being more precisely defined, it can easily be assumed that anyone in set (B) is not in set (A). The participants can easily assume that a person can be in only 1 set (no rules were defined).


So what you're saying is the participants went with their intuitive feeling about how the question might be structured as opposed to applying deliberate and effortful analysis/reasoning to understand what was being asked.


Such reasoning would not allow reading the question-asker's mind, so no. You can rationally arrive at either interpretation of the question.


As you yourself pointed out, the idea that A = !B is an assumption. It's not supported by anything on the page whatsoever. In fact, it's explicitly rejected by what's on the page. Simply reading and reasoning about the answers, which are completely unambiguous, would lead one to conclude that B is a subset of A. Therefore it must be false that A = !B.

There's no mind reading necessary, only rational thought.

Listening to internally-generated assumptions without bothering to fact-check them via explicit reasoning is intuition.


Language is naturally ambiguous and dependent on context. It's not lack of rationality that causes people to interpret questions in different ways. You can only show lack of rationality in a situation like this if they provide the wrong answer and understand the question.


The point isn't that people aren't capable of being rational. If they read the question carefully and think hard about it, of course they'll be able to figure it out. But that takes a lot of conscious effort.

The point is that people aren't often willing to put in that effort, so they default to intuition. What's more, they might not even realize they're doing this.


But there is a huge difference between 'intuition about conversation' and 'intuition about the state of reality'. The former is not tied to rationality as much as practice at fighting through obtuse language. The latter is much closer to telling you whether people are thinking rationally by default.

Compare this comment (https://news.ycombinator.com/item?id=12524799), where the kids already know which pile has more candy, but they don't understand what the weirdly-formatted question is asking.


Interesting experiment. Based on thaumasiotes' description, I don't see any problem with the experimental design, only with the conclusion drawn from it. Something changed between the younger and older kids' ability to understand the question that was asked, and that in and of itself is interesting.

I object to the characterization of these experiments as "communicating badly" or relying on "obtuse language". The questions asked in both experiments are exceedingly straightforward and unambiguous. If "which is longer" and "which is more likely to be true" fall below the bar for designating a question as confusing, then so does every other question imaginable.

Conversation is an integral part of the "real world", and it's almost always far more ambiguous than the examples we're examining. Yet we're still expected to parse and decipher conversation in order to survive in society: to thrive at your job, to vote for the best candidate, to get through school, etc.

It's in no way useful to design an experiment that attempts to avoid conversation in a world that runs on conversation. There exist people who can consistently pick the correct answer in all of these experiments, no matter how the question is worded. And there exist people who will get it wrong unless the question is worded perfectly. Who is more rational? Who is better at grasping "the state of reality", as you put it? Who would you rather have on your team at work?


There are interesting things you can ask about why people would understand or not understand a question.

But if they don't understand it, you can't conclude anything at all about whether they know the answer.

It doesn't matter if a question is formatted in a "straightforward" way. Nothing is very straightforward to a small child, and in normal conversation "A or A^B" is usually a mistake.

Honestly, people say things they don't mean all the time. A literal interpretation of conversation has a lot of potential to hurt you. It's also irrational. The rational goal is to figure out the most likely intended meaning(s). It's not to figure out what the meaning would be in a counterfactual world where people use language in a more logical way.

> There exist people who can consistently pick the correct answer in all of these experiments, no matter how the question is worded. And there exist people who will get it wrong unless the question is worded perfectly. Who is more rational?

If people figured out that the experimenter was asking a nonsensical and purely logical question, then that's a valuable skill. But it's not really connected to whether they are generally rational or irrational. It's also not connected to whether they understand basic probability.

> Who is better at grasping "the state of reality", as you put it?

In this experiment, you can't tell. When it comes to the probabilities about Linda, people's understanding is a total mystery if they didn't know what the question wanted.

You can figure out how they interpret questions on one axis, but you would need other tests to figure out why. They might be more rational, or they might be more literal, even to an irrational level.

> Who would you rather have on your team at work?

Depends on why they got the answer right.


I think we disagree on two big things, and the rest is fluff.

First, I still don't think it's fair to characterize the questions ("which is more likely" and "which has more M&Ms") as "nonsensical and purely logical." I wouldn't even call the questions atypical. We contend with simple questions like this all the time in society. The intent behind both questions is unambiguous, and there are no reasonable alternative interpretations, unless you assume the question asker is simply mistaken.

Second, we're on different pages about what the experiment is evaluating. I don't claim that the people who answer incorrectly are less capable of rationality, nor that they don't understand probability. What I believe is that they are less likely to recognize when it's best to apply their logical toolset. As a result, they're more likely to use intuition to answer questions that are best answered via careful analysis. This appears exactly like failing to understand the question itself.

To wit: The younger kids used the length of the line to answer "which has more M&Ms", but the older kids immediately recognized that it's better to use a more in-depth analytical tool: counting. I hypothesize that the younger kids are simply not as good (yet) at knowing when to apply this tool. Intuition is easier and less effortful, thus it's the default unless we make ourselves think harder. This pattern extends to adults, too. How many people think we should be tougher on crime because it "feels" like there is more violence today than ever before, yet don't even consider that they need to look at some actual numbers to justify this conclusion?


It's specifically a question of "which is more likely, A or A^B" that is weird. The typical "which is more likely" has non-overlapping categories. It doesn't even need to be a mistake. If I ask whether you want a sandwich or a burger, it's obvious that 'sandwich' actually means 'non-burger sandwich'.

I'm claiming that they are not using intuition instead of analysis. Or at least, you can't tell that from their answers. There are logical reasons to interpret the question as non-overlapping sets. They could be performing a very careful analysis and still pick the second option.
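
A minimal sketch of that analysis, with made-up numbers (the 0.05 base rate and 0.9 feminist probability are hypothetical, and independence is assumed for simplicity):

    # Exclusive reading: option A is heard as "bank teller and NOT feminist".
    p_teller = 0.05                            # hypothetical base rate
    p_feminist = 0.90                          # assumed high, given the description
    a_exclusive = p_teller * (1 - p_feminist)  # "teller and not feminist"
    b = p_teller * p_feminist                  # "teller and feminist"
    print(a_exclusive < b)                     # True: picking B is coherent

Under that reading, choosing B can be the output of careful analysis, not a failure of it.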


I only disagree with your last sentence. There's no way to perform a careful analysis of option A vs option A+B while maintaining the assumption that they are non-overlapping sets. Simply reading the answers proves that assumption wrong. Thus, it seems way more likely that the explanation is a lack of thinking/analysis... i.e. intuition. People are answering the question they expect instead of the question that's in front of them, because that's easier: http://lesswrong.com/lw/9l3/the_substitution_principle

But you are right, we can't know that from their answers alone. There are numerous possible reasons for picking the wrong answer, so further experimentation is required if we really want to know why.


If this interests you, check out https://intelligence.org/rationality-ai-zombies/.


Yudkowsky, the man who rejects science for Bayesian Reasoning, would like to inform everyone that high school never understood him, and has Absolute Faith in the truth of his utility functions...

Also, the strange conviction that logic just stopped changing in the 1800s.


If that interests you, ask for a chart of the real-world successes of LessWrong readers compared to non-LW readers.

Also about the mosquito nets versus AI dialectic in Effective Altruism.


As a LessWrong reader I agree that real-world success should be the defining metric, but want to point out that you can't just compare "LessWrong readers to non-LW readers" - if LW readers are not successful IRL, then it's entirely possible that there's some personality factor that both makes you not successful and makes you read LW.

Probably the thing to measure is LW vs. other productivity blogs with a similar audience.


> Also about the mosquito nets versus AI dialectic in Effective Altruism.

There's a reasonable debate here about short-term and long-term thinking. "How to save the most lives the most quickly" is not the only reasonable altruistic goal to optimize for. "How to save the most lives over the long-term, and save them permanently rather than just delaying a current inevitable" is worth consideration. And there are already far more people willing to support mosquito nets, and far too few willing to support AGI research, or SENS for that matter.


"AGI research" in practice means "give money to MIRI", whose track record of results on pretty much any measure is less than impressive.

It is really (and literally) "donate to stuff that demonstrably works" versus "donate to MIRI, with its terrible track record, to do something supported primarily by Pascalian arguments."

c.f. http://www.vox.com/2015/8/10/9124145/effective-altruism-glob...

Yudkowsky may have (probably) coined the phrase "effective altruist", but people who aren't living sci-fi dreams are in EA now, and asking rather pointed questions.

Never confuse "Effective Altruism" and "altruism that is effective". Whatever "effective" actually means in the given context.

Getting back on-topic, there's still no evidence - e.g., a track record of results - that anything within a mile of MIRI/LW is actually any good at all for real-world effectiveness, and things like mosquito nets versus AI count as evidence against it. LW sells a sort of "rationalitiness": it sure feels like rationality.


I wonder sometimes whether rationalization came before logic. Perhaps using logic in a rigorous way (as in a mathematical proof) evolved out of persuasive techniques that are intended to get people to believe you, regardless of whether you're right?

Looking for logical flaws in arguments might have arisen as defense against persuasive techniques.


You'd probably be interested in the research of Hugo Mercier and Dan Sperber on this very point: "People Argue Just to Win, Scholars Assert" (http://www.nytimes.com/2011/06/15/arts/people-argue-just-to-...)


This is really interesting to me. I've always associated rationality with intelligence. Naturally this leads me to wonder how people with high intellectual aptitude have strangely inconsistent opinions.

But after reading this article, intuitively it makes sense to me. I'm eager to read the linked study to learn more. I feel as though the researchers' holistic approach (decoupling rationality from intelligence and quantifying it as its own measure, "R.Q.") is more precise than assuming the two correlate.

One of the things I'd be very curious about, assuming this study reproduces well, is whether or not it can be reproduced for other aptitude tests. We already have EQ, which will make three measures if this catches on. Can we keep decoupling traits from intelligence to measure them more granularly and accurately? What else could we move into its own category?


The capacity for abstract thought and the decision to classify it are two different concepts, but easy to conflate. As we form classifications, myelin sheaths form around our neurons, and abstracting concepts - even ones apparent to others - becomes more challenging.


> "myelin sheaths form around our neurons, and abstracting concepts"

Roll me one while you are at it


That is not how that "and" works.

'Myelin sheaths form around our neurons. -and- Abstracting concepts becomes more challenging.'


The "and" wasn't the point.

I don't see much in the literature regarding retardation of abstract thought due to the formation of myelin.

Quite the opposite.

https://en.m.wikipedia.org/wiki/Myelin

"Myelin is a fatty white substance that surrounds the axon of some nerve cells, forming an electrically insulating layer. It is essential for the proper functioning of the nervous system."

Further

"Myelination continues through the adolescent stage of life."

And finally

"The main purpose of a myelin layer (or sheath) is to increase the speed at which impulses propagate along the myelinated fiber"

I'd say myelin is a pretty good thing.

Pat Wolfe, in "Brain Matters: Translating Research into Classroom", goes on...

https://books.google.nl/books?id=BzccMxS56LAC&pg=PA79&lpg=PA...

"Neurons in the frontal lobe receive a myelin sheath late in development, as evidenced by the increased ability of children to engage in higher-level and more abstract thinking."

I'm really stretching to understand how to interpret the conjunctive here


Ty


Maybe I missed your point, but as far as I am aware myelination is a very good thing.


Myelination is a double-edged sword. We see in judo subjects a dramatic increase in reflexes and competitive advantage after substantial practice. On the other hand, similar repetition in other areas can lead to difficulty forming other pathways. I also believe myelination is responsible for the gradual decrease in appreciation of one's favourite songs.

In other words it can make things faster, but limit capacity for change and reduce broader neurological firing.

Sorry I haven't explained it very well - on the run with a phone.

I would strongly encourage a read of Pocket Guide To Interpersonal Neurobiology by Daniel Siegel, who touches on the neurobiology and broader aspects of the impact in an accessible narrative.

The kernel of what I seem to recall driving my original comment was that myelination occurs at deep levels during what we call the formative years, but if not adequately developed, people can be or become susceptible to various negative psychological traits such as addiction, susceptibility to brainwashing, poor decision making, and lack of critical thinking. As Max Planck put it, people don't change their minds to accept an idea; rather, people die and everyone that's left just accepts the idea as true. A large part of what he was getting at is, I believe, a direct consequence of myelination, namely the formation of faster but hard-to-change pathways in the brain.

I hope that explanation helps.


I often muse on the difference between the

    logical
    rational
    sensible
    wise
Throw in

    freethinker
    humanist
    skeptic
    atheist
and a few more.

In too many cases, people conflate all of these.


Credit Suisse did a write-up on some of the elements of RQ.

https://doc.research-and-analytics.csfb.com/docView?language...

It's based on a confidence calibration test available here: confidence.success-equation.com


I wish there were more info about this computer training to minimize cognitive bias. Wouldn't mind trying that out.

The distinction between rationality and intelligence (I usually think about it as rationality and logic) is an important one that is rarely made. Economists especially do a bad job of this, despite being the sort of people who should know better. Thanks to cognitive biases, limited information, and costs associated with getting past those two hurdles, people are rarely logical. But we are typically rational; we make the best decision available given our beliefs, evolved instincts, limited available information, etc. That can result in illogical behavior, but hardly unpredictable/erratic behavior. Understanding that people are maximizing utility based on a function that includes lots of tricky psychological factors--not just pure logic--is helpful in empathizing with others.


Rationality is logical behavior.


Interesting. Had not realized rationality was learnable.

Another leading contributor to opinions that others may consider to be irrational is existing beliefs. I am not a scientist but I am guessing belief systems are a bigger factor than irrationality in most cases where we generally label others as being irrational or unintelligent.


Where can I get my hands on this computer training?


I don't think they should call it "rationality"; it's kind of a loaded term. I think it would be better to call it "critical thinking".

Edit: Not sure why I was downvoted. I referred to this part:

"R.Q. would measure the propensity for reflective thought — stepping back from your own thinking and correcting its faulty tendencies"

That's almost a perfect definition of what critical thinking is - being critical of your own thought.


Is that game available for download? (Missing: The Pursuit of Terry Hughes)



I'd draw a distinction between intellect and intelligence. It's quite possible to have a powerful intellect and still make poor decisions, i.e. be stupid.

Intelligence is about making good decisions. Intellect can aid in that, but it's not the only factor, and after a certain level it wouldn't even be the primary factor.


Irrationality rules the world.


The article presents the following experimental premise:

  “Linda is 31 years old, single, outspoken 
   and very bright. She majored in philosophy. 
   As a student, she was deeply concerned with 
   issues of discrimination and social justice, 
   and also participated in antinuclear 
   demonstrations.” 

  Then they asked the subjects which was more 
  probable: 

  (A) Linda is a bank teller 

  or 

  (B) Linda is a bank teller and 
      is active in the feminist 
      movement.
This example is bullshit, for several reasons.

We have several known personality attributes provided in the example, and all of them relate to the subjective opinions of an individual and are designed to provide belief in the potential for associations with similar political alignments.

Then we're provided with 2 choices, and each choice couples a previously unknown detail to the individual described, regarding occupation, with no opportunity to exclude the occupation.

We are asked to make an assumption about the individual, based on previously provided information, and guide the formation of our assumption with intuition.

According to The Letter Of The Law, the example asks the respondent to parse probability ONLY, and then penalizes according to the transitive property of equality, because technical interpretations of probability state that, when information has not been previously presented, an individual trait in isolation is more probable than a coupling of rare traits.

  Which one is more probable? Oh wait, you're 
  wrong because you misinterpreted our 
  context-sensitive definition of the word 
  "probable." You lose.
According to The Spirit Of The Law, the example appears to present the respondent with a set of details, and prompt the respondent with a request to parse the details and demonstrate a display of reading comprehension, such that they show they've observed the relevant details, and drawn a conclusion by associating multiple cultural norms of common political alignment.

  With the presentation of choice B, and
  based on other personal details about
  Linda, do you feel it likely that Linda 
  could be a feminist? Keep in mind that 
  we've offered clues to her political 
  alignment, and these may play a role 
  in the correct answer.
The example reads as:

  Given: [0, A, 2, C, 4]

  Is [E] likely?

  or

  Does [E, 6] make more sense?

But claims to present:

  Given: [0, A, 2, C, 4]

  Which is most probable?  

   > [$]

  or

   > [$, 99]
So, the example is an experiment in providing a loaded question, and then changing the context of expected interpretation, and then declaring proof that people are prone to misinterpretation.

The example is like asking someone if they were happy about who won The World Series, and then telling them you're not inviting them to a soccer game, because of their opinions on baseball.

The example the authors have provided is designed to promote assumptions, without providing adequate context for expectations, and that is fucking stupid.


Your reaction is exactly the reason the question was constructed that way. A lot of the available information would lead you to conclude that the provided details were important and that they support a specific option being more likely.

But if you rationally consider the options, it's apparent that option B can't possibly be more likely than option A, regardless of the information presented, because option B is by definition a subset of option A. It is not possible for Linda to fit option B but not option A, so option B can't possibly be more probable.

The fact that it's a leading question designed to promote assumptions is not a flaw; it's the whole point of the experiment. Even intelligent people are supposed to be led to the wrong conclusion because they try to analyze all the available information. But rational people are supposed to recognize that the presented information is irrelevant and that they can pick the right answer even if they don't know anything about Linda.

In the interest of full disclosure, I'll mention that I had exactly the same reaction regarding the quality of the question. It was only after some consideration that I realized this may have been intentional on the part of the people conducting the experiment.


Some amount of people probably assume that asking "What's more likely, A, or A and B" intends to ask for a comparison between A^~B and A^B, not simply A and A^B, in which case it would be an error in communication rather than an error in rationality.


The premise of the experiment is akin to considering whether or not people are prone to being swindled by a three card monte con-game.

The premise of the example only demonstrates a susceptibility to a situation where the individual is not expecting to be judged based on technicalities.

Technically, in a three card monte game, on the street, you have no assurance that the dealer is operating the deck with integrity. Technically, on the street, you have no assurance that other players are not collaborating with the dealer.

Does this prove that humans are often innately irrational? Maybe insofar as any other parlour trick does.

The fact that a bystander should know their own estimate of the likelihood of Linda's occupation being bank teller will be poor is masked by its apparent irrelevance to the rest of the presented scenario. A bystander's estimation of her feminist alignment, by contrast, is anticipated and intended.

When the bystander chooses option (B), the assertion that they have no insight into whether Linda is a bank teller or not suddenly becomes the defining aspect of the test.

So now we've proven that given an unexpected context, a bystander is surprised by a sudden ambush within that context.

While such nuances may be interesting on a much grander scale, in most cases the experiment is not framed that way (as a design to misdirect the individual), and certainly the authors of this op-ed make the same mistake in pointing at the idea that a bystander should be expected to know that they have no way of knowing whether or not Linda might be a bank teller.


Or alternatively, the existence of (b) means that (a) is generally understood to mean "Linda is a bank teller, and is NOT active in the feminist movement", regardless of how literally the instructions say to take it.

Either way, it's more a failure of communication between the experimenters and the subjects, rather than a failure of rationality on the part of the subjects. Which is still interesting (and perhaps more interesting), but in a completely different way.


> is generally understood to mean

Is it? I didn't interpret it that way, and that isn't what the question says. And why would it be "understood" that that is what is being asked? Logically, the answers would be "she is a feminist" and "she is not a feminist"; there would be no reason to construct the answers with the information that she is a bank teller if that is what was being asked.

This seems like another detail that rational thinking picks up: if you stop and think about the answer format, the "generally understood" interpretation doesn't make any sense.


The answers given don't make sense together, and don't fit the question. I am speculating that the usual way (for people who don't prefer interacting with machines to interacting with people, which would be most people) to read this is to assume someone was careless/sloppy, and to mentally "correct" the answers to the nearest thing that makes the most sense before answering.


I agree that it's a very misleading experiment. People usually don't need to answer questions of pure dry mathematical probability. What most people answer here is a differently interpreted question:

"Which option, do you think, is probably more descriptive of Linda?"

Or "Do you think a bank teller usually has such a life history? How about a feminist bank teller?"

----

I hate it when experimenters ask about probability explicitly. It just tests how much probability theory you've learned in school, how familiar you are with the mathematical framework. But it doesn't test everyday reasoning. Good tests for probabilistic thinking shouldn't even mention the word 'probability', but should set up some physical or other task where you have to use probabilistic reasoning to solve it effectively.

For example the Wason selection task (https://en.wikipedia.org/wiki/Wason_selection_task) is also an artificial question that is hard to answer, but as soon as a practical story is built around it (alcohol and age), people can solve it much easier.
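
For anyone who hasn't seen it, a sketch of the abstract version (the vowel/even-number rule and the A, K, 4, 7 faces are the standard textbook setup):

    # Rule: "if a card has a vowel on one side, it has an even number on
    # the other". Which cards must be flipped to test whether the rule holds?
    cards = ['A', 'K', '4', '7']
    def must_flip(face):
        if face.isalpha():
            return face in 'AEIOU'   # a vowel might hide an odd number
        return int(face) % 2 == 1    # an odd number might hide a vowel
    print([c for c in cards if must_flip(c)])  # -> ['A', '7']

Most people pick A and 4, but flipping 4 proves nothing: the rule says nothing about what's behind an even number.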

It's not that people are irrational; they are just bad at abstract thinking devoid of any practicality. We like to think in stories, situations, intentions. Abstract, "robotic" thinking is harder.


The experiment was designed by people who earned a Nobel Prize. Maybe it's not as stupid as you think.


Maybe you should defend the actual study, since it's presented here.


Post nytimes on hackernews...hmmf

The problem set was piss-poor in terms of intelligence and had a false assumption of a rationality/intelligence dichotomy. Of course it's more likely that she is a bank teller than a bank teller plus (insert whatever). This is probability theory 101. But hey, it is the NYT you're reading, so what would one expect anyway.

I was more thinking in these lines:

How rational is it for someone to be outspoken against nuclear energy when there hardly exists any valid renewable alternative to substitute for that energy source to begin with? Do it like Germany and rely on the importation of oil from Russia?


> there hardly exists any valid renewable alternative

Most people don't know that. Exaggerations like http://thesolutionsproject.org/ make them think that wind/solar/tidal could put more than a little dent in energy.


It is possible, at least in principle, to transition to only using solar for example.

Musk gave the example, when unveiling the Powerwall, that to power the whole of the US would only require the whole Texas Panhandle to be covered in panels 25% efficient.

I remember doing the calculation even with the current 15% efficiency, and it came out that if each person has something like 9 m^2 of solar panels on their roof, the whole energy demand of the US is covered.

Obviously, that's a shit ton of panels and batteries, and we can't just pack them all on the Panhandle; it's also entirely possible that nuclear is much cheaper (I don't know where to get the numbers to run this calculation).

But it is possible. I think a better criticism is that it's too risky and too slow an avenue in comparison to the risk of climate change.
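
FWIW, here's a back-of-envelope version of the area calculation, using round numbers (3.9 trillion kWh/yr of US electricity, ~4.7 kWh/m^2/day of sunny-region insolation, 15% efficiency; all rough assumptions, and electricity only, not total energy use):

    annual_kwh = 3.9e12                  # rough US electricity use per year
    insolation = 4.7                     # kWh per m^2 per day, sunny region
    efficiency = 0.15
    area_m2 = annual_kwh / (insolation * efficiency * 365)
    print(f"{area_m2 / 1e6:,.0f} km^2")  # ~15,000 km^2 of panels

The Panhandle's 16.6 million acres is roughly 67,000 km^2, so the area at least is plausible; cost and storage are the harder questions.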


> the whole of the US would only require the whole Texas Panhandle to be covered in panels 25% efficient

Are we building Dyson spheres next?

I believe largish orders of mass-market solar panels (currently ~15% efficient, as you mentioned) are about $10/sq ft, but let's make that $5 for the sake of argument. So $218k for one acre.

The Texas Panhandle is 16.6 million acres. So assuming solar becomes nearly twice as efficient for half the cost, the solar panels alone for this venture would be $3.6 trillion. I hesitate to guess what construction, installation, batteries, or infrastructure for 16 million acres would add.

---

So I concede that given best-case economics, impractical funding, and technology that doesn't exist, it could perhaps be done.

More of an XKCD "What If?" scenario than a serious solution.


$3.6 trillion is a lot of money if you intend to bulk-buy all of it at once today, but comparatively less over, say, 10 years. US GDP is around $18 trillion/year; 2% of GDP is a lot, but it's certainly realizable.

Besides, it is somewhat meaningless to look at the price of solar panels without a reference. As I stated above, I have no idea how to estimate the cost of nuclear. I also don't know if the new kinds of reactors (like traveling wave reactors) are anything more than a concept (it seems like breeder reactors are pretty much at the level of prototypes today).

I think the important point is what the price difference is, rather than whether a one-time purchase of solar panels would cost an amount too large to fit in one's wallet.


$3.6 trillion is only a portion of just the materials cost. It's like the price of lumber, as compared to the total cost of building a house.

Elon chose that particular spot because the economics of solar are geography-dependent. The infrastructure costs of distributing 100% solar will be astronomical.

I think you could pat yourself on the back if you figured out how to make it work for only 20x the cost of just the panels.

---

> As I stated above, I have no idea how to estimate the cost of nuclear

There is extensive data about nuclear costs, both estimated and empirical.

In 2010, the U.S. Energy Information Administration estimated total capital costs for nuclear power at $5.4k per kW. http://www.eia.gov/oiaf/beck_plantcosts/index.html (This is a rather safe estimate -- the EIA reported inflation-adjusted costs for actual, real-life plants built in the 1960s of $1.5k per kW. But I'll use the higher estimate.)

The U.S. used 3.9 trillion kWh last year, twenty percent of which is already nuclear. So that's $1.9 trillion of capital for 100% of U.S. electricity to come from nuclear.
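
Sketching that arithmetic out (this assumes the plants run at full capacity year-round, which flatters nuclear a bit):

    annual_kwh = 3.9e12                    # US electricity use last year
    avg_kw = annual_kwh / (365 * 24)       # ~445 GW average load
    new_share = 0.80                       # 20% is already nuclear
    cost = avg_kw * new_share * 5400       # $5.4k capital cost per kW
    print(f"${cost / 1e12:.1f} trillion")  # -> $1.9 trillion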

---

At this point, coal makes the most economic sense of course. And the U.S. has enough for the next couple centuries.


> In 2010, the U.S. Energy Information Administration estimated total capital costs for nuclear power at $5.4k per kW.

Matches what I found here: http://www.eia.gov/forecasts/capitalcost/xls/table1.xls (5,530 $/kW) (page with full report here: http://www.eia.gov/forecasts/capitalcost/, release date 2013)

Thing is, it is literally more expensive than photovoltaic at around 4k$/kW (same source). Even the Operation and Maintenance is about 3.5 times as expensive (~93$ vs ~26$).

I must say that I did not expect the evidence to be in my favour here (as evidenced by my initial comment).

> Elon chose that particular spot because the economics of solar are geography-dependent.

Yes, the Sun hits harder at lower latitudes: http://energy.gov/maps/solar-energy-potential

> The infrastructure costs of distributing 100% solar will be astronomical.

> I think you could pat yourself on the back if you figured out how to make it work for only 20x the cost of just the panels.

What's nice about solar is that it can be very localized (with the efficiency loss described above).

I agree that the costs of distributing all the power required for the whole of the US from the Panhandle would require… an interesting distribution architecture.

Going by the solar potential map, I guesstimate the Panhandle average to be about 525 Wh/ft^2/day and the worst parts of Washington and Oregon to be at about 350 Wh/ft^2/day. So let's assume the national average to be 440, and so assume that we need about 120% of the panels needed before. Actually, let's bump that up to 140% just because my number is rather rough and the population of the US is mostly on the coasts.

Now, what do you expect the other costs to be? Probably batteries. Going by somewhat sketchy graph on the Internet, I'll approximate the energy use as constant throughout the day, with a 2X peak offset just after the sun sets. So let's assume that one third of the energy needed through the day doesn't need to be stored at all, it's used as it's generated.

Going by: http://www.eia.gov/beta/MER/index.cfm?tbl=T02.01#/?f=A&start... the US used about 100,000 trillion BTU => 100E3 * 1E12 BTU * 0.293071 Wh/BTU = 29 quadrillion Wh, AKA 30 trillion kWh (10 times your figure for some reason).

So 30E12 kWh * 2/3 * 1/365 = 53E9 kWh of storage needed

Now, Tesla sells the Powerpack for about 2.6 M$/4MWh. 53E9 kWh * 2.6 M$/4E3 kWh = 34.5E12 $ AKA 34 trillion $.

I don't know what the economies of scale would look like on producing 530 million powerpacks, but I'm pretty sure it would make the cost substantially lower (besides, those numbers are not for bulk orders).

As for panels, going by your numbers, I get 10$/ft^2 * (3.3 ft/m)^2 = 109 $/m^2. Assuming 15% efficient panels and 420 Wh/ft^2/day of insolation: 420 Wh/ft^2/day * (3.3 ft/m)^2 * 0.15 = 686 Wh/m^2/day of output. So 109 $/m^2 / 0.686 kWh/m^2/day = 159 $ per kWh/day.

30E12 kWh * 159 $/kWh = 4.77E15 AKA 4.77 quadrillion $.

30E12 kWh / (365 * 24 h) * 4000 $/kW = 13 trillion $, so something is off.

Indeed, your 10$/ft^2 is way off; it is for small residential installations: http://solar-power-now.com/solar-panel-cost-per-square-foot/

Now, I don't know what to think of the ~350-fold price reduction for really large installation, but it seems sensible going by electronics bulk pricing.

Assuming a more conservative 10-fold reduction for the price of batteries, we'd get 13+3.4 trillion $ = 16.4 trillion $.

Going by the EIA report, nuclear would be about 5/4 the cost/kW of solar (presumably without storage), which is about on par with the price of storage + solar: (13+3.4)/13 ≈ 5/4

> The U.S. used 3.9 trillion kWh last year, twenty percent of which is already nuclear. So that's $1.9 trillion of capital for 100% of U.S. electricity to come from nuclear.

Yes, but they may be at the end of their life: http://www.eia.gov/tools/faqs/faq.cfm?id=228&t=21

---

At any rate, I don't have much of an issue with nuclear, at least the newer breeder designs, since breeders produce very little waste and the new designs are presumably very safe. But obviously, all costs being equal (assuming no error in my calculations), I'm 100% in favour of renewables over nuclear. Renewables don't carry the international concerns linked with exporting nuclear reactor technology, for one, and that's a serious drawback of nuclear, honestly.


Thanks for the additional info.


That price is not unreasonable. The US consumes roughly 5 trillion kWh every year. At a low-end estimate of 10 cents per kWh we have a 15 trillion dollar budget to build a facility that lasts 30 years. If we prioritize the environment over cheap power, we could double or triple that budget (and if people use less power in response to higher prices, that makes our job even easier). Obviously it would take a while to ramp up production, but that's not hard, just slow.


> Then they asked the subjects which was more probable: (A) Linda is a bank teller or (B) Linda is a bank teller and is active in the feminist movement.

I don't see how either guess is generally less likely than the other one. I'd say it depends rather on your mental framework whether you choose A or B. Therefore, I'd argue a frequentist would rather choose answer A, whereas a Bayesian would be inclined to choose answer B.

The difference between rationality and intelligence, IMO, lies in how you deal with unknown unknowns. A person who is merely rational wouldn't consider them, whereas an intelligent person would have the ability to deal with them in some way.

Take the infamous Westboro family, for example. In Louis Theroux's movie they came across as rational human beings, assuming their world view is correct. Indeed, they appeared to have a happy family life. If everyone would (want to) follow their rules, then the world would probably work. They, however, fail to realize that they haven't found a magic bullet and that other people are able to lead equally fulfilling lives. Their attempts at "saving" these people are bound to be a futile waste of time, and thus I wouldn't consider them intelligent.


> I don't see how either guess is generally less likely than the other one.

How could A possibly be less likely than B?? The occurrences of B are a strict subset of A, there is no possible way B could happen more frequently.


Maybe I am a bit confused but I just meant to imply that A and B could be considered equally probable if you're a Bayesian who has absolute confidence that she's the kind of person who is active in the feminist movement.


100% now is 100% forever, in the Bayesian framework.

If a Bayesian agent is absolutely certain that she is active in the feminist movement, it would mean that absolutely no evidence could convince it of the contrary. Even if we learned that she was kidnapped and forced to work as a bank teller in, say, Saudi Arabia, where she's forbidden from feminism, even then the Bayesian agent would have to stay at 100% confidence that she's active in feminism.
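
A sketch of why, straight from Bayes' rule (the 0.001/0.999 likelihoods are arbitrary; the point is that a prior of exactly 1 is a fixed point of the update):

    # posterior = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        num = p_e_given_h * prior
        return num / (num + p_e_given_not_h * (1 - prior))
    print(posterior(1.0, 0.001, 0.999))  # -> 1.0, however damning the evidence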



