The heat maps of eye movement were fascinating... I immediately noticed the almost complete lack of activity on the advertisements. Users appear to do what I do: simply ignore them or try to filter them out.
What I noticed was that most of the attention fell between the left side and the center of the page; the only attention point on the right side was the scroll bar. It made me think of the NeXTSTEP window manager, where the scroll bars were on the left. The explanation at the time for this apparent deviation from the "standard" was that it kept the scroll bar closer to the user's primary attention areas on the page.
In Mac OS X, I guess Apple caved to popular convention and moved the scroll bars to the right.
I wonder why only a particular part of the scroll bar appears to have attracted long gaze patterns? In my own case, I'm typically looking at the slider as a metric of page length, not a particular spot on the bar itself.
I noticed that as well; I assume people all tended to check page length at roughly the same point in the page, presumably at some boundary between content sections.
I'd be interested in seeing similar research on Google's page, with their distinct take on ad selection, format and placement.
It appears that people unconsciously set an "attention-getting threshold": things designed to attract attention exceed that threshold and are simply ignored. This seems to be well known, hence the various forms of pop-up and pop-over ads showing up on sites again now, even though they were proven to annoy users to the point that engineering solutions had to be devised to deal with them.
The lack of detail about what their users were actually doing on the sites they tested makes it hard to tell what they're trying to dispel.
If you're giving users a specific task to perform on a specific website in a controlled environment, then of course scrolling isn't going to deter them. The argument against scrolling is that if you're trying to attract users to your site, to get them to pick your site over other potential sites, then scrolling will be a factor when people want to make a quick decision about whether your site is useful or not.
Well, this is really the problem. The fold really only becomes a huge factor when you're trying to win someone's attention in that snap "does this site have what I need" judgment; once the user's mind is made up that, yes, this is the right place, it becomes largely irrelevant.
But somewhere along the line, a useful guideline for getting people to stick around and explore a page or a site morphed into a hard rule of "never have anything at all below the fold, ever". Which is frankly stupid.
They also didn't mention horizontal scrolling, which I personally cannot stand. Vertical scrolling is not nearly as objectionable, though in forms it's more of a bother than on pages where you are just reading.
Jakob Nielsen helped to popularize the myth that web users don't scroll down. It led to an era of website design that crammed as much as possible onto one page, resulting in busy, crowded homepages. It wasn't until later that early adopters of CSS made websites simple again, in part to avoid compatibility issues with Internet Explorer.
I'd rather blame the people who took his advice to extremes and tried to shove everything above the fold. That isn't what he advocated; he just said to put the most important stuff up top:
"Is all key information visible above the fold so users can see it without scrolling?"
He says that for the same reason newspaper articles are written that way: not because people won't scroll, but to capture their attention so that they WILL scroll to follow the article rather than move on to something else.
Of course you can blame the patient, so to speak, because they took the doctor's advice to the extreme. Except that everything Nielsen writes and promotes is couched in such a way that it looks black and white, so naturally people take it to extremes. Because he encouraged them to, by the way he presents his information.
Consider the surrounding circumstances of the "pure" information he posts:
1. He says it is SCIENCE!
2. He does not allow for differing levels of user engagement. (If you're interested, you will not behave the same way as somebody who has no interest.)
3. He does not reveal the type of user / type of content being tested. (Are they bored to tears? Is it well-written, captivating content? Is it something relevant to their life?) Would this be hard? Yes. But you cannot evaluate the "facts" without it!
4. He may waffle a little now and again, and say something that's not entirely "my way or the highway" strident -- e.g. "Is all key information visible above the fold?" -- but...
5. He NEVER lists proven counter-examples.
Examples for #5: many "sales letters" for products on the internet (especially info products) rely on heavily engaging the user's interest FIRST, and then following a build-up-and-let-down pattern of excitement. Often their critical info (price, call-to-action buttons) is well below "the fold." Waay, way down there... they only reveal it to people who are biting the hook.
And this works. Really, really well. It works because it engages the visitor, and builds suspense, rather than just laying it all in front of them.
But has Nielsen ever addressed this?
Ha!
He uses the attachment that people have to "science," "statistics," and "facts," and exploits it to provide certainty in an uncertain world.
Since the point of a substantial number of his articles is that quick-and-dirty usability testing is the only reliable way to make usability decisions, I don't think you can reasonably fault him here.
I don't think your account correctly characterizes his approach to usability. I think it correctly characterizes the perception of his approach to usability in many who dismiss him.
He has written a few pieces about doing your own low-budget user testing, yes, but he doesn't encourage it after making his claims about specific things you should or shouldn't do.
That is, he doesn't say "These results are for our site, you should test it yourself for yours. Our findings may not be relevant for your situation." after he's talked about findings or made recommendations.
And he doesn't address situations that directly oppose the findings he does report; e.g., he has never covered sales letters of the kind I described.
When he does write about "do it yourself" user testing, he does it in separate articles that are only about that topic. They are not situated in the broader context.
He is obviously a very smart person so I don't think that this is accidental.
And he only writes those DIY pieces once or twice a year. Generally speaking his company sells 3-day usability testing workshops for $23,000.
Usually when he makes claims about, say, readability, no examples are offered. I was shocked to find this one.
You'll notice that nowhere in his explanation of the findings does he conjecture that maybe reading comprehension is low because it's bloody boring AND poorly written, and nobody gives two shits about Nebraska's dubious tourist attractions.
He says maybe it's because the language is "promotional." But, again, deathly dullness and irrelevance to users' lives don't come into his analysis at all.
And he doesn't tell you to test your own content, either.
I have all his books and I've read the vast majority of his articles, too. You can dismiss me as somebody who merely has a "perception" problem, but that doesn't change the effort I've put into it.
PS - He reviewed a book on manipulating customers through fiddling with neuroscientific knobs, thusly:
---
I highly recommend that you read this [book] for two reasons:
* The book's indispensable design advice will grow your business.
* You must teach your children to recognize this new class of manipulation.
---
So, it's cool when you're only manipulating OTHER people's kids.
I don't know why people think of him as if he's some kind of folk hero.
"Generally speaking his company sells 3-day usability testing workshops for $23,000."
Don't you think that selling $23,000 usability testing workshops might constitute an incentive for him to push for more hands-on usability testing rather than blind acceptance of his findings?
But you said that he is a powerful proponent of low-cost, self-run tests... and he's not, really, as I pointed out. Lukewarm at best, perhaps.
Meanwhile, why do you think he doesn't recommend testing for differences when he makes his announcements regarding findings? Or even mention that they vary?
This is just pure speculation, but it seems as if he's more concerned with sounding authoritative (so people hire him) than with pointing out the ambiguities in statistical usability research.
I'll point out that patio11's Bingo Card Creator gets more conversions for every single order button he's added, even as they get to the point of being "above the fold". Your own criticism is much too "black and white", but I'd say that it is also MYOPIC!
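(If you're wondering how a claim like "each new order button increased conversions" gets checked rather than taken on faith, here's a minimal sketch of the kind of comparison involved. The numbers and the helper function are made up for illustration; this is not patio11's data or code, just a standard two-proportion z-test between a variant with and without the extra button.)

    // Hypothetical numbers, not patio11's real data.
    // Compares conversion rates of two page variants with a
    // two-proportion z-test to judge whether the lift is just noise.
    function twoProportionZ(
      convA: number, visitsA: number,
      convB: number, visitsB: number
    ): number {
      const pA = convA / visitsA;
      const pB = convB / visitsB;
      const pooled = (convA + convB) / (visitsA + visitsB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
      return (pB - pA) / se; // |z| above ~1.96 suggests a real difference
    }

    // Variant A: original page; variant B: page with one more order button.
    const z = twoProportionZ(120, 5000, 155, 5000);
    console.log(`z = ${z.toFixed(2)}`); // ~2.1 for these made-up numbers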
I never said that the above-the-fold findings aren't true in many cases. Never!
In fact, I know they are true... in many cases.
However, there are quite a few cases where they are not true at all. For example, direct-marketing type sales letters, which work just as well on the internet.
Which is, again, all I said. Nielsen acts as if everything he says is universally applicable, but there are quite a lot of counter-examples that also work exceedingly well. Ergo the findings are not universally applicable.
HN is supposed to be a bastion of scientific thought. What is it about my argument -- that he leaves out evidence -- that is so hard to understand?
The fact that your counterexample doesn't work: patio11 has shown that his direct-marketing-type sales letter gets more conversions with each additional "above the fold" actionable element.
It's also an unusual burden to expect him to update his past writings whenever later evidence that isn't his to publish suggests something other than what he has found.
Maybe, maybe not. What would be interesting is a longitudinal study about this.
I can believe that ten years ago people didn't scroll, and that our behavior changed over time as we adapted to the medium.
So, ten years ago "users don't scroll" was good advice, but designers made the mistake of filing that away in the Known Fact bin without reflecting more carefully and/or actually testing it.
I'm curious what their test protocol was such that it dispels every other posting on this topic. A person told to buy shoes at Amazon.com isn't going to be stifled by a scrollbar, but someone who isn't being directly told to interact with the website in a given manner seems much less likely to scroll down looking for content that isn't on a sparsely populated top page.
Worth noting that while you can have content below the fold and signal to users to find it, that's not true of branding and identity features; those should be prominent and above the fold.
Further, as a commenter on the article pointed out, there's benefit in identifying your audience. Someone on NYT is obviously willing to scroll. Can the same really be said of someone shopping for a luxury car and looking to be wowed?