Hacker News | jasonhong's comments

I've used The Design of Everyday Things in many classes I teach. I would agree that it's not practical, but that's not its goal. Instead, it gives you frameworks for thinking about things as well as vocabulary for talking about those things.

Off the top of my head, some of the key ideas include:

* Affordances, that objects should have (often visual) cues that give hints as to how to use things

* Mental models, that every design has three different models, namely system implementation, design model, and user model, and that the design model and user model should try to match each other

* Gulf of Evaluation (the gap between the current system state and people's understanding of it) and Gulf of Execution (the gap between what people want the system to do and how to use the system to do it)

* Kinds of Errors and how to design to prevent and recover from them, e.g. slips (chose the right action but accidentally did the wrong thing, e.g. fat finger) vs mistakes (chose the wrong action to do)

What's particularly useful about Norman's book is that these key ideas apply for all kinds of user interfaces, from command-line to GUI to voice-only to AR/VR to AI chatbot. I'd encourage you to think about this book in this kind of framing, that it gives you general frameworks for reasoning and talking about UX problems rather than specific practical solutions.
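As a loose illustration of the slips vs mistakes point, the standard engineering answer to slips is reversibility: make destructive actions undoable rather than guarding them with confirmation dialogs that people click through on autopilot. A minimal Python sketch of undo via a stack of inverse actions (my own example, not one from the book):

```python
class Document:
    """Toy document that recovers from slips by supporting undo."""

    def __init__(self):
        self.lines = []
        self._undo = []  # stack of inverse actions: ("add" | "remove", line)

    def add(self, line):
        self.lines.append(line)
        self._undo.append(("remove", line))  # undoing an add means removing

    def remove(self, line):
        self.lines.remove(line)
        self._undo.append(("add", line))     # undoing a remove means re-adding

    def undo(self):
        """Reverse the most recent action, if any."""
        if not self._undo:
            return
        action, line = self._undo.pop()
        if action == "add":
            self.lines.append(line)
        else:
            self.lines.remove(line)
```

(Simplified: a re-added line goes to the end rather than back to its original position, and there is no redo.)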


DOET (née Psychology of Everyday Things) deeply influenced me. Articulated things I had observed and experienced. Expanded my thinking.

I was using, teaching, and developing for AutoCAD at the time. Knew nothing about UI beyond my intuition. Just perplexed by how difficult it was for most to use.

Reflecting back, the mental models and kinds of errors that Norman treats were the most impactful, evergreen design challenges I faced.


I read The Design of Everyday Things, and most of it consisted of painfully obvious examples and was overly philosophical.

Design is solving problems so they're intuitive for the user. Obviously a door with a handle shouldn't be a push door, I don't really think you need to write a book about it. And the types of people creating bad design are generally constrained by cost, time, or practicality, not necessarily by education.


> Obviously a door with a handle shouldn't be a push door, I don't really think you need to write a book about it.

It’s common to illustrate principles with examples that appear obvious, i.e. that everyone agrees on, so that after having it conceptualized as a principle, you’ll apply it in less obvious circumstances. Many things are obvious only in hindsight.

> And the types of people creating bad design are generally constrained by cost, time, or practicality, not necessarily by education.

That’s not true, because a lot of flawed design is being promoted and defended in public as the thing to do.


> Obviously a door with a handle shouldn't be a push door, I don't really think you need to write a book about it.

And yet we've all encountered push doors with handles many times.

> And the types of people creating bad design are generally constrained by cost, time, or practicality, not necessarily by education.

Good design is far cheaper and easier than bad design in the long run. Being able to articulate the benefit of good design such that stakeholders provide the resources for good design is perhaps one of the most important reasons to have such an education.


Uh, the fact that this is written down and carefully put into frameworks is a good thing. Otherwise you could say any academic book is intuitive. The fact that it sounds obvious means they're getting the message across, because lord knows it was needed, and there's plenty of failed products and ideas because of shitty design.


Wanted to share this funny SETI@home prank that Monzy (https://en.wikipedia.org/wiki/Dan_Maynes-Aminzade) did in 1999, where he created a fake VB app that tricked a coworker into believing that his computer successfully found an extraterrestrial signal.

The original site is down, but jump to November 5, 1999 to see the screenshot. https://web.archive.org/web/20030404093458/http://www.monzy....


Sigh. I miss websites like this.


Small personal web is best web.


Yishan Wong jumpscare


Lichess has a series of puzzles you can try where underpromotion is the theme (which is unfortunately a major giveaway when solving these puzzles, since they would otherwise be rather hard to solve).

https://lichess.org/training/underPromotion
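If you ever script against puzzles like these (e.g. via the Lichess API), moves come in UCI notation, where a promotion is encoded by appending the piece letter to the move. A tiny stdlib-only sketch for spotting underpromotions (function names are my own):

```python
def promotion_piece(uci_move: str):
    """Return the promotion letter of a UCI move, or None.

    UCI appends q/r/b/n for promotions: 'e7e8q' promotes to a queen,
    'b7b8n' underpromotes to a knight; ordinary moves are 4 characters.
    """
    return uci_move[4] if len(uci_move) == 5 else None

def is_underpromotion(uci_move: str) -> bool:
    """True for any promotion to something other than a queen."""
    piece = promotion_piece(uci_move)
    return piece is not None and piece != "q"
```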


The general term for what you're describing here is a Dominant Design, and it has a lot of the characteristics of what you intuited. https://en.wikipedia.org/wiki/Dominant_design


In May earlier this year, the New York Times had a similar article about AI not replacing radiologists: https://archive.is/cw1Zt

It has similar insights, and good comments from doctors and from Hinton:

“It can augment, assist and quantify, but I am not in a place where I give up interpretive conclusions to the technology.”

“Five years from now, it will be malpractice not to use A.I.,” he said. “But it will be humans and A.I. working together.”

Dr. Hinton agrees. In retrospect, he believes he spoke too broadly in 2016, he said in an email. He didn’t make clear that he was speaking purely about image analysis, and was wrong on timing but not the direction, he added.


It costs a non-trivial amount of money to file a patent in the USA


And even more to enforce it if granted. You can have all the patents in the world, but without being able to file against infringing parties, they're just documents.


If what is behind the patent is granted free to use, what's to enforce? How would I infringe on “free to use for everybody”? I believe OP’s idea is to file the patents defensively to block others from filing stupid patents as in TFA.


The last person in usually gets the best deal, in that they can get preference and push everyone else (previous investors, founders, and employees) down. If things go south, they get their money out before anyone else.


Why don't early investors put clauses in their investment to protect themselves against being screwed over by later investors? It seems like an obvious thing to ask for if you're giving someone a lot of money, so I'm assuming there must be a very good reason it's not done.


Early investors (the main ones at least) usually get pro-rata rights, which means you can invest in later rounds to maintain your ownership percentage (i.e. a later round dilutes your ownership, so you invest a bit so that your ownership stays the same).
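The arithmetic works out neatly: your pro-rata check is simply your ownership fraction times the size of the new round. A simplified sketch (my own example; it ignores option pools, prefs, and multiple closings):

```python
def pro_rata_investment(ownership: float, round_size: float) -> float:
    """Check size an existing holder needs in the new round
    to keep the same ownership percentage."""
    return ownership * round_size

def ownership_after_round(ownership: float, round_size: float,
                          post_money: float, invested: float = 0.0) -> float:
    """Ownership fraction after a round that sells round_size / post_money
    of the company to new money, of which the holder put in `invested`."""
    diluted = ownership * (1 - round_size / post_money)
    return diluted + invested / post_money
```

For example, a 10% holder facing a $20M round at $100M post-money dilutes to 8% by doing nothing, and a $2M check (10% of the round) restores the 10%.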

But the pref stack always favors later investors, partly because that's just the way it's always been, and if you try to change that now no one will take your money, and later investors will not want to invest in a company unless they get the senior liquidity pref.
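To make the senior liquidity pref concrete, here is a toy waterfall for 1x non-participating preferences paid latest-round-first. This is a deliberate simplification (real terms add participation, caps, and conversion to common when that pays more):

```python
def liquidation_waterfall(proceeds, prefs):
    """Distribute exit proceeds down a simple preference stack.

    prefs: list of (investor, preference_amount), most senior first,
    i.e. the latest round at the front. Returns payouts per investor,
    with whatever is left flowing to common (founders and employees).
    """
    payouts = {}
    remaining = proceeds
    for investor, pref in prefs:
        paid = min(pref, remaining)  # each layer is paid before the next
        payouts[investor] = paid
        remaining -= paid
    payouts["common"] = remaining
    return payouts
```

In a $35M exit where Series B holds a $30M senior pref and Series A a $10M pref, B recovers all $30M, A gets only $5M, and common gets nothing.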


The VCs do; they're called anti-dilution measures.

It's less financially/legally savvy parties, like angel investors and early employees, who (sometimes) get screwed out of valuation.


Isn’t everyone “the last” at the moment they participate in the round? If someone thinks they’re gonna get preferential treatment in Series C or D, and then someone comes in at Series E with preferential treatment, then what?


Having been on the program committee for some of these conferences, this issue of limiting number of submissions was being discussed long before GenAI. Specifically, there was talk of a few highly prolific security researchers that submitted 15-20 papers to these conferences each cycle, with pretty good quality too.


I'd also add (4) be incredibly curious about lots of things; (5) surround yourself with other smart, curious, and committed people who have a culture of critiquing ideas; and (6) devote a lot of time to deep thinking.


(8) be good in counting things, (h) be consistent in your thinking, (10) have a good memory, (11) be good in counting things, (12) refrain from making silly comments


I'd say it's worse than that. This new policy of vetting will be extremely high cost in terms of time, money, and lost opportunities for students and universities, while also being rather useless in practice. Seriously, what student applicant won't clean up their social media profile? What threats will actually be caught by this approach?

This whole policy is dumber than conventional security theater.

But then again, that's the point of this policy. It has the thinnest veneer of being for a legitimate purpose while hurting those that this administration wants to hurt.


> Seriously, what student applicant won't clean up their social media profile?

That is the goal of the policy.

> What threats will actually be caught by this approach?

Catching threats is not the goal.


> what student applicant won't clean up their social media profile?

Every social-media background check I've seen searches extant and archived media.

