UX and the Civilizing Process (meltingasphalt.com)
38 points by agmm on April 14, 2022 | 7 comments



There are many sides to this. E.g., a couple of days ago, I had a conversation with the owner of the next-door pizza shop. He doesn't use computers or the internet. Years ago, he had taken a course, but found it frustrating. He wanted to understand the grammar of it all, how it worked, and how things were related and connected to each other. (He was actually using the term "language".) Unable to do so, he turned his back on computers.

You could call this a civilizatory demand. Compare modern UX to it: it's more like, "here's your hand, use it for everything, just try, don't think about it (we won't make you think, promise)." Which might be closer to what figures as "medieval barbarism" in the article.

Also, the bit on the "Mother of All Demos" may be a bit unfair: the system rendered multiple users' views on a single screen and relayed the respective portions to each user via CCTV, and, for this particular event, input and output were miles away from the machine that ran the demo, connected via a network bridge. Modern videos on social media showing a person manipulating their smartphone can look messy, as well… (On the other hand, Engelbart's real ambition, the "bootstrapping" process of symbol-manipulating minds and symbol-processing machines elevating each other to levels never imagined before, a project very much civilizatory in nature, probably never became a reality. And this may be due to interfaces, which teach us how to think about this process and the systems involved.)

Edit: Another early "civilizatory" approach may have been J.C.R. Licklider's "Man-Computer Symbiosis", where human and machine meet on a shared, common ground (the interface), instructing each other by suggestion and thus eventually reaching a refined understanding of, and insight into, the problem.


Present-day UX grew out of Engelbart and co.'s notions of computer interfaces for children. They were meant to be discoverable. They weren't meant to avoid thinking. That's still the guiding spirit behind it.

One could argue that children are "barbaric," but I really don't think that's a useful framing.


Alan Kay's work is very much about children, but Engelbart's project was about (qualified) grown-ups and future civilization. (Children around the age of 10-12 were especially interesting to Alan Kay, because they're in a transitional state from visual dominance to symbolic dominance in thinking, following the work of Piaget and the updated version by Jerome Brunner.) Regarding what is discoverable, the civilizing aspect may well be in what is shown and in what way. E.g., Licklider's examples are actually conversations about constraints, without verbalising the constraints specifically.

Edit: One notable "relic" of Engelbart's project is the outline view in MS Word (which, out of context, may not appear to be that remarkable, while it was very much about how texts should be organized and understood).


*) Jerome Bruner (just one "n")

While Piaget thought that the newly established dominance would replace the previous one, Bruner showed that the older one remained and could be observed in parallel. (Which is rather decisive for Alan Kay's work.)


I, for one, prefer that the computers I interact with be honest about being computers, and not put on a persona (mask) of poorly simulating a human. I think that's why, as a screen reader user, I prefer speech synthesizers that, while highly intelligible and listenable, don't try too hard to emulate natural speech (usually by using recordings of an actual human voice). Today my cofounder talked with me about possibly using some kind of chatbot for customer service, and I'm strongly opposed. But I suppose I'm in the shrinking minority.


> These were instructions aimed at the rich nobility. Among serfs out in the villages, standards were even less refined.

Now that needs a citation, and it embeds a whole bundle of assumptions. It's true only by definition, because the rich decided what "refined" meant and defined it as what they themselves did.

But primary records from many times and places show this sort of disgust going both ways. A lot of "uncivilized" behavior has been a direct response to material conditions, and "civilized" or "refined" behavior was defined in opposition to those needs. The classic example is bathing: if you spend your time up to your waist in clay forming bricks, you're gonna need to wash it off every day. A clear way to signal that you don't do that is to not bathe.

From the perspective of wealthy city dwellers, I'm sure the habits of rural serfs were disgusting. On some specific habits I'm sure I would agree. But that they were generally, across time and place, more disgusting than the wealthy? mmm idk.


I disagree that software should be thought of as a person, even if it's a polite person. Software is and should be thought of as a tool. For routine operations by an experienced user, a tool feels like it's part of your own body. It's like riding a bicycle. Most people who ride are not aware of countersteering, but they countersteer regardless, because the bicycle has become effectively a prosthetic body part. This is the ideal way to operate software.

But unfortunately, this feeling is fragile. The requirements are stricter than mere politeness. E.g., latency is very important. Any kind of delay (including animations) breaks the feeling of the software being part of your own body. There is already a built-in delay/animation: the movement of your own fingers. This is the only natural delay. Any additional delay on top of it breaks the connection between tool and human body and removes the feeling of the two merging.

And any kind of automatic behavior breaks the feeling. I'm opposed to autosaving, because it's a reminder that the software is not literally a part of my body. It's better to cultivate a habit of manual saving. Predictability is as important as responsiveness. Tools are not "smart".

I've never felt this kind of man/machine unity from a mobile UI. Maybe it's possible in theory, but touch screen UIs focus on dragging, which makes latency even more obvious, by translating time delays into spatial displacements. And it's only possible with experience, which designers constantly fight against by making UI changes. It's likely that the majority of computer users have never felt it at all, and that is a real pity.
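
To put a rough number on that last point (a back-of-the-envelope sketch; the velocity and latency figures below are assumptions for illustration, not taken from the thread): during a drag, the content trails the finger by roughly velocity times latency, so a delay that would be barely noticeable on a stationary click becomes a visible gap on screen.

    # Rough illustration: how drag latency becomes a visible spatial gap.
    # Both figures are assumed, for illustration only.
    finger_velocity_px_per_s = 1000   # a brisk but ordinary drag gesture
    latency_s = 0.050                 # 50 ms end-to-end touch-to-display latency

    trailing_gap_px = finger_velocity_px_per_s * latency_s
    print(f"Dragged content trails the finger by about {trailing_gap_px:.0f} px")
    # -> about 50 px: the time delay is rendered as distance on screen

The same 50 ms behind a button press is a blink; behind a moving finger it is a gap you can see, which is why dragging exposes latency so mercilessly.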



