Because I usually don’t want to talk to computers in front of other people? It isn’t that it feels silly, but that it’s incredibly distracting for everyone to hear every interaction you have with a computer. This is true even at home.
Maybe we can type the commands, but that is also quite slow compared with tapping/clicking/scrolling etc.
> If I am factually correct, by definition, everyone who disagrees with me is irrational and can't be reasoned with.
No, that doesn’t follow at all. Your arguments could be bad or irrational in themselves (right for the wrong reasons), and other people could hold beliefs that logically follow from plausible, but wrong, premises.
I don’t think you’re right about that. LLMs are very good for exploring half-formed ideas (what materials could I look at for x project?), generating small amounts of code when it’s not your main job, and writing boring crap like grant applications.
That last one isn’t useful to society, but it is for the individual.
I know plenty of people using LLMs for stuff like this, in all sorts of walks of life.
I have a very similar frustration with the complexity here, and found that scrolling tiled window managers (like PaperWM, Niri, etc.) might actually be the answer. All your windows are in a line; press the shortcut until focus lands on the one you want. Reorder with a shortcut or with the mouse.
The main problem with them is system support: they are buggy when tacked on top of a desktop OS (PaperWM), or require a pretty finicky custom setup (Niri).
That always struck me as pretty overblown, given that before map apps, people got lost all the goddam time. It was a rare trip with any complexity that a human map reader wouldn’t make a mistake or two.
LLMs aren’t competing with perfect, they are competing with websites that may or may not be full of errors, or asking someone that may or may not know what they are talking about.
No. The point of progress is to improve the outcome. Improved outcome does not always align with convenience, and this is one example.
Critical thinking is inconvenient and does not scale, but it's very important for finding the truth. Technology that disincentivizes critical thought makes it easier to spread lies.
> Critical thinking is inconvenient and does not scale, but it's very important for finding the truth. Technology that disincentivizes critical thought makes it easier to spread lies.
True. At the same time, technology that removes the need for critical thinking is a bona fide positive form of progress.
Think of e.g. consumer protection laws, and the larger body of laws (and systems of enforcement) surrounding commerce. Their main goal is to reduce the risk for customers - and therefore, their need to critically evaluate every purchase. You don't need as much critical thinking when you know certain classes of dangers are almost nonexistent; you don't need to overthink choices you know you can undo.
There are good and bad ways of eliminating inconvenience, but by itself, inconvenience is an indication of waste. Our lives are finite, and even our capacity to hope and dream is exhaustible. This makes inconvenience our enemy.
The examples you listed work because they increase trust to a point where people feel safe enough not to second-guess everything. I disagree that AI in its current form can be trusted. Food safety is enforced by law; correctness in Google searches isn't enforced at all - in fact, Google is incentivized to decrease quality to reduce running costs.
So yes, convenience and progress are strongly correlated but they're not the same.
Ironically, just yesterday I had a situation that made me change my mind on some of this; it convinced me that the world isn't ready for "AI results" in search in their current form.
Imagine: an impulsive person, suddenly facing the need to change the ownership structure of their mortgage, worried they'll have to pay a lot for this. A person who doesn't really know the first thing about it. They enter a query in Google; because of their lack of domain knowledge, the query is missing one word. Without that word, the query matches two related but distinct kinds of ownership structures. Results for both come back, and the AI summary happily blends them together. The person sees an explanation, panics, and shouts my ear off about the bad situation they've been put in by a third party. I'm confused (I know a bit more about this, but not much); they show me the phone.
I'm staring dumbfounded, looking at a seemingly nonsensical AI summary. But it's not the text that made me pause in shock - it's the fact that the other person took it as gospel, and yelled it at me.
Garbage in, garbage out, as they say. The way I see it now, the biggest problem isn't the garbage that sometimes comes out of LLMs. The problem is the "garbage in" - specifically, the garbage that passes for thinking and communication among most of the human population. An LLM may reason 100% correctly - it won't help when the user supplies "wrong figures" without realizing it, and then acts on the answer without thinking.
The world is not ready for systems that indulge people in their bullshit.
There’s plenty of replacements which are fine. Many are better to use for many tasks. The problem is lock-in in professional contexts. Having a problem with some feature in a PSD? “I don’t wanna pay for Photoshop” isn’t usually an acceptable excuse.
If open source projects and other companies had gathered around an open file format, maybe there would be some leverage, but they all use their own formats.
Interestingly, vertebrate palaeontology switched to clade-based taxonomy a while ago (although genus and species cling on). Wikipedia tries to attach ranks to the clades of extinct animals such as dinosaurs - ranks which have no basis in the scientific literature - in order to normalise them with the taxonomy of extant animals.
Palaeontology makes the notion of species kinda silly, as all organisms can be followed as a smooth grade all the way back to the origin of life.