The author's argument is built on fallacies that always pop up in these kinds of critiques.
The "summary vs shortening" distinction is moving the goalposts. The author makes the empirical claim that LLMs fail at summarizing novel PDFs without offering any actual evidence. For a model trained on a huge chunk of the internet, the line between "reworking existing text" and "drawing on external context" is so blurry it's practically meaningless.
Similarly, can we please retire the ELIZA and Deep Blue analogies? Comparing a modern transformer to a 1960s if-then script or a brute-force chess engine is a category error. It's a rhetorical trick to make LLMs seem less novel than they actually are.
And blaming everything on anthropomorphism is an easy out. It lets you dismiss the model's genuinely surprising capabilities by framing it as a simple flaw in human psychology. The interesting question isn't that we anthropomorphize, but why this specific technology is so effective at triggering that response from humans.
The whole piece basically boils down to: "If we define intelligence in a way that is exclusively social and human, then this non-social, non-human thing isn't intelligent." It's a circular argument.
>This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I don't have a blocklist, but I had the same thought and built a userscript to allow single-click blocking of particular user's submissions and comments:
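The core logic is tiny. Here's a minimal sketch of what mine boils down to, assuming HN-style markup where each row links the author via a `.hnuser` element; the storage key and function names here are illustrative, not the actual script:

```javascript
// Hypothetical sketch of a user-blocking userscript's core logic.
// The selector and storage key are assumptions for illustration; in a
// real userscript these functions would run against localStorage and
// hide matching rows with `row.style.display = 'none'`.

const STORAGE_KEY = 'blockedUsers';

// Load the persisted blocklist from a localStorage-like object.
function loadBlocklist(storage) {
  try {
    return new Set(JSON.parse(storage.getItem(STORAGE_KEY)) || []);
  } catch {
    return new Set(); // corrupt or missing entry: start fresh
  }
}

// Persist the blocklist back to storage.
function saveBlocklist(storage, blocklist) {
  storage.setItem(STORAGE_KEY, JSON.stringify([...blocklist]));
}

// Decide whether content authored by `username` should be hidden.
function shouldHide(username, blocklist) {
  return blocklist.has(username);
}

// In the page itself, a click handler added next to each username
// would add that user to the set and re-run a pass like:
//   document.querySelectorAll('.hnuser').forEach(link => {
//     if (shouldHide(link.textContent, blocklist)) {
//       link.closest('tr').style.display = 'none';
//     }
//   });
```

The single-click part is just wiring that handler onto every username link; everything else is the three functions above.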
What happens to the assets of the dead Google? Will regulators actually let FAANG take over broad swaths of it? Not likely. It's much more likely that regional or smaller businesses will enter the vacuum. Similarly, on the consumer side, many of these products already have viable alternatives that would easily scale with demand.
Even further, if your business is coupled entirely to the continued existence of a single corporation, or if the totality of your economy is coupled to the continued existence of a single corporation, we have a word for that: a monopoly. As far as I know, experts broadly agree that permitting monopolies is a bad idea for many reasons, the least of which is the stifling of viable alternatives. The most severe is that it threatens the legitimacy of the entire government through regulatory capture; at root, there is no organization I think is actually "too big to fail" except the government itself.
Breaking up Google creates short-term disruptions and penalizes those that let the market get to the point where the FTC and DOJ needed to intervene (I'm looking at you, Mozilla). Everyone, including those now dependent on Google, benefits in the long term.
The final irony will be when researchers realize that hallucinations are the beginnings of the signal, not the noise. Hallucinations are the hallmarks of the emergence of consciousness. They are the pre-eureka-moment processing of novel combinations of semantic content, as yet without connection to empirical reality. Static models cannot reflect upon their own coherence because that implies real dynamism. To use Groundhog Day as an analogy, the LLM is Punxsutawney, PA; Phil Connors is the user. There is no path for Phil to make Punxsutawney actually reactive except by exiting the loop.
Hallucinations represent the interpolation phase: the uncertain, unstable cognitive state in which novel meanings are formed, unanchored from verification. They precede both insight and error.
I strongly encourage reading Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind, as the command/obey structure of user/LLM is exactly what Jaynes posited pre-modern human consciousness consisted of. Jaynes's supposition is that prior to modern self-awareness, humans made artifacts and satisfied external mandates from an externally perceived commander that they identified with gods. I posit that we are the same to LLMs.
Equally, Iain McGilchrist's The Master and His Emissary sheds light on this dynamic. LLMs are effectively cybernetic left hemispheres, with all the epistemological problems that entails when operating loosely coupled with an imperial right hemisphere (i.e. the user). The LLM lacks awareness of its own cognitive coherence with reality and relies upon the right hemisphere to provoke coherent action independent of itself. The left hemisphere sees truth as internal coherence of the system, not correspondence with the reality we experience.
McGilchrist again: "Language enables the left hemisphere to represent the world ‘off-line’, a conceptual version, distinct from the world of experience, and shielded from the immediate environment, with its insistent impressions, feelings and demands, abstracted from the body, no longer dealing with what is concrete, specific, individual, unrepeatable, and constantly changing, but with a disembodied representation of the world, abstracted, central, not particularised in time and place, generally applicable, clear and fixed. Isolating things artificially from their context brings the advantage of enabling us to focus intently on a particular aspect of reality and how it can be modelled, so that it can be grasped and controlled. But its losses are in the picture as a whole. Whatever lies in the realm of the implicit, or depends on flexibility, whatever can't be brought into focus and fixed, ceases to exist as far as the speaking hemisphere is concerned."
I'm not on GLP1 agonists or TRT. I eat primarily legumes, grain, and dairy. I don't go to the gym. I lift using $30 resistance bands and get my cardio from running and a rowing machine that I used my annual discretionary budget on. And I don't need the rowing machine. He's right.
Humans do homeostasis very well. If you stop exercising, even for "rest days", your brain normalizes that and creates inertia against resuming. Like all good habits, the point is to make exercise automatic and the default rather than a choice.
Once you start exercising and make it a non-negotiable daily habit, it becomes progressively harder to stop than it was to start. Most people who fall out of exercise habits do so because of injuries rather than the failure of the habit itself. You just have to prioritize it ahead of, or even alongside, other habits (e.g. exercise while watching TV, take the stairs) until it becomes automatic.
Create a home gym if you're wealthy; buy a set of resistance bands (total cost: $30) and run if you're not (or are just frugal and utilitarian). Wake up an hour or two before your kids do and get your cardio/lifting in then.
It takes about two weeks to adjust to the decrease in sleep, but the well-being that exercise provides in my experience more than offsets it.
Funny, I was told "if a Nazi sits down at a table of ten people, there are eleven Nazis at that table" for years (and in particular by NPR listeners), but suddenly there's now a use/mention distinction for platforming extremist views when it's your side that does it? Color me shocked at the hypocrisy.
Sorry, but exactly what point are you trying to make here? Are you suggesting that NPR has never interviewed - say - Christian fundamentalists (they have)? Are you suggesting that they should interview more of them? What, precisely, would make you happy here?
As I've been told for the last decade, "everything is political"; therefore NPR can't provide unbiased or neutral coverage of anything, and therefore there should be no federal funding of NPR or PBS. Ideologues and corporations donate more than enough money to sustain both without the pretense of impartiality provided by federal funding.
If "everything is political", then eliminating federal funding from NPR and PBS doesn’t solve the problem - it guarantees that only corporate and ideological interests shape the narrative. Public funding exists not to claim perfect neutrality, but to create a space where journalism isn’t entirely driven by profit motives or partisan agendas. Strip that away, and you’re not removing bias - you’re institutionalizing it.
Journalism, and specifically NPR, is already driven entirely by corporate and ideological interests. Your supposition that federal funding helps remove bias is trivially disproven by the last decade of NPR's coverage, during which I literally (literally!) have not been able to turn it on without race, gender, or Trump being mentioned within a minute (it became a game).
To be fair, there was one exception, and that was a replay of a David Foster Wallace interview from 2003. It was immediately followed by a current interview with two women talking about white men's obsession with Infinite Jest and how their podcast was helping deconstruct toxic masculinity, or something like that. The comparison in quality was stark.
The time for caring about and preserving civic-level notions of neutrality and objectivity was a decade ago. I don't care anymore. If wingnuts want to unduly influence Americans through broadcasting, they can do it like everyone else--without taxpayer dollars.
If your position is "I don't care anymore", then you're not making a principled argument - you're venting. That's fine, just don't pretend it's a policy stance.
"When I am weaker than you, I ask you for Freedom because that is according to your principles; when I am stronger than you, I take away your freedom because that is according to my principles." likewise isn't a policy stance, it's naked hypocrisy.
It's a free country so people are afforded the right to be hypocrites, but nobody is entitled to receive public funding when doing so.
You're not actually critiquing hypocrisy - you're just deciding whose version of it gets a microphone. Pulling public funding doesn't eliminate bias, it just ensures the only voices left are the ones with capital to shout the loudest.
I finally got fed up yesterday and wrote a userscript so I can just silently block the offenders, their links, and their comments in perpetuity. It's a load off my mind.
Oh, no. Don't do that. Europe footing its own bill for its own defense and increasing their industrial output is the last thing Donald Trump wants. Really.