"It might surprise the author to learn that there are many people who:
1) Have tried lisp and clojure
2) Liked their elegance and expressiveness
3) Have read through SICP and done most of the exercises
4) Would still choose plain old boring easy-to-read always-second-best Python for 90% of use-cases (and probably Rust for the last 10%) when building a real business in the real world."
This is me to a T — even when I'm building hobby projects. The point of writing any code, for me, is most of all to see a certain idea to fruition, so I choose what will make me most productive getting where I want to go. And while I still worship at the altar of Common Lisp as an incredibly good language, the language matters much less than the libraries, ecosystem, and documentation for productivity (or even effective DSL style abstraction level!), so eventually I have had to make my peace with Python, TypeScript, and Rust.
Tacking on: part of seeing it to fruition, and of its continued lifetime, is ensuring you can communicate the intent and operation to a large group of potential successors and co-workers.
An incredible epiphany that you can't transmit may not be as useful as a moderately clever idea you can.
"Deprivation of material things, including food, was a general recollection [of Zhu adults] and the typical emotional tone in relation to it was one of frustration and anger…. Data on !Kung fertility in relation to body fat, on seasonal weight loss in some bands, and on the slowing of infant growth after the first six months of life all suggested that the previously described abundance had definite limits. Data on morbidity and mortality, though not necessarily relevant to abundance, certainly made use of the term “affluent” seem inappropriate."
"While the !Kung way of life is far from one of uniform drudgery—there is a great deal of leisure in the !Kung camp, even in the worst time of the year—it is also true that the !Kung are very thin and complain often of hunger, at all times of the year. It is likely that hunger is a contributing cause to many deaths which are immediately caused by infectious and parasitic diseases, even though it is rare for anyone simply to starve to death."
"The give and take of tangibles and intangibles goes on in the midst of a high level of bickering. Until one learns the cultural meaning of this continual verbal assault, the outsider wonders how the !Kung can stand to live with each other …. People continually dun the Europeans and especially the European anthropologists since unlike most Europeans, the anthropologists speak !Kung. In the early months of my own field work I despaired of ever getting away from continual harassment. As my knowledge of !Kung increased, I learned that the !Kung are equally merciless in dunning each other."
"In reciprocal relations, one means that a person uses to prevent being exploited in a relationship … is to prevent him or herself from becoming a “have”…. As mentioned earlier, men who have killed a number of larger animals sit back for a pause to enjoy reciprocation. Women gather enough for their families for a few days, but rarely more …. And so, in deciding whether or not to work on a certain day, a !Kung may assess debts and debtors, decide how much wild food harvest will go to family, close relatives and others to whom he or she really wants to reciprocate, versus how much will be claimed by freeloaders."
"The !Kung, we are told, spend a great deal of time talking about who has what and who gave what to whom or failed to give it to whom (Wiessner 1982:68). A lot of the exchange and sharing that goes on seems to be as much motivated by jealousy and envy as it is by any value of generosity or a “liberal custom of sharing.” In his survey of foraging societies, Kelly (1995:164-65) notes that “Sharing … strains relations between people. Consequently, many foragers try to find ways to avoid its demands … Students new to anthropology … are often disappointed to learn that these acts of sharing come no more naturally to hunter-gatherers than to members of industrial societies.”"
The Bush People, previously called the Pygmies, are modern humans who eat the diet of earlier hominids and are stunted by the caloric deficits. The only thing they plant is hemp, which doesn't scale to actual agriculture.
The "original affluent society" theory is based on several false premises and is fundamentally outdated, but people keep it alive because it fits certain Rousseauean assumptions we have. I recommend reading this:
I just read the 'original affluent society' and (most of) your linked essay, and I kind of agree with you. That said, the conclusions of Kaplan lead to estimates of 35-60 hours a week (excluding some, depending on the group), and that surprised me a lot. That's very different from the image I got from some other comments in this thread talking about extremely long days of constant back-breaking work. Would you agree?
Constant, backbreaking work was not a feature of hunter-gatherer societies the way it was of early agricultural societies, yes. At the same time, they still worked hours equal to or longer than ours, at things we would likely consider quite grueling and boring (mostly food processing), and what they got out of it was a level of nutrition even they regularly considered inadequate. Moreover, a lot of the reason the average per-day work estimate is so low, as the paper covers briefly, is that there were very often times, especially during the winter, when food simply wasn't accessible, or during the summer, when it was so hot it was dangerous to work. So there was enforced idleness, but that's not the same thing as leisure.
It's a detailed, complicated anthropological argument made by an expert, and a very well-written one. I could attempt to lay out the argument myself, but ultimately everyone would be better served by just... reading the primary source, because I doubt I could do it sufficient justice. I recommend you actually just do the reading. But a general TL;DR of the points made is:
- the estimates of how much time hunter-gatherers spent "working" were based on studies that either (a) watched hunter-gatherers in extremely atypical situations (no children, a tiny band, a few weeks during the most plentiful time of the year, and they were cajoled into traditional living from their usual mission-based lifestyle) or (b) didn't count all the work of processing the food so it could even be cooked as time spent providing for subsistence; when those hours are included, it's 35-60 hours a week of work, even with times of enforced idleness pulling down the average
- the time estimates also counted as "leisure" the enforced idleness from heat making it dangerous to work, from lack of available food, from diminishing returns, or from various "egalitarian" cultural culs-de-sac, but at the same time...
- ... even the hunter-gatherers themselves considered their diet insufficiently nutritious and often complained of being underfed, let alone the objective metrics showing that they were
For a "copy Deepseek's homework" model, it's really good, preferable to DeepSeek for me (at least prior to V3.2, which I haven't been able to fully put through its paces yet). post-training really makes that much of a difference I guess
This is a very good analysis. The responses to it from Timnit Gebru, Emile Torres, and even Hao herself have been very annoying to me (respectively: calling several mistakes of multiple orders of magnitude, which completely reverse the ideologically convenient picture she painted, "a typo"; whataboutism, because the guy who pointed it out is an EA guy; and ignoring most of the criticisms as "philosophical differences" while trying to shift blame for the one issue she does admit).
Yeah. It's unfortunate that they seem so intent on digging in their heels about this one issue of water use. Precisely because AI is so important, we should want to make sure we're clear on the facts and have the right context for them! And the larger point that AI is likely to consolidate power in the hands of increasingly few people and corporations is well worth making (although that story can be told for most critical technologies, from the steam engine to the telegraph to the transistor, and I don't think Hao's framing of Empire/Colonialism is really the right way to look at it at all). I think there are plenty of books to be written about the social impact of AI from a more balanced, empirical, less ideological perspective.
Yeah, it really is unfortunate because, personally, I think there are a lot of very genuine, very good ethical, social, economic, and material arguments to be made against AI, and digging in their heels on this one transparently ideologically motivated and wrong criticism is just detracting from their credibility, distracting from their messaging, and draining their time. And to be clear: yes, I think it would be good to have more books written about this, but Empire of AI is actually a pretty good book overall. The stuff it covers about reinforcement-learning workers and the weird, bizarre, fucked-up culture of OpenAI is pretty good.
On the other hand, I think Emile P. Torres and Timnit Gebru are ideologues who really shouldn't be listened to.
This is just the same flippant, dismissive stuff as usual. At this point, it's its own brand of anti-AI slop. Just because LLMs are not deterministic does not mean that you can't effectively iterate on and modify the code they generate, or that they won't give you something useful almost every time you use them. Also, this article talks about possible use cases of LLMs for learning, and then dismisses them as completely replaceable with a book, a tutorial, or a mentor. That ignores the fact that books and tutorials are not individually tailored to the person, what they want to work on, and what they're interested in; can't be infinitely synthesized to take you as far as you want to go; and are often limited for certain technologies; and that mentors are often very difficult to come by.
I get your part about mentors. I came up through having to figure stuff out myself a lot via Stack Overflow and friends, where the biggest problem for me is usually how to ask the right question (e.g. with Elasticsearch, having to find and understand "index" vs "store"; once I have those two terms, searching is a lot easier, and without them, it's a bit of a crapshoot). Mentors help here because they had to travel that road too and can probably translate from my description to the correct terms.
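For context, the distinction I eventually pieced together looks like this in a mapping. A minimal sketch, assuming the Elasticsearch 8.x Python client; the index and field names are made up for illustration, but "index" and "store" are the actual mapping parameters in question:

    from elasticsearch import Elasticsearch  # assumes the 8.x client

    es = Elasticsearch("http://localhost:9200")

    es.indices.create(
        index="articles",  # made-up index name, just for illustration
        mappings={
            "properties": {
                # "index" controls searchability: index=False means the
                # field never enters the inverted index, so you can't
                # query on it (it still comes back as part of _source).
                "internal_id": {"type": "keyword", "index": False},
                # "store" controls separate retrieval: store=True keeps
                # the raw value so it can be fetched via stored_fields
                # without loading the whole _source document.
                "title": {"type": "text", "store": True},
                # The default is indexed (searchable) but not stored.
                "body": {"type": "text"},
            }
        },
    )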
And I really wish I could trust an LLM for that, or indeed any task. But I generally find the answers fall into one of these useless buckets:
1. Reword the question as an answer (so common, so useless)
2. Trivial solutions that are correct - meaning one or two lines that are valid, but that I could have easily written myself quicker than getting an agent involved, and without the other drawbacks on this list
3. Wildly incorrect "solutions". I'm talking about code that doesn't even build, because the LLM can't take proper direction on which version of a library to use, so it keeps giving results based on old information that's no longer relevant. Try resolving a webpack 5 issue: you'll get a lot of webpack 4 answers, and none of them will work, even if you specify webpack 5
4. The absolute worst: subtly incorrect solutions that seem correct and are confidently presented as correct. This has been my experience with basically every "oh wow, look what the LLM can do" demo. I'm that annoying person who finds the bug mid-demo.
The problems are:
1. A person inexperienced in the domain will flounder for ages trying out crap that doesn't work and understanding nothing of it.
2. A person experienced in the domain will spend a reasonable amount of time correcting the LLM - and personally, I'd much rather write my own code via TDD-driven emergent design - I'll understand it, and it will be proven to work when it's done.
I see that proponents of the tech often gloss over this and don't realise that they're actually spending more time overall, especially when having to polish out all the bugs. Or maintain the system.
Use whatever you want, but I've got zero confidence in the models, and I prefer to write code instead of gambling. But to each their own.
The way I see AI coding agents at the moment is that they are interns. You wouldn't give an intern responsibility for the whole project. You need an experienced developer who COULD do the job with some help from interns, but now the AI can be the intern.
There's an old saying: "Fire is a good servant but a bad master." I think the same applies to AI. In "vibe-coding", the AI is too much the master.
But it's the amount and location(?) of the vibes that matters.
Say I want to create a YouTube RSS hydrator that uses DeArrow to de-clickbait video titles before they hit my RSS reader (there's a rough sketch of the core plumbing below, after the levels).
Level 1 (max vibe): I just say that to an LLM, hit "go", and hope for the best (maximum vibes on both spec and code). Most likely it's gonna be shit. Might work, too.
Level 2 (pair-vibing the spec) is me pair-vibing the spec with an LLM; web versions might work if they can access sites for the specs (figuring out how to turn a YouTube URL into an RSS feed, and how the DeArrow API works).
After the spec is done, I can give it to an agent and go do something else. In most cases there's an MVP done when I come back, depending on how easy the thing is to test automatically (RSS/Atom is a fickle spec, and readers implement it in various ways).
Level 3 continues the pair-vibed spec with pair-coding. I give the agent tasks in small parts and follow along as it progresses, interrupting if it strays.
For most senior folks with experience in writing specs for non-seniors, Level 2 will produce good-enough stuff for personal use. And because you offload the time-consuming bits to an agent, you can do multiple projects in parallel.
Level 3 will definitely bring the best results, but you can only progress one task at a time.
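For reference, the non-vibe core of that hydrator is small enough to sketch by hand. A minimal sketch, assuming YouTube's public channel-feed URL scheme and DeArrow's documented branding endpoint at sponsor.ajay.app; error handling, caching, and actually serving the rewritten feed are left out:

    # Fetch a channel's Atom feed and swap clickbait titles for
    # community-submitted DeArrow titles.
    import json
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://www.youtube.com/feeds/videos.xml?channel_id={}"
    DEARROW_URL = "https://sponsor.ajay.app/api/branding?videoID={}"
    ATOM = "{http://www.w3.org/2005/Atom}"
    YT = "{http://www.youtube.com/xml/schemas/2015}"

    def dearrow_title(video_id):
        """Return the top community title for a video, or None."""
        with urllib.request.urlopen(DEARROW_URL.format(video_id)) as resp:
            data = json.load(resp)
        titles = data.get("titles") or []
        return titles[0]["title"] if titles else None

    def hydrate(channel_id):
        """De-clickbait every entry title in the channel's feed."""
        with urllib.request.urlopen(FEED_URL.format(channel_id)) as resp:
            tree = ET.parse(resp)
        for entry in tree.getroot().iter(ATOM + "entry"):
            video_id = entry.findtext(YT + "videoId")
            new_title = dearrow_title(video_id) if video_id else None
            if new_title:
                entry.find(ATOM + "title").text = new_title
        return ET.tostring(tree.getroot(), encoding="unicode")

Everything past that (serving it over HTTP, caching the DeArrow lookups, handling reader quirks) is the part I'd hand to the agent at Level 2 or 3.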
> And I really wish I could trust an LLM for that, or indeed any task. But I generally find the answers fall into one of these useless buckets:
> 1. Reword the question as an answer (so common, so useless)
> 2. Trivial solutions that are correct - meaning one or two lines that are valid, but that I could have easily written myself quicker than getting an agent involved, and without the other drawbacks on this list
> 3. Wildly incorrect "solutions". I'm talking about code that doesn't even build, because the LLM can't take proper direction on which version of a library to use, so it keeps giving results based on old information that's no longer relevant. Try resolving a webpack 5 issue: you'll get a lot of webpack 4 answers, and none of them will work, even if you specify webpack 5
> 4. The absolute worst: subtly incorrect solutions that seem correct and are confidently presented as correct. This has been my experience with basically every "oh wow, look what the LLM can do" demo. I'm that annoying person who finds the bug mid-demo.
This is just not my experience with coding agents, which is interesting. You could chalk this up to me being a bad coder, insufficiently picky, fooled by plausible-looking code, whatever. But I carefully read every diff the agent suggests, I force it to keep every diff small enough for that to be easy, I'm usually very good at spotting potential bugs, and I'm very picky about code quality. And the ultimate test passes: the generated code works, even when I extensively test it in daily usage. I wonder if maybe it has something to do with the technologies or specific models/agents you're using? Regarding version issues, that's usually something I solve by pointing the agent at a number of docs for the version I want and having it generate documentation for itself, then @'ing those docs in the prompt going forward, or by using llms.txt if available; that usually works a charm for teaching it things.
> I see that proponents of the tech often gloss over this and don't realise that they're actually spending more time overall, especially when having to polish out all the bugs. Or maintain the system.
I am a very fast, productive coder by hand. I guarantee you, I am much faster with agentic coding, just in terms of the number of days it takes me to finish a feature or greenfield prototype. And I don't think corrections are a confounding factor, because I very rarely have to correct these models. For some time I used an agent that tracks how often, as a percentage, I accept the tool calls (including edits) the agent suggests. One thing to know about me is that I do not ever accept subpar code: if I don't like an agent's suggestion, I don't accept it and then iterate; I want it to get things right on the first try. My acceptance rate was 95%.