
The moment for blockchain as a vehicle for immutable data has passed. Crypto has given up pretending it's anything other than a financial vehicle for gambling.

Also, the recruitment attempts I've gotten from crypto have completely disappeared compared to the peak (it's all AI startups now).


You should still use swap. It's not "2x RAM" as advice anymore, and hasn't been for years: https://chrisdown.name/2018/01/02/in-defence-of-swap.html

tl;dr: give it 4-8GB and forget about it.


I've heard "square root of physical memory" as a heuristic, although in practice I use less than this with some of my larger systems.
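
For concreteness, a quick sketch of how those heuristics compare (Python; it combines the square-root rule with the 4-8GB "forget about it" range from upthread, and the numbers are illustrative rather than gospel):

    import math

    def swap_size_gib(ram_gib: float) -> float:
        # Square root of RAM, clamped to the 4-8 GiB range
        # suggested upthread. Purely illustrative.
        return min(max(math.sqrt(ram_gib), 4.0), 8.0)

    for ram in (8, 16, 32, 64, 128):
        print(f"{ram:>4} GiB RAM -> {swap_size_gib(ram):.1f} GiB swap")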

The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that it causes swap thrashing and a delayed OOM kill if a fast memory leak happens.

That's not so much a rule of thumb as an assessment you can only make after thorough experimentation or careful analysis.

It doesn't take that much experimentation, though. Either set up not enough swap and keep increasing it by a little bit until you stop needing to increase it, or set up too much, and monitor your max use for a while (days/weeks), and then decrease it to a little more than the max you used.
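
The "monitor your max use" half of that is easy to automate. A minimal sketch (Python, Linux-only; it polls /proc/meminfo, and the one-minute interval is an arbitrary choice):

    import time

    def swap_used_kib() -> int:
        # /proc/meminfo reports sizes in kB
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, _, rest = line.partition(":")
                fields[key] = int(rest.split()[0])
        return fields["SwapTotal"] - fields["SwapFree"]

    peak = 0
    while True:
        peak = max(peak, swap_used_kib())
        print(f"peak swap used so far: {peak / 1024:.1f} MiB")
        time.sleep(60)

Run it for a few days or weeks, then size swap to a little above the peak, per the above.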

I went with "set up 0 swap" and then never needed to increase it. I built my PC in 2023, when RAM prices were still reasonable, stuck 128GiB of ECC DDR5 in, and haven't run into any need for swap. Start with 0, turn on zswap, and if you don't have enough RAM then make a swap file & set it up as backing for zswap.
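
If you go that route, the zswap knobs live in sysfs, so it's easy to check what it's configured to do. A quick sketch (assumes a Linux kernel built with zswap; on other kernels the path won't exist):

    from pathlib import Path

    # Dump the zswap tunables (enabled, compressor,
    # max_pool_percent, zpool, ...)
    params = Path("/sys/module/zswap/parameters")
    for p in sorted(params.iterdir()):
        print(f"{p.name} = {p.read_text().strip()}")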

You don't need "thorough experimentation or careful analysis". Just keep free swap space below a few hundred megabytes but above zero.

"Keep swap space below few hundred megabytes but above zero" is a good example of a rule of thumb.

"Make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens" is not.


You need to take every comment about AI and mentally put a little bracketed note beside it noting the author's technical competence.

AI is basically a software-development Eternal September: it is, by definition, allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!

The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.

Like on the topic of this article[0]: it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be,[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company that proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.

But here we are :-)

[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.

[1] while LLMs are the underlying technology https://simonwillison.net/tags/lethal-trifecta/


Presumably the Epstein files, but I'm not on Twitter so I'm not sure.


Yet somehow there is always a version of the same thread that's not mangled https://www.jmail.world/thread/EFTA02512795?view=inbox

Can only assume the DOJ overpaid the law firms like 5x by not specifying that deliveries need to be deduplicated first.


Huh, Noam Chomsky, nice one!

Ooh, that reason. Sorry for having been dense. Thanks!

Jeff Epstein? The New York financier?

It's not clear why this is being upvoted.

There is no sample chapter to check, the two authors own the website and appear to be just some developers who know Go, and frankly the cover looks AI-generated.

As a Go developer, why am I supposed to care about this?


So erm, is this a good thing?

I've only been using it for a month, but it quickly becomes incredibly laggy. I have to kill it completely and reopen it[1]. I've also had some unique hard locks never before seen on my machine[2], only while it's open and doing things.

I also haven't noticed it outputting anything that would warrant it being more complex than ncurses or the current fashionable equivalent?

[1] /clear helps but not 100%, and the time to get back to lag land increases each time

[2] (Framework 13" AMD 7840U 32GB RAM)


It's not a good thing, because it's not a thing. The title is a sarcastic reference to the fact that it uses an order of magnitude more instructions per frame than SM64 did.

Sorry, I was referring less to the sarcastic elements of this article and more the not sarcastic elements of the Claude Code engineer's post on Twitter / Bsky.

Ah, I understand. Yeah, it read to me as a "good thing" in the sense of "look, it's slow, but it has to be slow". That the complexity was inherent and therefore in some sense good.

Locking the cockpit door, and actual on-the-ground policing in terms of monitoring terror cells.

I am getting workable code with Claude on a 10 kloc TypeScript project. I ask it to make plans, then execute them step by step. I have yet to try something larger, or something more obscure.

Most agents do that by default now.

I feel like there is a nuance here. I use GitHub Copilot and Claude Code, and unless I tell it to not do anything, or explicitly enable a plan mode, the LLM will usually jump straight to file edits. This happens even if I prompt it with something as simple as "Remind me how loop variable scoping works in this language?".

This. I feel like folks are living in two separate worlds. You need to narrow the aperture and take the LLM through discrete steps. Are people just saying it doesn't work because they are pointing it at 1M loc monoliths and trying to one-shot a giant epic?

AI was useless for me on a refactor of a 20 kloc repo, even after I gave examples of the migrations I wanted in commits.

It would correctly modify a single method. I would ask it to repeat for the next one and it would fail.

The code that our contractors are submitting is trash and very high loc. When you inspect it you can see that unit tests are testing nothing of value.

    // stubs the dependency, then asserts the stubbed value
    // against itself; the real code is never exercised
    when(mock.method(foo)).thenReturn(bar);
    assertEquals(bar, bar);
Stuff like that.

It's all fake coverage, for fake tests, for fake OKRs.

What are people actually getting done? I've sat next to our top evangelist for 30 minutes pair programming, and he just fought the tool, saying something was wrong with the db, while showing off some UI I don't care about.

That seems to be the real issue to me. I never bother wasting time with UI and just write a tool to get something done. But people seem impressed that AI did some shitty data binding to a data model that can't do anything, but is pretty.

It feels weird being an avowed singularitarian who is adamant that these tools suck right now.


I'm using Claude in a giant Rust monorepo. It's really good at implementing HTTP handlers and threaded workers when I point it at prior examples.

This is the fracture in the industry I don't think we are talking about enough.

It overwhelms everyone's ability to keep track of what it's doing. Some people are just no longer keeping track.

I have no idea if people are just doing this to toy projects, or real actual production things. I am getting the sneaking suspicion it's both at this point.


Orchestration buys parallelism, not coherence. More agents means more drift between assumptions. Past a point you're just generating merge conflicts with extra steps.

Look, I have no idea if this is related, but talking to other developers recently, I have noticed that the addiction/allure of the speed that coding with AI agents gives you is leading to a relaxation of their usual quality bar. This doesn't even feel like the evil overlords whipping them harder; it is self-inflicted.

When you can get multiple different agents all working on things and you are bouncing between them, careful review of their code becomes the bottleneck. So you start lowering your bar to "good enough", where "good enough" is not really good enough. It's a new good enough: you squint at the code, and as long as the shape is vaguely OK and it works (where "works" means you click around a bit and it seems fine), it passes.

Over time you lose your "theory"[1] of the software, and I would imagine that makes you effectively lower your bar even further, because you are less attached to what good should look like.

This is all anecdotal on my end, but it does feel like quality as a whole in the industry has tanked in the last maybe 12 months? It feels like there are more outages than normal. I couldn't find a good temporal outage graph, but if you trust this: https://www.catchpoint.com/internet-outages-timeline, the number of outages in 2025 is orders of magnitude up on 2024.

Maybe this is because there are way more, maybe this is because they are now tracking way more, I'm not sure. But it definitely _feels_ like we are in for a bumpy ride over the next few years.

[1] in the Programming as Theory Building sense: https://gareth.nz/ai-programming-as-theory-building.html


Exactly: half of a system exists as code, but the other half exists as a mental model in the minds of the devs. With AI, the former will now much more quickly outrun or deviate from the latter, and then the problems of long-term reliability, maintainability, and confidence in validation and delivery are just beginning.

Exactly right, and by the time you get that theory back, could you have just written it all yourself?

Do you get the impression the industry cares about quality vs "good enough" plus cutting costs?

"The industry" is not that homogeneous. B2B Software firms keeping their products in maintenance for decades are very different from your B2C mobile gaming app creator.

Agreed. Combine that with job rewards for "impact" over anything else, plus "move fast and break things", and it's hardly a surprise.
