TacticalCoder's comments | Hacker News

> ... and the mix of ingredients that doesn't trigger any reflux

Ah, reflux! I've drunk way too much coffee for as long as I can remember and recently asked my doc about it: he told me that if I had no reflux, I simply shouldn't worry about it. Some people get reflux with coffee, others don't. I drink more coffee than 99% of the population and get zero reflux, and have for decades.

It's a cool article, but in a way regular coffee has become the new instant coffee: as my coffee machine is often already warm (my wife is also a heavy coffee drinker), it's actually quicker to have my fully automatic machine grind the beans and make a coffee than it'd take to boil water for an instant one. Same for the people doing the (very costly, compared to beans) capsule thing: it's ultra quick (and one of the reasons capsule coffee like Nespresso conquered so many households).

P.S: I'll try your mocha trick!


> I drink more coffee than 99% of the population

How much is that?


I could only find an article claiming 4% drink 6+ cups per day, so a 99th-percentile coffee drinker must go well beyond that. I'm guessing at least 2 litres per day.

> And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

It's exactly the same for coding.


> ... my experience with trying to get them to sort out any kind of issue with their services makes me reluctant to spend any money with them.

When you pay for Google Workspace you are the customer, not the product, and they do answer phone calls for support. The only two times my wife and I needed them for our SMEs, they picked up the phone and helped us resolve our issues. Super professional, too. We haven't needed to call them in something like 8 years now.

Don't know about Pixel phones and Google One subscriptions, but for SMEs Google Workspace is a godsend: it's incredibly cheap per employee and it's the way out of Microsoft mediocrity. Everything only requires a browser, no matter the OS (my wife works from Linux and has now added a Mac Mini, for example): Windows can, at long last, get the middle finger in SMEs.

I'll forever be thankful to Google for allowing me to help many people get rid of Microsoft products, including Windows.


> They're going to try to gradually push laws to make it so that you'll need a government issued signature to do anything.

And in the EU it's already nearly the case. The dystopian horror that KYC/AML has become for honest citizens is beyond belief. And of course they're hiding behind the excuse that "bad guys are laundering money": but going after actual drug dealers? That, of course, they're not doing. We now have articles wondering whether Belgium (where most of the EU institutions are based and where all these totalitarian laws get passed) has become a "narco-state" (one where criminals make the rules).

People's lives can be ruined when some employee, somewhere, decides he wants to bump his SAR (Suspicious Activity Report) quota: you can have a real-estate transaction fail (and hence, on top of it, have to pay a 10% penalty to the other party) if a notary, a bank employee, or a real-estate agency employee gets nostalgic for Gestapo times and decides to act like a good little nazi (yes, Godwin's law: we're literally talking about totalitarianism here).

I recently had a notary's employee bother my brother for the source of funds of an apartment he bought... a quarter of a century ago. A quarter of a century ago, and the employee talked to my brother as if he were a criminal because he no longer had access to a bank wire transfer from 25+ years back. It's crazy, because the exact same controls had already been done 25+ years ago when he bought the apartment, and the notary's employee knows that full well. (Regarding that case, my brother is currently looking into the national federation of notaries and is going to file a complaint: he's got emails from that notary's employee that are totally out of line.)

The problem is that way too much power over the lives of others is put into the hands of petty people: petty bank employees, petty notary employees, petty public servants. The same kind of people who were all too happy to out Jews during WWII and who made sure the trains left on time.

I previously kept a folder where every single money transfer of more than 10 K EUR was saved: I now do it even for transfers below 5 K EUR. And these are to be kept forever, for I know that I, my wife, or my daughter shall invariably meet motherfuckers asking for "proof of the source of funds from 30 years ago, when your father bought that collectible car" (worth less than 20 K back then btw, but worth 6 digits now).

Just fuck these systems, fuck anyone working on them, and fuck all the nazis participating in them.


> What use is there in display frame rates above 60 fps?

On a CRT monitor the difference between running at 60 Hz and even a just-slightly-better 72 Hz was night and day: unbearable flickering vs. a much better experience. I remember having some little Windows utility that'd let the refresh rate be 75 (not 72 but 75). Under Linux I was writing modelines myself (those were the days!) to get the refresh rate and resolution I liked: I was running "weird" resolutions like 832x604 @ 75 Hz instead of 800x600 @ 60 Hz, just to gain a little more screen real estate and a better refresh rate.
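
For those who never wrote one: a modeline spells out the pixel clock and the horizontal/vertical timings by hand. A sketch of what it looked like in xorg.conf, using the classic 832x624 @ 75 Hz Mac timings as an illustration (my own exact numbers are long lost):

  Section "Monitor"
    Identifier "CRT"
    # pixel clock in MHz, then hdisp hsyncstart hsyncend htotal,
    # then vdisp vsyncstart vsyncend vtotal
    Modeline "832x624_75" 57.284 832 864 928 1152 624 625 628 667 -HSync -VSync
  EndSection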

Now that monitors are all flat panels, I honestly have no idea whether 60 Hz vs 120 Hz (or whatever) changes anything for "desktop" usage. I don't think the problem CRTs had, of the image fading too quickly at 60 Hz, is still present. But I'm not sure about it.


120 FPS vs 60 FPS is definitely noticeable for desktop use. Scrolling and dragging are night and day, but even simple mouse cursor movement is noticeably smoother.

I had a machine (an AMD 3700X with 32 GB of RAM and a fast NVMe SSD) on which I used to run Debian. Then, about 2.5 years ago, I bought a new one and gave my wife the 3700X: I figured she'd be more at ease with Ubuntu, so that's what I installed on it.

I couldn't understand why everything was so slow compared to Debian, and I didn't want to bother looking into it, so...

After a few weeks I got rid of Ubuntu and installed Debian for her, with the simple IceWM window manager (I use the tiling Awesome WM, but that's too radical for my wife), and she loves it.

She basically manages her two SMEs entirely from a browser: Chromium or Firefox (but a fork of Firefox would do too).

It has worked so well for years now that for her latest hire she asked me to set up the same config. So she's now got one employee on a Debian machine with IceWM. The other machines are still on Windows, but the plan is to keep only one Windows machine (just in case) and move the rest to Debian too.

Unattended upgrades, a trivial firewall (everything OUT allowed, nothing IN except related/established traffic), and that's it.
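
In nftables terms that's roughly this (a minimal sketch; I'm assuming the standard inet filter table plus the usual loopback rule):

  table inet filter {
    chain input {
      type filter hook input priority 0; policy drop;
      iif "lo" accept
      ct state related,established accept
    }
    chain output {
      type filter hook output priority 0; policy accept;
    }
  }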


I had used Ubuntu back in the day, and when I came back to Linux a while back I immediately installed it again.

I don't remember all of my frustrations, but I remember having a lot of trouble with snap. Specifically, it really annoyed me that the default install of Firefox was the snap version instead of a native package. I want that to be an opt-in kind of thing. I found that Flatpak just worked better anyway.
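
Getting the Flatpak Firefox instead is roughly this (a sketch, assuming the standard Flathub remote and its org.mozilla.firefox app id):

  flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flathubrepo
  flatpak install flathub org.mozilla.firefox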

I almost made the switch to Arch, but I've been pretty happy running Debian sid (unstable) since. The Debian installer is just friendlier, to me, for getting encrypted drives and partitions set up how I want.

It's not for everyone, but I like the structured rolling updates of sid, and I value access to the Debian ecosystem too much to switch to something else at this point.

I use Sway with a Radeon card for my primary display and have a secondary Nvidia card for games and AI stuff.

It has its warts, but I love my Debian+Sway setup.


> I wonder what adaptations will be necessary to make AIs work better on Lisp.

Some are going to nitpick that Clojure isn't as lispy as, say, Common Lisp, but I did experiment with the Claude Code CLI and my paid Anthropic subscription (Sonnet 4.6 mostly) on Clojure.

It is okay-ish. I got it to write a topological sort and pure (no side effects) functions taking in and returning not-totally-trivial data structures (maps in maps with sets and counters, etc.). But apparently it's got problems with...

... drumroll ...

The number of parentheses. It's so bad that the author of figwheel (a successful ClojureScript project) is working on a Clojure MCP server that fixes the parens in Clojure code spouted by AIs (well, the project does more than that, but the description literally says it's "designed to handle Clojure parentheses reliably").

You can't make that up: there's literally an issue with the number of closing parens.

Now... I don't think giving an AI access to a Lisp REPL and telling it "do this by bumping into the guardrails left and right until something works" is the way to go (yet?) for Clojure code.

I'm passing it a codebase (not too big, so no context-size issues) and I know what I want: I tell it "write a function which takes this data structure and that other parameter, the function must do xxx, and it must return the same kind of data structure". Before that I told it to also implement tests (relatively easy, for they're pure functions) for each function it writes, and to run the tests after each function it implements or modifies.

And it's doing okay.
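
To give an idea of the kind of pure, data-in/data-out function I mean, here's a rough Python sketch of a Kahn-style topological sort (the real thing was Clojure; all names here are mine):

  from collections import deque

  def topo_sort(graph):
      # graph: {node: set of nodes it depends on}; every dependency
      # must also appear as a key. Returns nodes with each dependency
      # before its dependents; raises on cycles.
      indegree = {n: len(deps) for n, deps in graph.items()}
      dependents = {n: [] for n in graph}
      for node, deps in graph.items():
          for dep in deps:
              dependents[dep].append(node)
      queue = deque(n for n, d in indegree.items() if d == 0)
      order = []
      while queue:
          n = queue.popleft()
          order.append(n)
          for m in dependents[n]:
              indegree[m] -= 1
              if indegree[m] == 0:
                  queue.append(m)
      if len(order) != len(graph):
          raise ValueError("cycle detected")
      return order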


I think you're right. Try asking GPT-5 this:

> Are the parentheses in ((((()))))) balanced?

There was a thread about this the other day [1]. It's the same issue as "count the r's in strawberry": tokenization makes it hard to count characters. If you put that string into OpenAI's tokenizer [2], this is how the characters are grouped:

Token 1: ((((

Token 2: ()))

Token 3: )))

Which of course isn't at all how our minds would group them together in order to keep track of them.
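
You can reproduce that grouping locally; a sketch assuming the tiktoken package (the exact splits vary by encoding):

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")
  s = "((((())))))"
  # decode each token id on its own to see the character grouping
  print([enc.decode([t]) for t in enc.encode(s)])
  # something like ['((((', '()))', ')))'] depending on the encoding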

[1] https://news.ycombinator.com/item?id=47615876 [2] https://platform.openai.com/tokenizer


This is mostly because people wrongly assume that LLMs can count things. Just because it looks like they can, doesn't mean they can.

Try to get your favourite LLM to read the time from a clock face. It'll fail ridiculously most of the time, and come up with all kinds of wonky reasons for the failures.

It can code things it has seen the logic for before. That's not the same as counting: that's outputting what it's previously seen as proper code (and even then it often fails, probably 'cos there's a lot of crap code out there).


Don’t ask the LLM to do that directly: ask it to write a program to answer the question, then have it run the program. It works much better that way.

But for Lisp, a more complex solution is needed. It's easy for a human Lisp programmer to keep track of which closing parenthesis corresponds to which opening parenthesis, because the editor highlights matching pairs as they are typed. How can we give an LLM that kind of feedback as it generates code?
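
One option is a checker in the loop that reports the exact offset of the first mismatch and feeds that back after every generation. A minimal sketch in Python (a hypothetical helper; it ignores parens inside strings and comments):

  def first_paren_error(code):
      # return (offset, message) for the first mismatch, else None
      stack = []
      for i, ch in enumerate(code):
          if ch == "(":
              stack.append(i)
          elif ch == ")":
              if not stack:
                  return (i, "unmatched ')'")
              stack.pop()
      if stack:
          return (stack[-1], "unclosed '('")
      return None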

That's a different question than the one you asked. Are you saying LLMs are generating invalid LISP due to paren mismatching?

If the LLM is intelligent, why can’t it figure out on its own that it needs to write a program?

The answer is self-evident.

Does the AI's performance drop if it uses letters as tokens rather than multi-letter tokens?

Try asking an LLM a question like "H o w T o P r o g r a m I n R u s t ?": each letter, separated by spaces, will be its own token, and the model will understand just fine. The issue is that computational cost scales quadratically with the number of tokens, so processing "h e l l o" is much more expensive than "hello". "hello" has meaning; "h" has no meaning by itself. The model has to waste a lot of computation forming words from the letters.
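
The cost difference is easy to see by counting tokens (again a sketch assuming tiktoken; counts vary per encoding):

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")
  print(len(enc.encode("hello")))      # 1 token
  print(len(enc.encode("h e l l o")))  # roughly one token per letter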

Our brains also process text entire words at a time, not letter-by-letter. The difference is that our brains are much more flexible than a tokenizer, and we can easily switch to letter-by-letter reading when needed, such as when we encounter an unfamiliar word.


I am lazy: when an LLM messes up parentheses while working with any Lisp language, I just quickly fix the mismatch myself rather than try to fix the tooling.

Sometimes LLMs astonish me with the code they can write. Other times I have to laugh or cry.

As an example, back when Claude 3.5 was the latest, I asked it to indent all the code in my file by four more spaces. The file was about 700 lines long. I got a busy spinner for two minutes, then it said "OK, first 50 lines done, now I'll do the rest", then another busy spinner, and then "this is taking too long, I'm going to write a program to do it", which of course it had no problem doing. The point is that it is superhuman at some things and completely brain-dead at others, and counting parens is one of the things I wouldn't expect it to be good at.
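
Which makes sense: the program is a couple of lines, so writing and running it beats editing 700 lines token by token. A sketch of the kind of thing it wrote (file name hypothetical):

  # add four spaces of indentation to every line of the file
  with open("code.lisp") as f:
      lines = f.readlines()
  with open("code.lisp", "w") as f:
      f.writelines("    " + line for line in lines)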


I think LLMs are great at compression and information retrieval but poor at reasoning. They seem to work well with popular languages like Python because they have been trained on a massive amount of real code. As several publications have demonstrated, their performance on niche languages is quite variable.

I used to find it better to shortcut the AI by asking it to write python to do a task. Claude 4.6 seems to do this without prompting.

Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.


That's you, at the time, not knowing LLM fundamentals with regard to context management.

That was me, at the time, kicking the tires to understand what it was good at and what it wasn't. If I actually wanted to indent a file by four spaces, it would take me less time in my editor than prompting the LLM to do it, even if the LLM had been capable of it.

I had that issue with the AI while doing some CL dabbling.

Things, on the whole, were fine, save for the occasional rogue (or missing) parenthesis.

The AI would just go off the rails trying to solve the problem. I told it that if it ever encountered the problem it should let me know and not try to fix it; I'd do it myself.


AIUI, in that thread they're saying "0.51x" the perf on a 96-core arm64 machine, and they're also saying they cannot reproduce it on a 96-core amd64 machine.

So it's not going to affect everybody who's both running PostgreSQL and upgrading to the latest kernel. The conditions seem to be: arm64, a shitload of cores, kernel 7.0, and the current version of PostgreSQL.

That is not going to be 100% of the installed PostgreSQL DBs out there in the wild when 7.0 lands in a few weeks.
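
If you want to know what your own kernel does, the preemption model shows up in the build config and, on PREEMPT_DYNAMIC kernels, at runtime too (a sketch; paths vary per distro, and debugfs must be mounted):

  # how the running kernel was configured
  grep -E '^CONFIG_PREEMPT' /boot/config-$(uname -r)

  # on PREEMPT_DYNAMIC kernels, the active model is shown in parentheses
  cat /sys/kernel/debug/sched/preempt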


It's a huge issue with ARM-based systems that hardly anyone uses or tests things on them (in production).

Yes, Macs going ARM has been a huge boon, but I've also seen crazy regressions on AWS Graviton (compared to how it's supposed to perform) on .NET (and Node as well), which frankly I have no expertise or time to dig into.

Which was the main reason we ultimately cancelled our migration.

I'm sure this is the same reason why it's important to AWS.


Macs are actually part of the pain with ARM64 Linux, because Linux arm64 distros tend to use 64 kB pages while the Mac hardware supports only 4 kB and 16 kB, and that causes non-trivial bugs at times (funnily enough, I first encountered that at a database company...)

It was later reproduced on the same machine without huge pages enabled. PICNIC?

Yes, I did reproduce it (to a much smaller degree, but mine is just a 48c/96t machine). But it's an absurd workload in an insane configuration. Not using huge pages hurts far more than the regression due to PREEMPT_LAZY does.

With what we know so far, I expect that essentially no real-world workload that isn't already completely falling over will be affected.


So why does it happen only without hugepages? Is the extra overhead / TLB pressure enough to trigger the issue in some way? Or is it because the regular pages get swapped out (which hugepages can't be)?

I don't fully know, but I suspect it's just that, due to the minor faults and TLB misses, there is terrible contention on the spinlock when using 4k pages, regardless of PREEMPT_LAZY (that's easily reproducible). Which is then made worse by preempting more with the lock held.

So perhaps this is a regression specifically in the arm64 code. Or, said differently, maybe it's a performance bug that has been there for a long time but was covered up by the scheduler behaviour that was removed?

The following messages concluded that using huge pages mitigates the regression, while not using huge pages reproduces it.

Could be either of those, or something else entirely. Or even measurement error.

Turns out the AMD machine had huge pages enabled, and after disabling those the regression was there on it too. So arm64 vs amd64 was a red herring.

Of course it's not a nice regression, but you should not run PostgreSQL on large servers without huge pages enabled, so this regression will only hurt people who have a bad configuration. That said, I think these bad configurations are common out there, especially in containerized environments where whoever runs PostgreSQL may not have the ability to enable huge pages.
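
For reference, turning huge pages on for PostgreSQL is two settings; a sketch (the page count is a placeholder you'd size from shared_buffers):

  # postgresql.conf: refuse to start if huge pages can't be used
  huge_pages = on

  # sysctl: reserve 2 MB pages to cover shared_buffers (placeholder count)
  vm.nr_hugepages = 2200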


Still, that big a regression affecting multiple platforms doesn't sound too neat. Did they narrow down the root cause?

That should be obvious to anyone who read the initial message. The regression was caused by a configuration change that flipped the default from PREEMPT_NONE to PREEMPT_LAZY. If you don't know what those options do, use the source. (<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...>)

Yes, I had a good laugh at that. It might technically be a regression, but not one that most people will see in practice. Pretty weird that someone at Amazon is bothering to run those tests without hugepages.

I doubt they explicitly said "I'll run without huge pages, which is an important AWS configuration". They probably just forgot a step. And "someone at Amazon" describes a lot of people; multiply your mental probability tables accordingly.

The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.

Surely they would be testing the configuration(s) that they use in production? They’re not running RDS without hugepages turned on, right?


> The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.

I'd guess they have dozens of people across, say, a Linux kernel team, a Graviton hardware-integration team, an EC2 team, and an Amazon RDS for PostgreSQL team who might at one point or another run a benchmark like this. They probably coordinate to an extent, but not so much that only one person would ever run this test. So yes, it is duplicative. And they're likely intending to test the configurations they use in production, yes, but people just make mistakes.


For production Postgres, I would assume it has close to no effect?

If someone is running Postgres in a serious backend environment, I doubt they are using Ubuntu or even touching 7.x for months (or years). It'll be some flavor of Debian or Red Hat still on 6.x (maybe even 5.x). Those same users won't touch 7.x until there have been months of testing by distros.


Ubuntu is used in many serious backend environments. Heroku runs tens of thousands of instances of Ubuntu (if not more) in its fleet. Or at least it did through the teens and early 2020s.

https://devcenter.heroku.com/articles/stack


Do they upgrade to the new LTS the day it is released?

Ubuntu's upgrade tools wait until the .1 release for LTSes, so a typical installation would wait at least half a year.
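
That's the default policy of do-release-upgrade on LTS installs, and it lives in a tiny config file (sketch):

  # /etc/update-manager/release-upgrades
  [DEFAULT]
  Prompt=lts   # only offer LTS-to-LTS upgrades, and only once the .1 is out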

Not historically.

And they are right; too many junior sysadmins believe that newer = better.

But the reality:

  a) may get irreversible upgrades (e.g. new underlying database structure) 
  b) permanent worse performance / regression (e.g. iOS 26)
  c) added instability
  d) new security issues (litellm)
  e) time wasted migrating / debugging
  f) may need rewrite of consumers / users of APIs / sys calls
  g) potential new IP or licensing issues
etc.

The few good reasons to upgrade something are:

  a) new features provide genuine comfort or performance upgrade (or... some revert)
  b) there is an extremely critical security issue
  c) you do not care about stability because reverting is uneventful and production impact is nil (e.g. Claude Code)
but 99% of the time, if it ain't broke, don't fix it.

https://en.wikipedia.org/wiki/2024_CrowdStrike-related_IT_ou...


On the other hand, I suspect LLMs will dramatically decrease the window between a vulnerability being discovered and that vulnerability being exploited in the wild, especially for open-source projects.

Even if the vulnerability itself is discovered through other means than by an LLM, it's trivial to ask a SOTA model to "monitor all new commits to project X and decide which ones are likely patching an exploitable vulnerability, and then write a PoC." That's a lot easier than finding the vulnerability itself.

I won't be surprised if update windows (for open source networked services) shrink to ~10 minutes within a year or two. It's going to be a brutal world.


Too often I see IT departments use this as an excuse to upgrade only when they absolutely have to, usually with little to no testing in advance, which leaves them constantly wrong-footed by incompatibility issues.

The idea of testing new versions of software in advance (software they'll be forced to use eventually) never seems to occur to them, or they spend so much time fighting fires that they never get around to it.


All fair points. On the other hand, as a general rule, isn't it important to stay on currently supported versions of the software you run?

YMMV, but in my experience projects like PostgreSQL that have been reliable tend to continue to be so.


There is serious as in "corporate-serious" and serious as in "engineer-serious".

I've seen more 5k+-core fleets running Ubuntu in prod than not, over my career. Industries include healthcare, the US government, US government contractors, marketing, and finance.

In other words, the industries that used to run Windows before?

A customer of mine is running on Ubuntu 22.04 and the plan is to upgrade to 26.04 in Q1 2027. We'll have to add performance-regression testing to the plan.

Are you running ARM servers?

"While living in the United States, she promoted Iranian regime propaganda, celebrated attacks against American soldiers and military facilities in the Middle East, praised the new Iranian Supreme Leader, denounced America as the “Great Satan,” and voiced her unflinching support for the Islamic Revolutionary Guard Corps, a designated terror organization."

What a bunch of great individuals! The UN warned, two days ago, that in 2026 the Islamic guards in Iran have already executed more people than the average for a whole year. Some 600 people. That's not even talking about the 30 000+ who were slaughtered for demonstrating against the regime.

Does anyone have anything to say to criticize ICE for those removals? I mean, really: what is the argument of those saying ICE shouldn't deport people who, on US soil, hail the great deeds of Islamist terrorists?


I don't endorse their views, but "hailing the great deeds of Islamic terrorists" is protected speech.

Lying about your asylum claims (these individuals traveled back to Iran, twice, for vacation) is in fact illegal.

That is an important piece of context that isn't mentioned in the linked article. Instead, the state department makes it seem like they were kicked out for espousing opinions that the government doesn't like.

More: sounding like a shill for the Iranian regime raises questions about the legitimacy of the asylum claims, completely apart from any travel.

Studies on vitamin D supplementation are the most important ones, because in many cases the correlation between low vitamin D and "lots of bad stuff" is established. What's not always known is whether supplementation helps.

Now, as I'm a simple man, and as I know there are no side effects to vitamin D supplementation [1], I take my supplements.

[1] Except in the minds of intellectually dishonest people, who will take that one case of a person who took 10 000 IU of vitamin D per day for 10 years and ended up with this or that (non-life-threatening) issue. But those same people would have no problem telling you to "be careful while drinking water, for one person who drank 20 liters of water per day got into trouble!", so do like me: ignore those pharma-lab-paid shills.


Of course adequate vitamin D3 softgel supplementation helps, for the simple reason that it almost always effectively raises the blood level of vitamin D, thereby maintaining its sufficiency. Those who struggle with such simple logic will struggle hard in life.

Elderly people, or those with heavy sunlight exposure, unhealthy kidneys, or a magnesium deficiency, can exceed the target blood range of vitamin D even with 5K IU of vitamin D3 per day. I consider anything over 60 ng/mL to be strictly in excess of the target range.

