The fact that you, a random internet guy, know this makes me believe that the world's top experts on measuring this exact thing probably thought of it, and if there is a mysterious divergence, this probably doesn't account for it.
I figured as much, but that's kinda why I'm asking what made it mysterious. I actually hadn't heard of it being seen as mysterious until today - that comment is the first time I've seen that. Until today I thought it was unsurprising to scientists.
> The reason for this drift has eluded physicists who have dedicated their careers to the SI unit of mass. No plausible mechanism has been proposed to explain either a steady decrease in the mass of the IPK, or an increase in that of its replicas dispersed throughout the world
I don't disagree with you, but on the other hand: if people claim something in a discussion thread we need to be able to ask them to back those claims up. Otherwise anyone can just assert anything.
Sure, the explanation might not fit in the margins here, but then a link to a source that does attempt to explain it would be fine too.
I think the real question is: where is the evidence that scientists find it mysterious? That doesn't need any sort of explanation, just a reference for a claim made by a commenter. As with the other commenter, I've heard about the mass divergence before, but never that anyone found it surprising.
As has even been discussed on HN over the years (!), the cause is unknown.* This is one of the motivations to replace the official kilogram with a definition based on measurable natural quantities, as had been done with the second and the meter. It took until 2018 to do so: https://www.nist.gov/si-redefinition/kilogram-introduction
* it’s not like there aren’t good theories, but you can’t experiment directly on “the” kg because you can’t risk changing it!
I'm not seeing the cause described as unknown in that link, and even if it were, an unknown cause wouldn't by itself make anything mysterious.
Like if you find that your car has broken down after 200k miles, you might not be able to determine the cause, but it wouldn't exactly be some kind of mysterious physical phenomenon that would puzzle scientists. Obviously, something wore out. Why/how is this any different?
Did you click on the other kilogram links (or the other metric links, which are also interesting)?
A useful paragraph is the following: "The trend during the past century had been for most of BIPM's official copies to gain mass relative to the IPK, although by somewhat different amounts, averaging around 50 micrograms (millionths of a gram) over 100 years. But an alternative explanation is that the IPK was losing mass relative to its copies. Even more likely, it was a combination of both."
Those pages are for the lay audience but you can do your own web search (probably even an algolia search of HN) to know more.
In general, it's often easy to come up with an explanation of some new phenomenon, but to answer the question you still have to do science... which often simply confirms intuition, but not always. My comment was less about the reason and more about the difficulty of doing experiments on what was literally the kg, no matter how it fluctuated.
My theory: people are treating them with different degrees of reverence. They are careful to polish anything off the IPK, causing it to lose weight gradually through abrasion. The others gain weight through random adhesions, perhaps oils picked up by gloves used to handle them.
I'm just some random guy on the internet, I know. It's fun to have theories!
One reason questions like this are important is that they give the audience around the asker an opportunity to learn something new. Isn't it important to be able to express curiosity, especially for the benefit of others?
Isn't it more like: "exploitation of a market inefficiency renders the market efficient over time", or something to that effect? Or rather, (others) making money from trading is what makes it hard (for you) to make money from trading.
Sure, depends on how you encounter it. Most often on forums someone will show up wanting to hear about how to write a strategy, and they are rebuffed with "nobody would ever publish that".
They're not being arrogant, as it seems like the parent is genuinely curious as to why such weights diverged, as am I. It is still a good question to ask, since I also have not heard a good answer to that question.
I find that it saves time to just start by assuming the experts haven't thought of X because of how many times I've seen assuming that they _have_ thought of X turn out to be a poor assumption, across many domains.
I don't really agree with the OP, but I do think there is at least one, possibly two such examples. The pretty clear one is nutrition: the vast majority of studies and recommendations made over the years are pure bullshit, and quite transparently so. They either study a handful of people in detail, or a huge swathe of population in aggregate, and get so many confounding variables that there is zero explanatory power in any of them. This is quite obvious to anyone, but the field keeps churning out papers and making official recommendations as if they know anything more about nutrition than "missing certain key nutrients can cause certain diseases, like scurvy for missing vitamin C".
Nutrition in particular is a scenario where major corporations willfully hid research about sugar for years and years and funded research attacking fat content instead, which, it turns out, is actually pretty benign. Perfect example.
Can't speak for OP, but I've had more than a few similar experiences (from both sides of the fence FWIW).
I can think of one example in software deployment frequency. The observation (many years ago), was that it's painful and risky (therefore, expensive) to deploy software, so we should do it as infrequently as the market will allow.
Many companies used to be on annual release schedules, some even longer. Many organizations still resist deploying software more than every couple/few weeks.
~15 years ago, I was working alongside the other (obviously ignorant) people who believed that when something is painful, slow and repetitive, it should be automated. We believed that software deployment should happen continuously as a total non-event.
I've had to debate this subject with "experts" over and over and over again, and I've never met a single person who, once migrated, wanted to go back to the nightmare of slow, periodic software deployments.
I don't see why a slow deployment cadence is a nightmare. When I've worked in that setting, it mostly didn't matter to me when something got deployed. When it did (e.g. because something was broken), we had a process in place to deploy only high priority fixes between normal releases.
Computers mostly just continue to work when you don't change anything, so that meant after the first week or so after a release, the chance of getting paged dropped dramatically for 3 months.
Good question. The nightmare was mostly organizational.
The amount of politicking was incredible when it came to which features would be in the next push and which features would slip. The planning meetings, the arguments, the capability slashing, the instability that came from all these political decisions. It was not great and this enormous amount of churn literally disappeared when they moved to daily pushes.
That's more "the experts had a (wrong) opinion on something" than "the experts overlooked something obvious". They didn't overlook it, they thought about it and came to a conclusion.
And if by "many years ago" you refer to a period where software deployment was mostly offline and through physical media, then it was indeed painful and risky (and therefore expensive). The experts weren't wrong back then.
This isn't to agree with the parent comment, but wouldn't this situation itself be an answer to your question (assuming the claim is true)? Laymen like me easily anticipated mass divergence, but purportedly scientists have been surprised by it.
The procedure of multiple weights being calibrated against a single standard is _predicated_ on anticipated mass divergence.
The mystery being discussed is that, even after the obvious sources of error are allowed for, there is still a discrepancy, and it's not easy to determine how much of that discrepancy is with the weights being recalibrated vs the test standard they're being calibrated to. None of which is shocking to anyone involved, just puzzling.
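To make that last point concrete, here's a toy sketch (hypothetical numbers, just the ~50 microgram order of magnitude from the quoted paragraph): calibrations only ever record the difference between a copy and the IPK, so "the IPK lost mass" and "the copies gained mass" produce identical measurement records.

    # Toy illustration (made-up scenarios): only relative offsets are
    # observable, so the two drift stories below are indistinguishable
    # from the calibration data alone.
    ipk_drift_scenarios = {
        "IPK loses mass": {"ipk": -50e-6, "copy": 0.0},      # grams per century
        "copies gain mass": {"ipk": 0.0, "copy": +50e-6},
    }

    for name, drift in ipk_drift_scenarios.items():
        # What a calibration actually records: copy minus IPK.
        observed_difference = drift["copy"] - drift["ipk"]
        print(f"{name}: observed offset = {observed_difference * 1e6:.0f} ug")
    # Both scenarios print the same +50 ug offset.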
I think the surprising bit is that they measured it, saw it happen, and still don't have an exact reason. Anything else is a good guess. Of those, people have plenty.
* [insert every example of "15 year old unknown vulnerability in X found" here]
* have to be a bit vague here, but while working as a research scientist for the US Department of Defense I regularly witnessed and occasionally took part in scenarios where a radical idea turned "expert advice" on its head, or some applied thing completely contradicted established theoretical models in a novel or interesting way. Consistently, the barrier to such advancements was always "experts" telling you that your thing should not / could not work, blocking your efforts, withholding funding, etc., only to be proven wrong. Far too many experts care more about maintaining the status quo than actually advancing the field, and a concerning number are actually on the payroll of various corporations or private interests to actively prevent such advancements.
* over the last 30 years in the AI field, there have been a few major inflection points: Yann LeCun's convolutional neural networks and his more general idea that ANNs stood to gain something by loosely mimicking the complexity of the human brain, for which he was originally ridiculed and largely ignored by the scientific community until convolution revolutionized computer vision; and the rise of large language models, which came out of natural language processing, a whole branch of AI research that had been disregarded for decades and was definitely not seen as something that might ever come close to AGI.
* going back further in history there are plenty of examples, like quantum mechanics turning the classical model on its head, Galileo, etc etc. The common theme is a bunch of conservative, self-described experts scoffing at something that ends up completely redefining their field and making them ultimately look pretty silly and petty. This happens so frequently in history that I think it should just be assumed at all times in all fields, as this dynamic is one of the few true constants throughout history. No one is an expert, no one has perfect knowledge of everything, and the next big advancement will be something that contradicts conventional wisdom.
Admittedly, I derived these beliefs from some of the Socratic teachings I received very early in life, around 6th grade or so back in the late 90s, but they have continually borne fruit for me. Question everything. When debugging, question your most basic assumptions first: "Is it plugged in?", etc.
It's sort of at a point these days where, if you want to find a fruitful research idea, it's probably best to just browse through conventional wisdom on your topic and question the most fundamental assumptions until you find something fishy.
I missed this comment first time around, but I really appreciate this write-up.
I apologize for being a bit snide in my original challenge, I'm fairly sensitive to the "why don't you just" attitude, but I agree with pretty much everything you have to say here.
I have a very similar approach around enumerating and testing assumptions when the going gets tough, and similarly have found that has enabled me to solve a handful of problems previously claimed impossible.
I think the tautological issue with our initial framing is that if you're able to easily identify these problems you probably are a subject matter expert. In many ways it's the outsider art of analytical problem solving - established wisdom should not be sacred.
Can they perform the calculations to estimate the fluctuations? Can they write an informal explanation about what happens to the gluons? I think the experience of seeing a factoid on the internet is being given too much weight here.