There is a phrase I like: don't fail with abandon. Just because the NSA broke public trust doesn't make it ok for anything like it to happen again.
This data breach from DOGE is worse in many ways. DOGE employees and contractors have fewer scruples and guardrails. This data has been used primarily for Trump-and-Company's advantage, all to the detriment of American values such as supporting democracy and reasonable capitalism while standing against authoritarianism and kleptocracy.
The NSA's bulk metadata collection, while later found to violate FISA and likely unconstitutional, operated under a formal legal architecture: statutory authorization via Section 215 of the PATRIOT Act (from 2006 onward), FISA Court orders renewed approximately every 90 days, and at least nominal congressional oversight — though most members were kept uninformed of the program's scope until 2013.
With the exception of financial or economic analysis [1], relatively few journalists get paid to write with quantitative nuance, much less awareness of probability. These articles are rarely written to prioritize harms or predict the future. Instead, everything feels like an emergency. [2]
There is a large disconnect between the amount of combined effort and thinking that goes into prediction markets versus the glibness with which so many people write about them. It is almost as if people lose the plot: in capitalist countries, many financial flows are heavily informed by futures markets, which share many of the same characteristics as prediction markets.
[1]: also: fraud investigations or other areas where rigor is expected
[2]: Even the better sources such as The Atlantic are somewhat advertising-fueled, driven toward “engaging” content, prioritizing interesting ideas rather than practical relevance, dumbed down to maybe a high school reading level, with hardly a trace of showing one’s methods. I don’t think I’ve ever seen any backing analysis in the form of a spreadsheet or (heaven forbid!) source code of a simulation. This is not meant to point fingers at writers or journalists; we just have to recognize the context they live in. If we want detailed and careful analysis, we need to find ways to build systems that provide it. What we have now is a joke compared to what is possible.
I’ve read several dozen comments, but I haven’t come across the following stated quite this way:
One option is to politely ask someone if they have headphones and/or to turn it down.
Cont’d from ^: you can often lubricate the situation by giving some “reason” that lets the other person save face. You can be genuine or creative or both. (You might say you just really had a rough day and would appreciate quiet.)
As a point of comparison, think about how many drivers forget to turn on their headlights even after the sun goes down. Some fraction of people screw up in spite of their self-interest.
If you are genuinely afraid of speaking to someone, listen to your gut. Just try to check this against reality: if your fear fires at 1000X the rate of actual crime in the area, you might be miscalibrated.
You might consider talking to Mr Blaring McLoud without mentioning your annoyance at first. This might help get you one step closer to asking nicely later.
Some people are genuinely unaware, so erring on the side of kindness is a smart step one. Even when asking nicely without snark or impugning someone’s pride, you might still face rude behavior. I like the phrase “don’t mistake kindness for weakness.” You can walk away and figure out what you want to do next, knowing that you gave the other person a chance.
> The stochastic parrot LLM is driven by nothing but eagerness to please. Fix that, and the parrot falls off its perch.
I see some problems with the above comment. First, using the phrase “stochastic parrot” in a dismissive way reflects a misunderstanding of the original paper [1]. The authors themselves do not weaponize the phrase; the paper was about deployment risks, not capability ceilings. I encourage people who use the phrase to re-read the paper, make sure they can articulate what it claims, and distinguish that from their own usage.
Second, what does the comment mean by “fix that, and the parrot falls off the perch”? I don't know. I think it would need to be reframed in a concrete direction if we want to discuss it productively. If the commenter can make a claim or prediction of the "If-Then" form, then we'd have some basis for discussion.
Third, regarding "eagerness to please": this comes from fine-tuning. Even without it (RLHF or similar), LLMs have significant prediction capabilities from pretraining alone (the base model).
All in all, I can't tell if the comment is making a claim I can't parse and/or one I disagree with.
[1]: "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" (Bender et al., 2021)
The backing method doesn’t matter as long as it works. This is clear from good RAG survey papers, Wikipedia, and (broadly) understanding the ethos of machine learning engineers and researchers: specific implementation details are usually means to an end, not definitional boundaries.
“The equiprobability bias (EB) is a tendency to believe that every process in which randomness is involved corresponds to a fair distribution, with equal probabilities for any possible outcome. The EB is known to affect both children and adults, and to increase with probability education. Because it results in probability errors resistant to pedagogical interventions, it has been described as a deep misconception about randomness: the erroneous belief that randomness implies uniformity. In the present paper, we show that the EB is actually not the result of a conceptual error about the definition of randomness.”
You can also find an ELI5 Reddit thread on this topic where one comment summarizes it as follows:
“People are conflating the number of distinguishable outcomes with the distribution of probability directly.”
> When choosing between convenience and privacy, most people seem to choose convenience
But they wish it had been convenient to choose privacy.
For many, it may be rational to give away privacy for convenience. But many recognize the current decision space as suboptimal.
Remember smoke-infused restaurants? Opting out meant not going in at all. It was an experience that came home with you. And lingered. It took a tipping point to "flip" the default. [1]
[1]: The Public Demand for Smoking Bans https://econpapers.repec.org/article/kappubcho/v_3a88_3ay_3a... "Because smoking bans shift ownership of scarce resources, they are also hypothesized to transfer income from one party (smokers) to another party (nonsmokers)."
I like the way this article is organized; I learned a lot from it. I can't recall seeing this style of presentation before. To summarize and maybe get you interested:
All the combinations of having-and-not-having these properties give us 4 interesting kinds of type:
1. can be used any number of times (no name - the default)
2. can’t be used more than once (affine™)
3. must be used at least once (relevant™ - this one is a decent name)
4. must be used exactly once (linear™)
I don't often dig into type theory, but this article made it fun and interesting!
> I don’t see a world in which our tools (LLMs or otherwise) don’t learn this.
I agree, but maybe for different reasons. Refactoring well is a form of intelligence, and I don't see any upper limit to machine intelligence other than the laws of physics.
> Refactoring is a very mechanistic way of turning bad code into good.
There are some refactoring rules of thumb that can seem mechanistic (by which I mean deterministic based on pretty simple rules), but not all. Nor is refactoring guaranteed to satisfy every reasonable definition of "good software". Sometimes the bar requires breaking compatibility with the previous API / UX. This is why I agree with the sibling comment which draws a distinction between refactoring (changing internal details without changing the outward behavior, typically at a local/granular scale) and reworking (fixing structural problems that go beyond local/incremental improvements).
Claude phrased it this way – "Refactoring operates within a fixed contract. Reworking may change the contract." – which I find to be nice and succinct.
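A toy sketch of that distinction (hypothetical function names, my own illustration): the refactored version keeps the contract fixed while cleaning up the internals; the reworked version changes the contract itself, so callers must adapt.

```rust
// Before: imperative sum of squares.
fn sum_of_squares_v1(xs: &[i64]) -> i64 {
    let mut total = 0;
    for x in xs {
        total += x * x;
    }
    total
}

// Refactored: same signature, same behavior, clearer internals.
fn sum_of_squares_v2(xs: &[i64]) -> i64 {
    xs.iter().map(|x| x * x).sum()
}

// Reworked: the contract changes (overflow is now reported via
// Option), so every call site has to change too.
fn sum_of_squares_v3(xs: &[i64]) -> Option<i64> {
    xs.iter().try_fold(0i64, |acc, x| {
        x.checked_mul(*x).and_then(|sq| acc.checked_add(sq))
    })
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(sum_of_squares_v1(&xs), 14);
    assert_eq!(sum_of_squares_v2(&xs), 14); // drop-in replacement
    assert_eq!(sum_of_squares_v3(&xs), Some(14)); // callers must unwrap
    println!("ok");
}
```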