It seems like the PFAS rules were set in prior administrations [1]. In fact, even in the article you've linked above, the text states:
> retaining its maximum contaminant levels for PFOA and PFOS but pulling back on its use of a hazard index and regulatory determinations for additional PFAS
Key word being "retaining," indicating the maximum contaminant levels were already in place prior to the change mentioned here. Putting aside allegations of "political bias," can you point to a source which clearly indicates the PFAS limits were put in place by the current administration? Would like to learn if I'm wrong.

[1]: https://www.epa.gov/newsreleases/biden-harris-administration...
Trump's first term. February of 2019. Andrew Wheeler's EPA.
You'll also notice that the document lays out planned action dates bleeding generously into Biden's term, actions for which Biden later took credit in the document you shared. This is shameful, and sadly normal presidential behavior: taking credit for a predecessor's wins.
If you'd truly like to learn if you're wrong, it's recommended to seek information that disproves your hypothesis rather than proves it. Both this and the previous article I shared were very easy to find and within the first 2 or 3 results.
Trump's first term and his second term are entirely different beasts. His first term, although widely regarded as bad, still had mostly competent people across the board running things. This term is absolute lunacy, with TV show hosts cosplaying as government officials.
> If you'd truly like to learn if you're wrong, it's recommended to seek information that disproves your hypothesis rather than proves it. Both this and the previous article I shared were very easy to find and within the first 2 or 3 results.
Firstly, this is a completely unnecessary comment. My searches were specifically about finding when the specific PFAS limits were enacted. I'll acknowledge not spending that much time on it, as you claimed to already have a source and I was curious to see what it was.
But to the point: this document does not set limits on PFAS in drinking water. It's an action plan for measuring and creating limits, but it does not itself enforce anything. In fact, every subsequent search I've done has shown that the 2024 Final Rule was the first point at which any limits were put into action.
Quoting directly, the document states that one of the steps being taken is:
> Initiating steps to evaluate the need for a maximum contaminant level (MCL) for perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS);
In other words, it outlines a plan for the research used to 1) determine whether an MCL should be set, and 2) determine what, if anything, it should be set to. Notably, it does not itself set that limit or come to a conclusion about what it should be.
Further, this research appears to be a continuation of research released in 2016 [1], which was the first time a guideline (but not a mandate) was set. This would, of course, be prior to Trump's first administration. The document itself suggests as much, describing itself as part of a series of actions beginning in 2015/2016 and calling out specific research from the 2016 article linked below.
So the facts seem to show that:
1) The first guideline was set in 2016. It was not a law at this time.
2) Research continued to identify next steps for setting a standard, which were codified and shared in the 2019 article you linked.
3) The 2024 Final Rule put an MCL into action for PFAS.
Take from that chain of events what you will, but the initial accusations of "political bias" seem unfounded here.
You've lost sight of the comment I responded to, in which the poster asserts, in so many words, that there can be no explanation for easing any restrictions other than for profit and authoritarianism, etc. Right? Clearly there is an explanation if you search for a few minutes. So I stand by my allegations of bias against that comment specifically.
Here, you've read and revised the approach to the issue. This last comment does not warrant any allegation of bias, and I make none about it.
The bigger picture is that both parties are interested in clean drinking water. I guess that's obvious to me, and I'm shocked that's not obvious to everyone. Look how many people on this thread actually believe that the Trump administration is literally trying to poison them. That's not crazy to you? It is to me.
Fair, but I'd like to clarify that in my comment I had asked specifically for any sources indicating that the PFAS limits were put in place by the prior administration, since you had made the claim:
> Trump's EPA created these PFAS rules
Your response was what I perceived to be a snarky comment that if only I had bothered to look, I'd have found the evidence, followed by a link that didn't say what was suggested.
> Look how many people on this thread actually believe that the Trump administration is literally trying to poison them. That's not crazy to you?
The claims made all over the place are insane to me. Yes, I doubt the Trump administration is actually trying to kill me. The world is not as polarized and extreme as people on the internet make it sound. Most people are far more docile in the real world, but the collective hive of the internet exacerbates tension. I have no clue which side of the political aisle you're on, but my guess is we probably agree about more things than we disagree about, if we could detach the bullshit labels from it all.
But FWIW, the allegation that I wasn't bothering to learn or see if I'm wrong just raised tension further. I was genuinely trying to determine whether the claim was true; the evidence I had found suggested it wasn't, and it seems it in fact wasn't quite true, though perhaps that wasn't the point you were trying to make anyway.
All fine. My hope is that we can all turn down the tension and hostility a level or two. Might be the only hope we have.
The actual rules went into effect in the very early part of Biden's term. They have to go back and forth with Congress, and since it involves the military, drafts would be secret or classified.
You can see, per Congress's orders, Trump's EPA would have authored the rules before Biden's term. They were ordered by the PFAS Action Act of 2019.
So while the rules weren't officially enforced during Trump's term, we can deduce that they were drafted and in motion under Andrew Wheeler's EPA, the same one Biden left in office into 2021!
That is to say, Trump's EPA guy wrote them and saw them into enforcement.
That all really is available from the citation I gave. The date is there. You can Google Wheeler's term. You can see when the rules went into effect. You for sure know rules take time to write.
You really didn't think to ask these questions? Honestly, I don't believe you.
You weren't trying to find that you were wrong, like I recommended. You were trying to show that the citation, taken by itself, could be dead-ended to disprove my point, so long as you simply ceased your search there.
Think about it for a few minutes and really be honest with yourself. I'm right, not just about you. That's just normal human behavior.
That's okay, but I don't think it's snarky to assume you're also human.
This narrative isn't helpful. Even in this specific case, it's extremely unlikely anyone would have been able to get close enough to him with a knife to kill him without someone noticing.
Guns allow you to kill 1) multiple people, 2) from a distance, and 3) with nobody aware of the imminent threat.
Of course other weapons can also be used to harm people. Of course no solution is perfect. But it's absolutely incorrect to say "the problem isn't so much the tools." The tools undeniably and irrefutably play a role in every study that has ever been conducted on this topic.
See here for the impact of Australia's gun buyback program, which saw zero mass shootings in the decade after the removal, compared with 13 mass shootings in the 18 years prior to it, as well as an accelerated decline in firearm deaths and suicides:
https://injuryprevention.bmj.com/content/12/6/365
> it's extremely unlikely anyone would have been able to get close enough to him with a knife to kill him without someone noticing.
What do you mean? If you go to any public place in the world, you can get very close to hundreds of people in a very short time. Knife assassinations happen all the time.
You could have quoted the beginning of the sentence, where the point was about this specific case, in which a gun clearly allowed an assassination that would have been challenging to pull off with a knife.
That is not the same as saying killing someone with a knife is impossible. It's a way of saying that guns allow you to kill people in ways, and at distances, that knives do not.
While true, Australia reclaimed ~650k guns by 1997 and then another ~70k handguns in 2003. By comparison, the US is estimated to have around 400M guns, with law enforcement alone holding 5M (and as the “Fast and Furious” scandal showed, law enforcement guns often end up in the hands of criminals as well).
I don’t know what the answer is for reclaiming the guns, but I think it’ll be logistically hard to implement in the USA even if there weren’t bad-faith attempts to thwart regulation (and arguing that knife violence still exists and that guns aren’t the problem is definitely a bad-faith/uneducated argument).
Yeah, I'm not suggesting the same process could apply in the US; I'm just trying to aggressively refute the point that guns are not the problem (or, at least, a major component of it). We need to be creative about solutions, but people have to want to find a solution to be creative about them, and right now many do not.
On that we’re 100% agreed. The science is exceedingly clear that guns are the reason for so much gun violence and mass shootings (which makes sense since without guns you couldn’t have either of those by definition).
I'll take a shot at the rationale for this perspective, which is similar to a peer comment's:
The tech is undoubtedly impressive, and I'm sure it has a ton of headroom to grow (I have no direct knowledge of this, but I'd take you at your word, because I'm sure it's true).
But my perception, at least, of why this is presently a "bubble" is rooted in the businesses being created using the technology. Tons of money is spent powering AI agents to conduct tasks that would be 99% less expensive via a simple API call, or where the actual unstructured work sits 2 or 3 levels higher in the value chain; given enough time, new vertically integrated companies will use AI to solve the problem at the root and eliminate the need for entire categories of companies at the level below.
In other words: the root of the bubble (to me) is not that the value will never be realized, but that many (if not most) of this crop of companies, given how long it takes for workflows and technology to take hold in organizations, will almost certainly not survive long enough to be the ones to realize it.
This also seems to be why folks draw comparisons to the dot-com bubble, because it was quite similar. The tech was undoubtedly world-changing. But the world needed time to adapt, and most of those companies no longer exist, even though many of the same problems were solved a decade later by new startups that achieved incredible scale.
To be fair, if those people are right, then NVIDIA's stock price (and revenue) is part of that bubble, so it's not really evidence that this isn't a bubble.
Time will tell if they're right or not. But it wouldn't be the first time it has happened.
Both things can be true. The tech can be transformative, and the current valuations and burn rate can be wholly unsustainable in the short term. This is exactly what happened in the dotcom era.
Whether or not this one is the same is impossible to know until after it happens. But there are credible arguments to both sides.
Even if we accept as a premise that these models are doing "smart retrieval" and not "reasoning" (neither of which are being defined here, nor do I think we can tell from this tweet even if they were), it doesn't really change the impact.
There are many industries for which the vast majority of work done is closer to what I think you mean by "smart retrieval" than what I think you mean by "reasoning." Adult primary care and pediatrics, finance, law, veterinary medicine, software engineering, etc. At least half, if not upwards of 80% of the work in each of these fields is effectively pattern matching to a known set of protocols. They absolutely deal in novel problems as well, but it's not the majority of their work.
Philosophically it might be interesting to ask what "reasoning" means, and how we can assess if the LLMs are doing it. But, practically, the impacts to society will be felt even if all they are doing is retrieval.
> There are many industries for which the vast majority of work done is closer to what I think you mean by "smart retrieval" than what I think you mean by "reasoning." Adult primary care and pediatrics, finance, law, veterinary medicine, software engineering, etc. At least half, if not upwards of 80% of the work in each of these fields is effectively pattern matching to a known set of protocols. They absolutely deal in novel problems as well, but it's not the majority of their work.
I wholeheartedly agree with that.
I'm in fact pretty bullish on LLMs as tools with near-infinite industrial use cases, but I really dislike the “AGI soon” narrative (which sets expectations way too high).
IMHO the biggest issue with LLMs isn't that they aren't good enough at solving math problems, but that there's no easy way to add information to a model after its training, which is a significant problem for a “smart information retrieval” system. RAG is used as a hack around this issue, but its performance can vary a ton across tasks. LoRAs are another option, but they require significant work to build a dataset, and you can only cross your fingers that the model keeps its abilities.
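To make that concrete, here's a minimal sketch of the RAG pattern in Python. Everything here is illustrative: `embed` is a hypothetical stand-in for a real embedding model, and the documents are made up. The point is just that new facts reach the model through the prompt, not through retraining:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model
    # (e.g. a sentence-transformer). Random but stable per input
    # within a process, which is enough for this sketch.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

# Knowledge added *after* training lives outside the model.
documents = [
    "Our Q3 pricing changed on 2024-09-01.",
    "The on-call rotation now includes the data team.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query.
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:k]]

# Retrieved text is prepended to the prompt; the weights never change.
context = "\n".join(retrieve("When did pricing change?"))
prompt = f"Context:\n{context}\n\nQuestion: When did pricing change?"
```

The fragility I mean lives entirely in the retrieval step: if the similarity ranking misses the relevant document, the model never sees the fact at all, which is why performance varies so much across tasks.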
It is absolutely possible for the unit economics of a product to be profitable and for the parent company to be losing money. In fact, it's extremely common when the company is bullish on their own future and thus they invest heavily in marketing and R&D to continue their growth. This is what I understood GP to mean.
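A toy illustration of that distinction, with entirely made-up numbers:

```python
# Made-up numbers, purely to show the shape of the argument.
revenue_per_request = 0.010    # what a customer pays per inference call
cost_per_request = 0.006       # compute cost to serve that call
requests = 1_000_000_000       # volume served this year

gross_profit = (revenue_per_request - cost_per_request) * requests
print(gross_profit)            # 4,000,000: each call served is profitable

fixed_spend = 7_000_000        # R&D + marketing bet on future growth
print(gross_profit - fixed_spend)  # -3,000,000: the company still loses money
```

Positive unit economics and a negative bottom line can coexist for as long as someone is willing to fund the growth spending.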
Whether it's true for any of the mainstream LLM companies or not is anyone's guess, since their financials are either private or don't separate out LLM inference as a line item.
This is an interesting take. By this perspective, it's essentially impossible to ever gauge the efficacy of AI in doing anything, because the people who know how to measure the quality of that thing are also the people who will be displaced by showing the AI can do that thing. In fact, you could probably argue that every study ever is worthless, because studies are generally performed by people who know the subject matter, and it's basically impossible to be unbiased on a topic you're also highly knowledgeable about.
In reality, what matters is the methodology of the study. If the study's methodology is sound, and its results can be reproduced by others, then it is generally considered to be a good study. That's the whole reason we publish methodologies and results: so others can critique and verify. If you think this study is bad, explain why. The whole document is there for you to review.
I think you are correct, and incorrect. However: set and setting. Another of Lanier's observations, which he relates to LLMs, is the Boeing "smart" stall preventer which crashed two 737 MAXes (correction: I originally wrote Dreamliners).
Who can argue with a stall preventer, right? What one can argue with, and what has been exposed, is the observation that information about the stall preventer's operation, training on it, and even the ability to effectively control it depended on how much the airline was willing to pay for this necessary feature.
So in reality, what matters is studying the methodology of set and setting, not how the pieces of the crashed airliner ended up where they did.
I'm not exactly sure how this relates to my comment above. An analysis of an airline crash and a study are not the same thing.
As it relates to study design, controlling for set and setting are part of the methodology. For example, most drug studies are double-blinded so that neither patients nor clinicians are aware of whether the patient is getting the drug or not, to reduce or eliminate any placebo effect (i.e. to control for the "set"/mental state of those involved in the study).
There are certainly some cases in which it's effectively impossible to control for these factors (e.g. psychedelics). That's not what's really being discussed here, though.
An airline crash is an n of 1 incident, and not the same as a designed study.
> it's essentially impossible to ever gauge the efficacy of AI in doing anything...
... compared to humans? Yes. This is a philosophical conundrum which you tie yourself up in if you choose to postulate the artificial intelligence as equivalent to, rather than a simulacrum of, human intelligence. We fly (planes): are we "smarter" than birds? We breathe underwater: are we "smarter" than fish? And so on.
How do you discern that the "other" has an internal representation and dialogue? Oh. Because a human programmed it to be so. But how do you know that another human has internal representation and dialogue? I do (I have conscious control over the verbal dialogue but that's another matter), so I choose to believe that others (humans) do (not the verbal part so much unfortunately). I could extend that to machines, but why? I need a better reason than "because". I'd rather extend the courtesy to a bird or a fish first.
This is an epistemological / religious question: a matter of faith. There are many things which we can't really know / rigorously define against objective criteria.
This, similar to your other comment, is unrelated to my comment.
This is about determining whether AI can be an equivalent or better (defined as: achieving equal or better clinical outcomes) therapist than a human. That is a question that can be studied and answered.
Whether artificial intelligence accurately models human intelligence, or whether an airplane is "smarter" than a bird, are entirely separate questions that can perhaps serve to explain _why/how_ the AI can (or can't) achieve better results than the thing we're comparing against, but not whether it does or does not. Those questions are perhaps unanswerable based on today's knowledge. But they're not prerequisites.
A related, similar thing happened when I sent my dog's recent bloodwork to an LLM, including dates, tests, and values. The model suggested that a progression in her kidney values (all still within normal range) was likely evidence of early-stage chronic kidney disease. Naturally this caused some concern for my wife.
But I work in healthcare and have enough knowledge of health to know that CKD almost certainly could not advance fast enough to cause the kidney value changes in labs taken only 6 weeks apart. I asked the LLM if that was the best explanation for these values given they were only 6 weeks apart, and it adjusted its answer to say CKD was likely not the explanation, as progression at this stage would typically happen over 6+ months to a year, and that more likely explanations were nephrotoxins (recent NSAID use), temporary dehydration, or recent infection.
We then spoke to our vet who confirmed that CKD would be unlikely to explain a shift in values like this between two tests that were just 6 weeks apart.
That would almost certainly throw off someone with less knowledge of this, however. If the tests had been 4-6 months apart, CKD could have explained the change. It's not an implausible explanation, but the model skipped over a critical piece of information (the time between tests) before originally coming to that answer.
The internet, and now LLMs, have always been bad at diagnosing medical problems. I think it comes from the data source. For instance, few articles would be linked to / popular if a given set of symptoms were just associated with not getting enough sleep. No, the articles that stand out are the ones where the symptoms are associated with some rare / horrible condition. This is our LLM training data, and it's often missing the entire middle of the bell curve.
For what it's worth, this statement is actually not entirely correct anymore. Top-end models today are on par with the diagnostic capabilities of physicians on average (across many specialties), and in some cases can outperform them when RAG'd with vetted clinical guidelines (like NIH data, UpToDate, etc.).
However, they do have particular types of failure modes that they're more prone to, and this is one of them. So they're imperfect.
This is ChatGPT's self-assessment. Perhaps you mean a specialized agent with RAG + evals, however.
ChatGPT is not reliable for medical diagnosis.
While it can summarize symptoms, explain conditions, or clarify test results using public medical knowledge, it:
• Is not a doctor and lacks clinical judgment
• May miss serious red flags or hallucinate diagnoses
• Doesn’t have access to your medical history, labs, or physical exams
• Can’t ask follow-up questions like a real doctor would
Sorry, I should have clarified, but no, this is not ChatGPT's self-assessment.
I am suggesting that today's best in class models (Gemini 2.5 Pro and o3, for example), when given the same context that a physician has access to (labs, prior notes, medication history, diagnosis history, etc), and given an appropriate eval loop, can achieve similar diagnostic accuracy.
I am not suggesting that patients turn to ChatGPT for medical diagnosis, or that these tools are made available to patients to self diagnose, or that physicians can or should be replaced by an LLM.
But there absolutely is a role for an LLM to play in diagnostic workflows to support physicians and care teams.
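As a sketch of the kind of support role I mean (everything here is hypothetical: the patient data is invented, and `call_llm` is a stand-in for any chat-completion API, returning a canned string so the example runs on its own):

```python
# Hypothetical physician-support workflow, not a real product.
patient_context = {
    "labs": ["creatinine 1.4 mg/dL (1.0 six weeks ago)"],
    "medications": ["ibuprofen, started 3 weeks ago"],
    "history": ["no prior renal diagnoses"],
}

prompt = (
    "You are assisting a clinician. Given the context below, list a "
    "differential diagnosis and flag any time-course inconsistencies.\n\n"
    + "\n".join(f"{k}: {'; '.join(v)}" for k, v in patient_context.items())
)

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; canned output keeps
    # the sketch self-contained.
    return ("1. NSAID-related kidney injury (plausible given timing)\n"
            "2. Dehydration (possible)\n"
            "3. Early CKD (less likely: 6-week interval is too short)")

draft = call_llm(prompt)
# In any real deployment, drafts like this would be scored in an eval
# loop against clinician-adjudicated cases before reaching a care team.
print(draft)
```

The physician stays in the loop; the model's job is to surface and sanity-check the differential, not to hand a diagnosis to the patient.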