Imagine you were to pause and take an exact carbon copy of the physical state of the universe in the current moment. When you press play in both universes, they will play out in exactly the same way until the end of time, and you would never be able to tell the two apart. That is determinism.
If the universe were not deterministic (because of non-deterministic physics, souls existing, etc.), you would potentially see differences between the two as they played out separately.
Another way to think about determinism vs. non-determinism: if something is deterministic, then given perfect information about it, you can predict its future states exactly. On the flip side, if something is non-deterministic, no matter how much information you gather about it, you will never be able to predict its future states exactly, only probabilistically.
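As a toy illustration of that "perfect information" framing: a purely deterministic rule, replayed from an identical initial state, produces identical futures, so its trajectory is exactly predictable. This is just a sketch using the logistic map as the stand-in "universe"; the names are invented for illustration.

```python
# Toy sketch of determinism: the same rule applied to the same initial
# state yields the same trajectory every time, with no way to tell the
# "original" and the "carbon copy" apart.

def evolve(x: float, steps: int) -> list[float]:
    """Iterate the logistic map, a simple deterministic rule."""
    trajectory = []
    for _ in range(steps):
        x = 3.9 * x * (1 - x)  # next state depends only on the current state
        trajectory.append(x)
    return trajectory

universe_a = evolve(0.5, 100)
universe_b = evolve(0.5, 100)  # the "carbon copy", pressed play at the same moment
assert universe_a == universe_b  # indistinguishable, forever
```

A non-deterministic system would be the same loop with a random term injected at each step: the two trajectories would diverge, and no amount of information about the initial state would let you predict them exactly.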
Einstein is not so subtly telling us that "God doesn't play dice": that there are no random physical properties of the universe, and that magic (like souls) doesn't exist, in his opinion.
Free will means we have a choice in what we do next, but if we examine your carbon copy in the other universe, we will see they did all of the same things you did. Furthermore, if we were granted perfect information about the universe and had a good enough GPU, we could, from the physical principles of the universe, predict the future state of everything until the end of time, including all of your future choices; hence, no free will.
My thoughts are that people are mixing up the human concept of free will (freedom to make decisions based on how you feel and what you know) with determinism, the idea that the first state of all matter in the universe determined all future states, and that these are inevitable. The fallacy is that free will in the deterministic sense only matters to human decision making if we have perfect information about the current state of the universe, which we don't; until we do, we have to keep guessing what happens next.
SQLAlchemy stands out as a library with probably one of the most complete and pragmatic APIs for database access across all languages.
It is no small feat to create compatibility with modern Python features like type hints and async in a library that has its roots in Python 2; it has absolutely exceeded expectations in that regard.
SQLAlchemy in general is great, but the dataclass integration feels non-Pythonic to me, perhaps due to catering first to the typing crowd instead of the ergonomic one.
I felt that too, but over time I decided that it trades theoretical Pythonicity for the practical benefit of being flexible enough to work properly with SQL.
One other reason for its popularity and success is how engaged the original developer is with the overall community.
What it needs is a proper migration diff and generation tool with strong defaults. Alembic is meh and the DX is poor. Prisma's and Django's migration/diff tools are the gold standard.
Prisma defeats the entire point of SQL with its weird, almost GraphQL-like thing. Apparently developers can't be trusted with SQL because you can do big dumb things in it.
I seriously don't understand how searching for a file in Windows takes so long and yields such crappy results. What abomination must there be under the hood for it to be this consistently bad for all these years? Microsoft devs, chime in if you have any insight.
Somehow I don't even think it is enshittification, because their search has been bad forever, on all previous versions of Windows Server even.
OK, OK, maybe it would slow things down to index shared drives. But how do you fuck up simple search on the LOCAL computer too? I have to use PowerShell to do searching; "gci -recurse" is the built-in alias for Get-ChildItem. And it wasn't too many more lines of code to start searching the contents of Word and Excel files (although this does take a lot longer, at least it works).
I do - it just makes sense if you are riding public transport or touching stuff that isn’t as clean as you’d think. Like your phone, wallet, keys, backpack, shoe laces, pant pockets, etc. Not obsessively, but if I haven’t washed my hands in a few hours or feel they are dirty I wash them before eating.
Thanks for asking! Yes, but we're actively addressing them.
We do a few things under the hood to make hallucinations significantly less likely. First, we make sure every single statement made by the LLM has a fact ID associated with it... Then we've fine-tuned "verification" LLMs that review all statements to make sure that assertions being made are backed up by facts, and that the facts are actually aligned with the assertion.
It's still possible for the LLM to hallucinate in this process, but the likelihood is much lower.
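A toy sketch of the statement/fact-ID pattern described above, with the "verification LLM" replaced by a trivial keyword check; every name here (`Statement`, `FACTS`, `is_supported`) is invented for illustration and is not the actual system's API.

```python
# Hypothetical sketch: every generated statement must carry a fact ID,
# and a verifier rejects statements whose cited fact is missing or
# unrelated. The real verifier described above is a fine-tuned LLM;
# this keyword overlap check is only a deliberately crude stand-in.
from dataclasses import dataclass

# A toy "fact store": fact_id -> source text the statement must be grounded in.
FACTS = {
    "f1": "The order shipped on 2024-03-02.",
    "f2": "The customer is on the premium plan.",
}

@dataclass
class Statement:
    text: str
    fact_id: str  # every generated statement must cite a fact

def is_supported(stmt: Statement) -> bool:
    """Reject statements citing a nonexistent fact; otherwise require
    some lexical overlap between the statement and the cited fact."""
    fact = FACTS.get(stmt.fact_id)
    if fact is None:
        return False  # citing a fact that doesn't exist -> hallucination
    return any(word in fact.lower() for word in stmt.text.lower().split())

good = Statement("The order shipped on 2024-03-02.", "f1")
bad = Statement("The order was cancelled.", "f9")  # bogus fact ID
```

The structural point survives the crudeness of the check: forcing each assertion to name its evidence turns "did the model hallucinate?" into the narrower, more checkable question "does the cited fact actually support this sentence?".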
Here's a simple rule, based on the fact that no one has shown that an LLM or a compound LLM system can produce output that doesn't need to be verified for correctness by a human, across any input:
The rate at which LLMs/compound LLM systems can produce output > the rate at which humans can verify that output
I think it follows that we should not use LLMs for anything critical.
The gung-ho adoption and ham-fisting of LLMs into critical processes, like AWS's migration to Java 17 or root cause analysis, is plainly premature, naive, and dangerous.
This is a highly relevant and accurate point. Let me explain how this happens in real life, as opposed to the breathless C-suite hucksterism:
We have a project working on a very large code base in .NET Web Forms (and other old tech) that needs to be updated to more modern tech so it can run on .NET 8 and Linux to save hosting costs. I realize this is more complicated than just converting to a later version of Java, but it's roughly the same idea. The original estimate was 5 devs for 5 years. C-suite types decide it's time to use LLMs to help get this done. We use Copilot and later others, of which Claude turns out to be the most useful. Senior devs create processes that offshore teams start using to convert code. The target tech varies based on updated requirements, so some went to Razor Pages, some to JS with a .NET API, some to other stuff. It looks like pretty good modernization at the start.
Then the Senior devs start trying to vet the changes. This turns out to be a monumental undertaking. Literally swamped code reviewing output from the offshore teams. Many, many subtle bugs were introduced. It was noted that the bugs were from the LLMs, not the offshore team.
A very real fatigue sets in among senior devs when all they're doing is vetting machine-generated code. I can't tell you how mind-numbing this becomes. You start to use the LLMs to help review, which seems good but really compounds the problem.
Due to the time this is taking, some parts of the code start to be vetted by just the offshore team, and only the "important things" get reviewed by Senior devs.
This works fine for exactly 5 weeks after the first live deploy. At that point the live system experiences a major meltdown and causes an outage affecting a large number of customers. All hands on deck, trying to find the problem. Days go by, the system limps along on restarts and patches, until the actual primary culprit is found: a == that for some reason had been turned into a != in a particularly gnarly set of boolean logic. There were other problems as well, but that particular one wreaked the most havoc.
Now they're back to formal, very careful code reviews, and I moved on to a different project under threat of leaving. If this is the future of programming, it's going to be a royal slog.
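A minimal, entirely hypothetical reconstruction of that class of bug (the function names and retry logic here are invented for illustration, not the actual code base): an automated conversion flips a single comparison inside compound boolean logic, and most spot checks still pass.

```python
# Hypothetical illustration of the bug class described above: a single
# == flipped to != during automated conversion. All names are invented.

def should_retry_original(status: int, attempts: int, idempotent: bool) -> bool:
    # Intended rule: retry idempotent requests on a 503, up to 3 attempts.
    return idempotent and attempts < 3 and status == 503

def should_retry_converted(status: int, attempts: int, idempotent: bool) -> bool:
    # The converted version: identical except for the flipped comparison.
    return idempotent and attempts < 3 and status != 503

# Non-idempotent requests (and exhausted retries) behave identically in
# both versions, so casual spot checks can easily miss the divergence.
```

Both functions agree whenever `idempotent` is false or `attempts >= 3`, which is exactly why a reviewer skimming converted code can miss the flip until production traffic hits the divergent path.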
> Here’s a simple rule, based on the fact no one has shown that an llm or a compound llm system can produce an output that doesn’t need to be verified for correctness by a human across any input:
I'm still not sure why some of us are so convinced there isn't an answer to properly verifying LLM output. In so many circumstances, output that gets you 90-95% of the way there is very easily pushed to 100% by topping it off with a deterministic system.
Do I depend on an LLM to perform 8-digit multiplication? Absolutely not, because, like you say, I can't verify the correctness that would drive the statistics of whatever answer it spits out. But why can't I ask an LLM to write the Python code to perform the same calculation and read me its output?
> I think it follows that we should not use llms for anything critical.
While we are at it I think we should also institute an IQ threshold for employees to contribute to or operate around critical systems. If we can’t be sure to an absolute degree that they will not make a mistake, then there is no purpose to using them. All of their work will simply need to be double checked and verified anyway.
There isn't one answer to how to do it. If you have an answer to validation for your specific use case, go for it. This is not trivial, because most flashy things people want to use LLMs for, like code generation and automated RCAs, are hard or impossible to verify without running into the I Need A More Intelligent Model problem.
2. I believe this falsely equates what LLMs do with human intelligence. There is a skill threshold for interacting with critical systems; for humans it comes down to "will they screw this up?" And the human can do it because humans are generally intelligent. The human can make good decisions to predict and handle potential failure modes because of this.
Also, let’s remember the most important thing about replacing humans with AI - a human is accountable for what they do.
That is, ignoring the myriad other multidimensional nuances of human/social interaction that allow you to trust a person (and which are non-existent when you interact with an AI).
Why not automate verification itself then? While not possible now, and I would probably never advocate for using LLMs in critical settings, it might be possible to build field-specific verification systems for LLMs with robustness guarantees as well.
If the verification systems for LLMs are built out of LLMs, you haven't addressed the problem at all, just hand-waved a homunculus that itself requires verification.
If the verification systems for LLMs are not built out of LLMs and they're somehow more robust than LLMs at human-language problem solving and analysis, then you should be using the technology the verification system uses instead of LLMs in the first place!
> If the verification systems for LLMs are not built out of LLMs and they're somehow more robust than LLMs at human-language problem solving and analysis, then you should be using the technology the verification system uses instead of LLMs in the first place!
The issue is not the verification system, but putting quantifiable bounds on your answer set. If I ask an LLM to multiply large numbers together, I can very easily verify the generated answer by topping it off with a deterministic function.
I.e., rather than hoping that an LLM can accurately multiply two 10-digit numbers, I have a much easier (and verified) solution: ask it to perform the calculation using Python and read me the output.
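A minimal sketch of that "top it off with a deterministic function" step. `llm_claimed` is a stand-in for whatever free-text answer a model gives back; the verifier recomputes the product exactly, so the model's statistics never decide correctness.

```python
# Sketch of deterministic verification of an LLM's arithmetic claim.
# llm_claimed is a placeholder for a model's reply, not a real API call.

def verify_product(a: int, b: int, claimed: int) -> bool:
    """Deterministic check: recompute the product exactly and compare."""
    return a * b == claimed

a, b = 8_675_309, 42_424_242
llm_claimed = a * b          # pretend the model happened to get it right
assert verify_product(a, b, llm_claimed)
assert not verify_product(a, b, llm_claimed + 1)  # an off-by-one is caught
```

The same shape generalizes: whenever the answer set has a cheap exact check (arithmetic, parsing, type-checking, test suites), the LLM only has to propose, and the deterministic layer disposes.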
Spitballing: if you had a digital model of a commercial airplane, you could have an LLM write all of the component code for the flight system, then iteratively test the digital model under all possible real-world circumstances.
I think automating verification in general might require general intelligence, though not an expert one.
The same is true of computers; in fact, it has been mathematically proven (Rice's theorem) that the general question of whether a computer program is correct is undecidable.
But that hasn't stopped the last 40 years from happening, because computers made fewer mistakes than the next best alternative. The same needs to be true of LLMs.
That is not true at all. You do not need to decide correctness in general. All you need to do is prove a property, and this can be done in many ways.
For example, many things can be proven about the following program without having to solve any general problem at all:
echo "hello world"
Similarly for quicksort, merge sort, and all sorts of things. The degree of formality doesn't have to go all the way to formal methods, which are only a very small part of the whole field.
What you're saying is equivalent to throwing out all of mathematics because of the incompleteness theorem and starting to pray to fried egg jellyfish on a full moon.
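For the sorting example, the property is cheap to state and check without any general-purpose spec: the output must be ordered and a permutation of the input. A minimal sketch (the function name is invented for illustration):

```python
# Property check for any sorting routine: no need to verify the sorter's
# internals, only that its output is ordered and preserves the input's
# elements (as a multiset). This is far cheaper than proving the program.
from collections import Counter

def satisfies_sort_contract(inp: list, out: list) -> bool:
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_multiset = Counter(inp) == Counter(out)
    return ordered and same_multiset

assert satisfies_sort_contract([3, 1, 2], [1, 2, 3])
assert not satisfies_sort_contract([3, 1, 2], [1, 2])  # dropped an element
```

This is the "prove a property" move in miniature: the checker is trivially auditable even when the thing being checked (a human's code, an LLM's code, a library) is not.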
We've really screwed the pooch on this one. How many cancers, chronic diseases, and birth defects are a result of our mass pollution of this once-pristine oasis of life, the only one we have?
If microplastics were directly causing illnesses and birth defects, we would have found out already. Past cases of mass illness caused by pollutants (leaded gasoline, asbestos, Minamata disease, thalidomide, chimney sweeps' carcinoma, etc.) were uncovered quickly and usually addressed not long after. The fact we still can't pinpoint exactly how microplastics are harming us, beyond that they are in places where they are not supposed to be, is telling. The one reassuring thing about this whole ordeal is that plastics are largely inert; that's why they take forever to degrade.
I didn't go and look up the others, but this argument by similarity, at least when applied to asbestos, says the exact opposite of the claim you're trying to make. It's generally accepted that industry/government was aware of the issues with asbestos in the '30s (and _started_ doing things about it then), but it wasn't until the '70s/'80s (depending on the country) that its use was mostly stopped (and in places like Australia it wasn't outright banned until 2003).
And? That span is within an individual's lifetime, which is not very long in the context of human history. As of now there's zero sign any entity with regulatory power is doing anything about microplastics.
Also, why are you trying to deliver a point without looking up most of the examples I've listed? Do you expect that to be a convincing argument?
Each of us only gets one life. If most of that time is spent being unnecessarily exposed to pollutants with adverse health effects, then we have good reason to be outraged.
Consider also that some pollutants accumulate in the food chain, impacting future generations, for an indefinite time. Such as mercury in seafood.
Good news is tuna is not the only form of seafood you can eat! Perhaps you can try channeling that outrage into real tangible activism, any minute now.
Do you think the problem will go away over time? Are you OK with having fewer things that you can eat without poisoning yourself? Do you realize we only have so many reservoirs from which calories can come?
Why do you accuse me of being against addressing these issues? I am merely pointing out these are not existential crises because there is currently no evidence supporting such claims. It may change in the future, but until then I'm not jumping on this ship.
Ultra-processed foods are already poison, and most of us, probably including you, are consuming them regularly. Are you OK with that? And what is this talk about caloric reservoirs? Humans 8,000 years ago had far fewer options, and they still survived long enough to pass cultural lineages down to us. Most of your "caloric reservoirs" did not exist before the 20th century because the science and industry that created them did not exist.
Compared with the other examples you gave, I think one of the differences with micro- and nanoplastics and their growing bioaccumulation is that if/when we discover that some concentration causes noticeable issues, it will be very hard to reverse, and it will be globally abundant (i.e. throughout the entire food chain). We'll be stuck with the problem for a very, very long time.
It's not like we'll be able to just outlaw it and be done with the issue after a few years. So for this specific pollutant, it feels right that we should be cautious and look for solutions as quickly as we can.
Humans 8000 years ago got their food from an environment that wasn't contaminated with ubiquitous, unnaturally occurring, forever chemicals.
Ultra-processed foods are a problem and likely contribute to the plastic problem. I do what I can to reduce my reliance on both. Yet the solution won't be the few educated among us stopping ourselves. It must be regulated collectively, or we'll remain in a prisoner's dilemma as the pollutants accumulate.
> And? That span is within an individual's lifetime, which is not very long in the context of human history. As of now there's zero sign any entity with regulatory power is doing anything about microplastics.
Primarily, I _am_ interested in health outcomes within my and my children's lifespans, so that's the sort of time span I'm primarily concerned about. If the comparison to asbestos holds true, then we still have a _long_ time (long enough that any potential deleterious effects will be felt by all currently living and soon-to-be-living members of my family) before any sort of regulatory action is taken, regardless of the health impacts.
> Also, why are you trying to deliver a point without looking up most of the examples I've listed? Do you expect that to be a convincing argument?
Because I'm _not_ your fact-checker. Your other examples may well follow a much quicker time frame between discovery and strong regulatory action; I don't really care one way or the other, since at least one example shows a course of history that would play out poorly for those of us alive _now_ and exposed to increasing levels of environmental plastics.
This comment won't age well if we keep polluting the environment the way we are. Plastics will continue to grow as a problem if we keep using them at the scale we do.
The world has regressed socially in the last decade: declining standard of living, lowered testosterone levels, lower sperm counts, lower fertility rates, and perhaps lower IQ.
I suspected mass induced psychosis around Covid times. Now I suspect all the plastic people are consuming.
> The extent to which microplastics cause harm or toxicity is unclear, although recent studies associated MNP [micro- and nanoplastic] presence in carotid atheromas with increased inflammation and risk of future adverse cardiovascular events²⁻³. In controlled exposure studies, MNPs clearly enhance or drive toxic outcomes⁴⁻⁶. The mantra of the field of toxicology – “dose makes the poison” (Paracelsus) – renders such discoveries as easily anticipated; what is not clearly understood is the internal dose in humans.
I'll take a stab at it, but I'm not a doctor or a biologist.
Cyanide will kill you. Almonds have trace amounts of cyanide. You can eat almonds until you get sick, but that tiny amount of cyanide won't kill you. The dose makes the poison. On the other hand, with that fancy carfentanil, if a flake touches your skin, you'll OD and die. It's super, super toxic.
One flake of microplastics in your body isn't going to do anything.
Now,
> although recent studies associated MNP [micro- and nanoplastic] presence in carotid atheromas with increased inflammation and risk of future adverse cardiovascular events
It's not like every muscle cell fires on every heartbeat. They try, but some are old or dead. There are a lot of them, all working together, and they're constantly repairing or regrowing muscle cells. How does the heart shed microplastic? Can it? How much until the immune system kicks in and starts causing inflammation? Inflammation is generally good, but super dangerous in the heart. Too much inflammation and, well, it stops firing correctly.
> what is not clearly understood is the internal dose in humans.
How much microplastic is too much? Nobody knows. It seems like, when you get too much, repair systems start kicking in. The repair systems can't actually fix anything, but they make you weak/sick, and you're generally worse off. When some other shock hits the system, your body is already in panic mode, so the "normal" response of panicking doesn't change anything. Odds are, you just die.
In plain English: there exists a dosage that causes problems; we don't know what it is. If this trend continues, we will most likely find out the painful way.
While it's still debated, it seems likely that the BPA in plastics will be classified as mildly carcinogenic at some point. It's probably not a big concern to be touching your phone case, but maybe more so if it's millions of particles permanently inside you.
Indeed. Which is why we need collective action to overcome the power corporations have over an uneducated and apathetic population. A population they oversee through regulatory capture, advertising, curated studies, and (increasingly legalized) bribery of government officials.
Are the microplastics found in the environment coming from corporations' production residues or from people's consumption residues?
I would say a relevant part of them comes from individuals. If you believe the issue is important enough, then raising awareness is the right direction, so that they make healthier choices. However, most of the world's population lives in developing countries, and microplastics are very low on their list of priorities; their top priority is actually making it to the end of the month with enough food, shelter, and some medical care or even education for themselves or their kids.
Blaming everything on greedy corporations is a very superficial analysis.
Individuals don’t make the plastic. It may come from post consumer waste, but it is manufactured by the corporations. Individual action is unlikely to make a dent in this issue.
Agreed, though I see the current situation as just yet another result of the inherent flaws in human reasoning, where some significant percentage of the population can't reason well enough to make reasonably optimal choices (e.g., conservative science denialism).
This is why we, as a population, won't agree to curb CO2 sufficiently to address climate change but instead will simply adjust and, maybe eventually, accept geoengineering-based approaches. That said, the next 10 years will be telling in how conservatives react to much higher homeowner's insurance premiums...
> When it comes to microplastic inhalation, Mongolia and China came in joint first place, with citizens of both countries inhaling more than 2.8 million microplastic particles a month. The United Kingdom came in third place, in joint place with Ireland, inhaling 791,500 particles per month. By comparison, the U.S. came in near the bottom of this list, in position 104 out of the 109 countries assessed, with only 10,500 microplastic particles inhaled per month.
They're a special case because they don't care about environmental protection unless there's a photo op and a headline the government can use to look good.
Ask your representatives today to ban all goods using China's supply chain, and start producing all your country's needs on your own soil.
Then measure the inhalable microplastics stats; that way we can know for sure whether your country's "environmental protection" is viable or just an excuse to offload dirty work onto developing countries.
This is an interesting take. Go is great because it balances performance, expressiveness, ease of use, and bug prevention probably better than any other language. I'm very happy Go is the container-world language instead of Rust; Rust is just a pain.
Go is great because its authors happened to have a manager who allowed them to work on their side project to avoid using C++ at Google, and it eventually took off thanks to Docker and Kubernetes pivoting to Go and their subsequent success in the industry.
Had that not been the case, it would have been about as successful in the industry as Oberon-2 and Limbo, its two main influences and (simplifying the actual historical facts) the authors' previous work.