Pure functional programming and lazy evaluation... sure, you could create classes and a meta-function that selectively evaluates thunks one at a time, but the call site of that kind of library would look atrocious.
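Just to make the ergonomics concrete, here's a minimal TypeScript sketch of that thunk approach (the names Thunk, lazy, and expensiveComputation are all made up for illustration):

    // A thunk wraps a computation; calling it evaluates at most once
    // and caches the result.
    type Thunk<T> = () => T;

    function lazy<T>(f: () => T): Thunk<T> {
      let evaluated = false;
      let value: T | undefined;
      return () => {
        if (!evaluated) {
          value = f();
          evaluated = true;
        }
        return value as T;
      };
    }

    function expensiveComputation(): number {
      return 42; // stand-in for real work
    }

    // The call site is where it falls apart: every value has to be
    // wrapped and manually forced with ().
    const x = lazy(() => expensiveComputation());
    const y = lazy(() => x() + 1);
    console.log(y());

Every use site has to know which values are thunks and force them by hand, which is exactly the atrocious part.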
You might be able to hack some of the datatype semantics onto JS's prototype-based inheritance (I'd rather start with TypeScript at that point, but then we're back at the "why isn't it a library" debate) to keep those ontologies from being semantically separate, but that's an uphill battle given some of JS's implicit value conversions.
I consider logic programming languages to be the go-to counterargument to TFA, but yeah, anything with lazy eval and a mature type system is a strong counterexample too.
Almost made it into 1.18, but it looks like it doesn't add enough value and still has some open questions, like what to use for a backing data type and what complexity promises to make.
It's not a yes/no per contestant, it's per edge between contestants. There are n(n-1)/2 of these.
A "yes" answer for a potential match is actually a state update for every edge touching either contestant: the matched edge becomes true, and the other 2(n-2) edges incident to the pair can be updated to false. Some of these may already be known from previous rounds' matchups, but that's still more than a single binary.
An answer of "yes" will generally eliminate many edges, with potential for >1 bit. However, an answer of "no" will generally eliminate just that one edge, which is generally <1 bit.
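To put numbers on it, here's a toy sketch of the edge counting for n contestants (illustrative only, not tied to the show's actual mechanics):

    // Edges are unordered pairs of contestants. A "yes" on pair (a, b)
    // confirms that edge and falsifies every other edge touching a or b:
    // (n - 2) other edges touch a, (n - 2) touch b, none shared.
    function edgesEliminatedByYes(n: number): number {
      return 2 * (n - 2);
    }

    function edgesEliminatedByNo(): number {
      return 1; // only the asked-about edge is ruled out
    }

    const n = 10;
    console.log((n * (n - 1)) / 2);       // 45 edges total
    console.log(edgesEliminatedByYes(n)); // 16 edges falsified by a "yes"
    console.log(edgesEliminatedByNo());   // 1 edge falsified by a "no"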
But you don't receive more than a single binary value; you get a yes or no.
If both of these are equally likely, you gain one bit of information, the maximum possible amount. If you already have other information about the situation, you might gain _less_ than one bit on average (because it confirms something you already knew, but doesn't provide any new information), but you can't gain more.
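Concretely, the expected gain is the binary entropy of the prior probability p of a "yes". A standalone sketch (expectedBits is a made-up helper):

    // Expected information (bits) from a yes/no answer with prior p:
    // H(p) = -p*log2(p) - (1-p)*log2(1-p), maximized at p = 0.5.
    function expectedBits(p: number): number {
      if (p <= 0 || p >= 1) return 0; // answer already known
      return -p * Math.log2(p) - (1 - p) * Math.log2(1 - p);
    }

    console.log(expectedBits(0.5));  // 1 (the maximum)
    console.log(expectedBits(0.9));  // ~0.469
    console.log(expectedBits(0.99)); // ~0.081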
The claim was that one bit was the maximum amount of information you could gain, which is clearly false.
Just to make this unambiguous: If you ask me to guess a number between one and one billion, and by fantastic luck I guess right, your “yes/no” answer obviously gives me more than one bit of information as to the right answer.
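To quantify that: the information carried by one specific answer is its surprisal, -log2(P(answer)). A quick sketch using the one-in-a-billion example above:

    const pYes = 1 / 1_000_000_000; // chance the lucky guess was right

    const bitsFromYes = -Math.log2(pYes);    // ~29.9 bits
    const bitsFromNo = -Math.log2(1 - pYes); // ~1.4e-9 bits
    console.log(bitsFromYes, bitsFromNo);

    // Averaged over both outcomes, the expected gain is still tiny,
    // well under one bit:
    const expected = pYes * bitsFromYes + (1 - pYes) * bitsFromNo;
    console.log(expected); // ~3.1e-8 bits

So the single lucky "yes" carries roughly 30 bits, even though the question's expected yield is almost nothing.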
> The claim was that one bit was the maximum amount of information you could gain, which is clearly false.
That's not what I see.
https://news.ycombinator.com/item?id=46282007

> They have an example that calculates the expected information gained by truth booths and all of the top ones are giving more than one bit. How can this be? It is a yes/no question a max of 1 bit should be possible
https://news.ycombinator.com/item?id=46282343

> the expected information (which is the sum of the outcome probability times the outcome information for each of the two possible outcomes) is always less than or equal to one.
The specific comment you replied to had one sentence that didn't say "expected" or "average", but the surrounding sentences and comments give context. The part you objected to was also trying to talk about averages, which makes it not false.
> If both of these are equally likely, you gain one bit of information, the maximum possible amount. If you already have other information about the situation, you might gain _less_ than one bit on average (because it confirms something you already knew, but doesn't provide any new information), but you can't gain more.
Can’t gain more!
The core confusion is this idea that the answer to a yes/no question can’t provide more than one bit of information, no matter what the question or answer. This is false. The question itself can encode multiple bits of potential information and the answer simply verifies them.
I’m not arguing with that, it’s basic information theory.
One bit, however, is not “the maximum possible amount” you can gain from an oracular answer to a yes/no question. The OP covers exactly this point re: the “Guess Who?” game.
I think that LLMs will be complemented best by a declarative language, as new conditions/effects can be inserted without modifying much (if any!) of the existing code. Especially if the declarative language is a logic and/or constraint-based one.
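As a sketch of why that composes well, here's a hypothetical rules-as-data setup in TypeScript (not any particular engine's API):

    // Each rule pairs a condition with an effect. Adding behavior means
    // appending a rule, not editing the evaluator or the existing rules.
    type State = { temperature: number; fanOn: boolean };
    type Rule = {
      when: (s: State) => boolean;
      then: (s: State) => State;
    };

    const rules: Rule[] = [
      { when: s => s.temperature > 30, then: s => ({ ...s, fanOn: true }) },
      { when: s => s.temperature < 25, then: s => ({ ...s, fanOn: false }) },
      // An LLM (or a person) can insert a new condition/effect here
      // without touching anything above.
    ];

    function step(s: State): State {
      return rules.reduce((acc, r) => (r.when(acc) ? r.then(acc) : acc), s);
    }

    console.log(step({ temperature: 32, fanOn: false })); // fanOn: true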
We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet.
You can go one step further than that and calculate a fairness measure using something like the Gini coefficient (*) and analyze how much it has changed over time.
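For reference, one common discrete form of the Gini coefficient, as a standalone sketch (the sample values are made up):

    // Gini coefficient of non-negative values:
    // G = sum_i sum_j |x_i - x_j| / (2 * n * sum(x)).
    // 0 = perfectly equal; (n-1)/n = one holder has everything.
    function gini(values: number[]): number {
      const n = values.length;
      const total = values.reduce((a, b) => a + b, 0);
      if (n === 0 || total === 0) return 0;
      let diffSum = 0;
      for (const a of values) {
        for (const b of values) diffSum += Math.abs(a - b);
      }
      return diffSum / (2 * n * total);
    }

    console.log(gini([1, 1, 1, 1])); // 0
    console.log(gini([0, 0, 0, 8])); // 0.75

Tracking fairness over time is then just computing this per snapshot and watching the series.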
The source is electrical noise, but the solution of isolating the audio chain from the computer's USB means that in the future you might not notice when you've introduced another GPU memory bandwidth hog into your rendering loop.