Basically you want to apply one of N functions to each of the items in your data iterable, so first you have a predicate to figure out which function to apply. The degenerate case is when you have just 2 functions and your predicate returns a boolean (0 or 1).
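A minimal sketch of this in Python (the function and variable names are just illustrative, not from anything above):

    def apply_by_predicate(items, predicate, funcs):
        """Apply funcs[predicate(x)] to each item; with two functions a
        boolean predicate works directly, since False == 0 and True == 1."""
        return [funcs[predicate(x)](x) for x in items]

    # Degenerate two-function case: the predicate returns a boolean.
    halve = lambda n: n // 2
    triple_plus_one = lambda n: 3 * n + 1
    print(apply_by_predicate(range(1, 6), lambda n: n % 2 == 1, (halve, triple_plus_one)))
    # -> [4, 1, 10, 2, 16]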
I like overloading most of the time but prefer specialized syntax for the various conditional idioms. Encapsulating it all in one keyword makes "if" harder to parse.
But the client still isn't happy because if they already knew the label they wouldn't need the classifier!
...
If ChatGPT had "sense", your extra prompt would do nothing. The fact that adding the prompt changes the output should be a clue that nobody should ever trust an LLM anywhere correctness matters.
[edit]
I also tried the original question but followed up with "is it possible that the doctor is the boy's father?"
ChatGPT said:
Yes, it's possible for the doctor to be the boy's father if there's a scenario where the boy has two fathers, such as being raised by a same-sex couple or having a biological father and a stepfather. The riddle primarily highlights the assumption about gender roles, but there are certainly other family dynamics that could make the statement true.
The main point I was trying to make is that adding the prompt "think carefully" moves the model toward the "riddle" vector space, which means it will draw tokens from there instead of the original space.
And I doubt there are any such hidden capabilities, because if there were, it would be valuable to OpenAI to surface them (e.g. by adding "think carefully" to the default/system prompt). Since adding "think carefully" changes the output significantly, it's safe to assume it isn't part of the default prompt, perhaps because adding it isn't helpful for average queries.
LLMs can't hallucinate. They generate the next most likely token in a sequence. Whether that sequence matches any kind of objective truth is orthogonal to how models work.
I suppose depending on your point of view, LLMs either can't hallucinate, or that's all they can do.
>Whether that sequence matches any kind of objective truth is orthogonal to how models work.
Empirically, this cannot be true. If it were, it would be statistically shocking how often models coincidentally say true things. The training does not perfectly align the model with truth, but 'orthogonal' is off by a minimum of 45 degrees.
It matches the training data. Whether the training data matches truth (and whether it's correctly understood - sarcasm included) is a completely separate thing.
> The training does not perfectly align the model with truth, but 'orthogonal'
I went to school to learn about the world and the overwhelming majority of that learning was from professors and textbooks. Whether the professors' beliefs and the textbooks' contents reflected the true properties of the world was a completely separate thing, entirely outside of my control. But I did come away with a better understanding of the world and few would say that education is orthogonal to that goal.
If you add two vectors that don't have a truth component (ie. are orthogonal to the truth), the resulting vector should be no closer to the truth. If you start with random weights and perform some operation on them such that the new weights have a higher likelihood of producing true statements, the operation must not have been orthogonal to the truth. Am I wrong there?
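Spelling out the linear algebra behind that first sentence, with "the truth" as a fixed direction t: if u·t = 0 and v·t = 0, then (u + v)·t = u·t + v·t = 0, so the sum still has zero component along t. Conversely, if an update leaves the weights with a larger component along t, that update cannot have been orthogonal to t.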
> But I did come away with a better understanding of the world and few would say that education is orthogonal to that goal.
That's due to the reward function / environment. But even outside extremes like North Korea, lots of education environments value conformity over independent analysis.
Certainly an AI trained on North Korean data would emerge with some very suspect beliefs regarding Kim Jong-Un. My point is just that aligning something with training data is aligning it with truth, to the degree that the training data is true and regardless of why it is true. educate(me, truth) can hardly be called orthogonal to the truth, even if the 'educate' and 'me' terms do nothing to prevent educate(me, falsehood).
Whenever someone takes issue with using the word “hallucinate” with LLMs I get the impression they’re trying to convince me that hallucination is good.
Why do you care so much about this particular issue? And why can’t hallucination be something we can aim to improve?
In the sci-fi series The Expanse, there's a Bezos-style rich tycoon character who flaunts his wealth by purposefully not getting hair treatments and instead allows his male pattern baldness to be on display. He's so rich he can afford to not care what anyone thinks, and he wants everyone to know it.
The comment is referring back to "not compensating for anything". Choosing to keep a balding head and not caring what other people think is a power move when having a full head of hair becomes trivial.
Drawing a moral equivalency to some random event in some random contemporary work of fiction as a means of moralizing isn't some kind of awesome megadunk “own” or anything—it's actually pretty lame.
I've heard it's mostly a fallacy. Jevons paradox is the assertion that demand for energy is so elastic that increases in efficiency raise demand by enough to overwhelm the savings from the efficiency gain. And in most cases this isn't true. One must be careful when looking at historical evidence not to argue "efficiency increased, and demand increased, therefore demand increased because of the efficiency", which is just post hoc ergo propter hoc.
I guess your point is that as energy gets cheaper, demand will increase, thereby negating the savings. Well, that has not been the case when it comes to power. Look at the cost over the last 200 years and you'll see that it has dropped steadily: energy costs a fraction of what it did then, yet demand has exploded. There's no reason to think things will be different in the future.
I used to work with psychology researchers conducting experiments with wearable cameras. Anything involving human subjects needed IRB approval, informed consent, ethics review, etc.
But with essentially any piece of tech you use (not just FB), you check "I agree" on a document you'll never read and give the same data to a private company who will use it however they want. And they charge you for it.
Imagine if I told you a research organization decided to throw out all their ethics and start charging their research subjects to be experimented on, and that this was actually a really solid business model.
This is "agent" from J: https://code.jsoftware.com/wiki/Vocabulary/atdot#agent
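For anyone who doesn't read J: as I understand it, @. picks one verb out of a gerund using the index returned by a selector verb, then applies that verb to the argument. A rough Python analogue (the names are mine, and this ignores J's rank machinery):

    def agenda(funcs, selector):
        """Return a function that applies funcs[selector(x)] to x,
        loosely mimicking (f0`f1`...) @. selector in J."""
        def dispatch(x):
            return funcs[selector(x)](x)
        return dispatch

    # Roughly the absolute-value example often shown for @. , i.e. ]`-@.(0&>):
    # identity for non-negative arguments, negation otherwise.
    absolute = agenda((lambda y: y, lambda y: -y), lambda y: int(y < 0))
    print(absolute(-7), absolute(3))  # 7 3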