hoosieree's comments

They missed one:

    def agent(data, predicate, fns):
      data = list(data)                        # data is traversed twice below
      ps = map(predicate, data)                # which function index each item gets
      fs = map(lambda i: fns[i], ps)           # look up that function
      return map(lambda f, x: f(x), fs, data)  # apply each item's function to it
Basically you want to apply one of N functions to each item in your data iterable, so you first run a predicate over the data to figure out which function each item gets. The degenerate case is when you have just two functions and the predicate returns a boolean (0 or 1).
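
For example, the degenerate boolean case might look like this (a quick sketch, with made-up keep/negate lambdas):

    nums = [1, -2, 3, -4]
    flipped = agent(nums,
                    lambda x: x < 0,              # predicate: False (0) or True (1)
                    [lambda x: x, lambda x: -x])  # fns[0] keeps, fns[1] negates
    print(list(flipped))                          # [1, 2, 3, 4]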

This is "agent" from J: https://code.jsoftware.com/wiki/Vocabulary/atdot#agent


I like overloading most of the time but prefer specialized syntax for the various conditional idioms. Encapsulating it all in one keyword makes "if" harder to parse.


My classifier is not very accurate:

    is_trick(question)  # 50% accurate
To make the client happy, I improved it:

    is_trick(question, label)  # 100% accurate
But the client still isn't happy because if they already knew the label they wouldn't need the classifier!
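
In other words, the "improved" version is effectively just this (a sketch):

    def is_trick(question, label):
      return label  # "100% accurate", but only because the answer is passed in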

...

If ChatGPT had "sense", your extra prompt should do nothing. The fact that adding the prompt changes the output should be a clue that nobody should ever trust an LLM anywhere that correctness matters.

[edit]

I also tried the original question but followed up with "is it possible that the doctor is the boy's father?"

ChatGPT said:

Yes, it's possible for the doctor to be the boy's father if there's a scenario where the boy has two fathers, such as being raised by a same-sex couple or having a biological father and a stepfather. The riddle primarily highlights the assumption about gender roles, but there are certainly other family dynamics that could make the statement true.


It's not like GP gave task-specific advice in their example. They just said "think carefully about this".

If that's all it takes, then maybe the problem isn't a lack of capabilities but a tendency not to surface them.


The main point I was trying to make is that adding the prompt "think carefully" moves the model toward the "riddle" vector space, which means it will draw tokens from there instead of the original space.

And I doubt there are any such hidden capabilities, because if there were it would be valuable for OpenAI to surface them (e.g. by adding "think carefully" to the default/system prompt). Since adding "think carefully" changes the output significantly, it's safe to assume it is not part of the default prompt, perhaps because adding it isn't helpful for the average query.


LLMs can't hallucinate. They generate the next most likely token in a sequence. Whether that sequence matches any kind of objective truth is orthogonal to how models work.

I suppose depending on your point of view, LLMs either can't hallucinate, or that's all they can do.


>Whether that sequence matches any kind of objective truth is orthogonal to how models work.

Empirically, this cannot be true. If it were, it would be statistically shocking how often models coincidentally say true things. The training does not perfectly align the model with truth, but 'orthogonal' is off by a minimum of 45 degrees.


It matches the training data. Whether the training data matches truth (and whether it's correctly understood - sarcasm included) is a completely separate thing.

> The training does not perfectly align the model with truth, but 'orthogonal'

Nitpicky, but the more dimensions you have, the easier it is for almost everything to be orthogonal. (https://softwaredoug.com/blog/2022/12/26/surpries-at-hi-dime...) That's why averaging embeddings works.
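
A quick sketch of that effect (just random Gaussian vectors, nothing from the linked post):

    import numpy as np
    rng = np.random.default_rng(0)
    for dim in (2, 10, 1000):
        a = rng.standard_normal((1000, dim))
        b = rng.standard_normal((1000, dim))
        cos = (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        # mean |cosine| shrinks toward 0 as dim grows:
        # random pairs of vectors become nearly orthogonal
        print(dim, float(np.abs(cos).mean()))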


I went to school to learn about the world and the overwhelming majority of that learning was from professors and textbooks. Whether the professors' beliefs and the textbooks' contents reflected the true properties of the world was a completely separate thing, entirely outside of my control. But I did come away with a better understanding of the world and few would say that education is orthogonal to that goal.

If you add two vectors that don't have a truth component (i.e. are orthogonal to the truth), the resulting vector should be no closer to the truth. If you start with random weights and perform some operation on them such that the new weights have a higher likelihood of producing true statements, the operation must not have been orthogonal to the truth. Am I wrong there?
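
(The first half is just linear algebra: if a · t = 0 and b · t = 0 for some truth direction t, then (a + b) · t = a · t + b · t = 0, so the sum is still orthogonal to t.)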


> But I did come away with a better understanding of the world and few would say that education is orthogonal to that goal.

That's due to the reward function / environment. But even outside extremes like North Korea, lots of education environments value conformity over independent analysis.


Certainly an AI trained on North Korean data would emerge with some very suspect beliefs regarding Kim Jong-Un. My point is just that aligning something with training data is aligning it with truth, to the degree that the training data is true and regardless of why it is true. educate(me, truth) can hardly be called orthogonal to the truth, even if the 'educate' and 'me' terms do nothing to prevent educate(me, falsehood).


Isn't this the same thing that happens when you train a human on truths vs falsehoods?


Whenever someone takes issue with using the word “hallucinate” with LLMs I get the impression they’re trying to convince me that hallucination is good.

Why do you care so much about this particular issue? And why can’t hallucination be something we can aim to improve?


Video autotune.


Apparently they can haul 50+ sheets of 4x8 plywood with the door closed (and without removing the seats).


Whoa. That's ~3000 pounds.


If it's 50 sheets of 1/8" balsa wood, it's about 166 lbs.

If it's 50 sheets of 1.25" lignum vitae, it's about 13,100 lbs.

3000 lbs seems like a reasonable estimate.


Is it?

How heavy is a sheet of unspecified plywood?


    1 1/8” Plywood - 85 lbs
    3/4” Plywood - 61 lbs
    5/8” Plywood - 50 lbs
    1/2” Plywood - 41 lbs
    3/8” Plywood - 36 lbs
    1/4” Plywood - 22 lbs
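
So for a common 3/4” sheet, 50 × 61 lb ≈ 3,050 lb, which lines up with the ~3,000 lb estimate above.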


about 50


In the sci-fi series The Expanse, there's a Bezos-style rich tycoon character who flaunts his wealth by purposefully not getting hair treatments and instead allows his male pattern baldness to be on display. He's so rich he can afford to not care what anyone thinks, and he wants everyone to know it.


Interesting anecdote about a work of fiction, but what does it have to do with anything?


The comment is referring back to "not compensating for anything". Choosing to keep a balding head and not caring what other people think is a power move when having a full head of hair becomes trivial.


It's kind of entertaining that both are vanity.


It seems pretty clearly related to the post it is a reply to.


Drawing a moral equivalency to some random event in some random contemporary work of fiction as a means of moralizing isn't some kind of awesome megadunk “own” or anything—it's actually pretty lame.


Why do you think it was attempting to be anything of the sort?


Guy 1 jokes about how he's so confident in himself he drives a minivan.

Person 2 says "ha that reminds me of a character in a story".

It's two people joking in a subthread.


Heard of the Jevons paradox?


I've heard it's mostly a fallacy. The Jevons paradox is the assertion that demand for energy is so elastic that increases in efficiency raise demand enough to overwhelm the savings from the efficiency gain. In most cases this isn't true. One must be careful, when looking at historical evidence, not to argue "efficiency increased, and demand increased, therefore demand increased because of efficiency", which is just the post hoc fallacy.


I guess your point is that as energy gets cheaper, demand will increase, thereby negating the savings. Well, that has not been the case when it comes to power. Look at the cost over the last 200 years and you'll see that it has dropped over that time: energy costs a fraction of what it did then, yet demand has exploded. There's no reason to think things will be different in the future.


I used to work with psychology researchers conducting experiments with wearable cameras. Anything involving human subjects needed IRB approval, informed consent, ethics review, etc.

But with essentially any piece of tech you use (not just FB), you check "I agree" on a document you'll never read and give the same data to a private company who will use it however they want. And they charge you for it.

Imagine if I told you a research organization decided to throw out all their ethics and start charging their research subjects to be experimented on, and that this was actually a really solid business model.


"Septic Tanks Pumped. Swimming Pools Filled. Not Same Truck."

