
This technology has been a true blessing to me. I have always wished to have a personal PhD in a particular subject whom I could ask endless questions until I grasped the topic. Thanks to recent advancements, I feel like I have my very own personal PhDs in multiple subjects, whom I can bombard with questions all day long. Although I acknowledge that the technology may occasionally produce inaccurate information, the significant benefits it offers in terms of enhancing my knowledge are truly tremendous. I am absolutely thrilled with this technology and its potential to support my learning.

Note: As I'm shy of my writing style, GPT helped me refine the above.



If you don't know the subject, how can you be sure what it's telling you is true? Do you vet what ChatGPT tells you with other sources?

I don't really know Typescript, so I've been using it a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.


> Do you vet what ChatGPT tells you with other sources?

I find that ChatGPT is good at helping me with "unknown unknown" questions, where I don't know how to properly phrase my question for a search engine, so I explain to ChatGPT in vague terms how I am feeling about a certain thing.

ChatGPT helps me understand what to search for, and then I take it from there by looking for a reputable answer on a search engine.


That's true. I've also used it for these "unknown unknowns" questions with very good results. Basically, I talk with ChatGPT to figure out what I should put into Google, and from there it's business as usual.

But other than that it makes me nervous when people say they're "learning with ChatGPT": any serious conversation with ChatGPT about a subject I know about quickly shows just how much nonsense and bullshit it conjures out of thin air. ChatGPT is extremely good at sounding convincing and authoritative, and you'll feel like you're learning a lot, when in fact you could be learning 100% made-up facts and the only way to tell is if you understand the subject already.


Perhaps you underestimate how much average people lack the most basic surface-level knowledge of various subjects, and how much value learning the basics can provide.

Some of these people are just learning about the relationship between temperature and pressure, or current and voltage, etc. That's something well within the bounds of LLMs, and it's enriching their lives dramatically.

I asked it a question once to clarify a fact from a book I was reading that temporarily baffled my 2am barely awake mind.

“Why is humid air less dense than dry air? Isn’t water heavier than air?”

It went on to explain the composition of air, the molecular weights of the most common air molecules, and how the molecular weight of water is lower than that of nitrogen (N2) and oxygen (O2).

And my fallacy was in comparing air to liquid water, which people are more familiar with, rather than to water vapor, which is what you'd actually find in humid air.
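For anyone who wants to sanity-check that explanation, here's a rough back-of-the-envelope sketch (approximate molar masses and an ideal-gas assumption, so density scales with average molar mass at fixed temperature and pressure; the 2% water-vapor figure is just an illustrative number):

```typescript
// Approximate molar masses in g/mol
const M_N2 = 28.0;
const M_O2 = 32.0;
const M_H2O = 18.0;

// Dry air is roughly 78% N2 and 21% O2 (ignoring argon etc. for this sketch)
const dryAir = 0.78 * M_N2 + 0.21 * M_O2; // ~28.6 g/mol

// Swap an illustrative 2% of the molecules for water vapor
const humidAir = 0.98 * dryAir + 0.02 * M_H2O; // ~28.3 g/mol

// For an ideal gas at the same temperature and pressure, density is
// proportional to average molar mass, so the humid mixture is less dense.
console.log({ dryAir, humidAir });
```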


Can you go into more depth about

>I don't really know Typescript, so I've been using it a lot to supplement my learning, but I find it really hard to accept any of its answers that aren't straight code examples I can test.

- How are you using it?

- What are the questions you're asking it?

- What are your thoughts about the answers and how are you cross checking them?

Edit:

>If you don't know the subject, how can you be sure what it's telling you is true? Do you vet what ChatGPT tells you with other sources?

I can't, but I can take a look at books I have or search Google to find additional sources.

To me, the biggest power of it is to help me understand and build mental models of something new.


At this point I generally stick to specific small problems like "How can I write a script to convert a Product from the Stripe API into my custom interface?" or "How do I do this thing in SQL?". I trust these answers because I can verify them by reading and running the actual code.
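To make that first kind of question concrete, the answers I can actually check tend to look something like this (the `MyProduct` interface and the field mapping here are hypothetical, just to show the shape of the task; a real Stripe Product has many more fields):

```typescript
// Hypothetical target interface used in my own app
interface MyProduct {
  id: string;
  title: string;
  isActive: boolean;
}

// Minimal shape of the Stripe Product fields I care about
interface StripeProductLike {
  id: string;
  name: string;
  active: boolean;
}

// The kind of small, testable function ChatGPT is good at producing:
// I can run it against real API responses and see whether it behaves.
function toMyProduct(p: StripeProductLike): MyProduct {
  return { id: p.id, title: p.name, isActive: p.active };
}
```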

For more open ended questions I tend to treat it more like a random comment in a forum. For example, I often notice that TypeScript code examples don't use the `function` keyword much; they tend to use arrow functions like `const func = () => blah`. I asked ChatGPT why this is and it gave a plausible answer. I have no idea if what it's saying is true, but it seemed true enough. I give the answer the same amount of trust as I would some random comment on Stack Overflow. The benefit of Stack Overflow, though, is that at least you know the reputation of the person you're talking to.
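For context, the pattern in question looks roughly like this (a contrived sketch, not taken from the actual ChatGPT answer):

```typescript
// Traditional function declaration
function add(a: number, b: number): number {
  return a + b;
}

// The arrow-function style that shows up in a lot of TypeScript examples
const addArrow = (a: number, b: number): number => a + b;

// One practical difference: arrow functions don't bind their own `this`,
// which is part of why they're popular for callbacks.
const result = [1, 2, 3].map((n) => addArrow(n, 1)); // [2, 3, 4]
```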


They asked you questions too, y’know…


Guess my brain skipped over that part. Thanks for pointing that out -- updating my answer


> If you don't know the subject, how can you be sure what it's telling you is true?

People are reading too much into the comment. You wouldn't use ChatGPT to become as knowledgeable as obtaining a PhD. The idea is "If I wanted to ask an expert something, I have easy access to one now."

The real questions are:

1. For a given domain, how much more/less accurate is ChatGPT?

2. How available are the PhDs?

It makes sense to accept somewhat lower accuracy if they are 10 times more available than a real PhD: you'll still learn a lot more, even though you also learn more wrong things. I'll take a ChatGPT that is accurate 80% of the time and available all day and night over a PhD who is accurate 90% of the time but whom I only get for 30 minutes per week.


> If you don't know the subject, how can you be sure what it's telling you is true?

That applies to any article, book, or verbal communication with any human being, not only to LLMs.


This is a pointless whataboutism, but I'll humor you.

I can pick up a college textbook on integral calculus and be reasonably assured of its veracity because it's been checked over by a proofreader, other mathematicians, and the publisher, and finally has been used in classroom environments by experts in the field.


It's unfortunate, but the vast majority of human literature is not up to those standards.


The vast majority of human literature is not worth reading. As long as you pick reputable sources, read great books, and so on, they will be up to those standards.

Of course, it's not a trivial task to find the reputable sources and the great books about a subject you don't know about. But there are many ways to find that out, for example by checking out the curriculum of respected universities to see which textbooks they use.


> I can pick up a college textbook on integral calculus and be reasonably assured of its veracity because it's been checked over by a proofreader, other mathematicians, and the publisher, and finally has been used in classroom environments by experts in the field.

Well, even a very popular scientific theory, one supported by the whole consensus of academia at the time, can be proved wrong decades later.


> Well, even a very popular scientific theory, one supported by the whole consensus of academia at the time, can be proved wrong decades later.

Oddly enough, that's usually only the case for big theories, not for everything. You'd be hard pressed to prove wrong our understanding of how to build bridges, for example.

Would you live in a skyscraper designed by ChatGPT?


> If you don't know the subject, how can you be sure what it's telling you is true?

The same question could be asked when we're learning through books or an expert. There's no guarantee that books or experts are always spitting out the truth.


How do you know what a PhD is telling you is truth?

Unlike the PhD, the AI model has benchmark scores on truthfulness. Right now, they're looking pretty good.


How do we know anything is true??!

Seriously, you're veering into sophistry.

People have reputations. They cite sources. Unless they're compulsive liars, they don't tend to just make stuff up on the spot based on what will be probabilistically pleasing to you.

There are countless examples of ChatGPT not just making mistakes but making up "facts" entirely from whole cloth, not based on misunderstanding or bias or anything else, but simply because the math says it's the best way to complete a sentence.

Let's not use vacuous arguments to dismiss that very real concern.

Edit: As an aside, it somehow only now just occurred to me that LLM bullshit generation may actually be more insidious than the human-generated variety as LLMs are specifically trained to create language that's pleasing, which means it's going to try to make sure it sounds right, and therefore the misinformation may turn out to be more subtle and convincing...


The way in which this kind of error deviates from what a human would do is generally trivial: “confidently stating bs” is the same as how mistakes from human professionals often manifest—it will be this way anytime the person doesn’t realize they’re making a mistake.

The only real difference is that you’re imputing a particular kind of intention to the AI, whereas the human’s intention can be assumed good in the above scenario. The BS vs. unknowing-falsehood distinction is purely intention based, and a category error to attribute to an LLM.


> The way in which this kind of error deviates from what a human would do is generally trivial

That's not even remotely true and if you've worked with these technologies at all you'd know that. For example, as I previously mentioned, humans don't typically make up complete fiction out of whole cloth and present it as fact unless those humans possess some sort of mental illness.

> The only real difference is that you’re imputing a particular kind of intention to the ai

No, in fact I'm imputing the precise opposite. These AIs have no intention because they have no comprehension or intelligence.

The result is that when they generate false information, it can be unexpected and unpredictable.

If I'm talking to a human I can make some reasonable inferences about what they might get wrong, where their biases lie, etc.

Machines fail in surprising, unexpected, and often subtle ways that make them difficult for humans to predict.


I don’t think you’re intending to impute intention, it’s just an implication of statements you made: “making stuff up on the spot” and “bullshit generation” vs unknowingly erring—these are all metaphors for human behaviors differing in their backing intention; your entire message changes when you use some form of “unknowingly erring“ instead, but then you lose the rhetorical effect and your argument becomes much weaker.

> that's not even remotely true and if you've worked with these technologies at all you'd know that

I have spent a good amount of time working with llms, but I’d suggest if you think humans don’t do the same thing you might spend some more time working with them ;)

If you try, you can find really bad edge cases, but otherwise wild deviations from the truth in an otherwise sober conversation with e.g. ChatGPT rarely occur. I’ve certainly seen it in older models, but I don’t think it’s come up once when working with ChatGPT. (I’m sure I could provoke it to do this, but that kinda deflates the whole unpredictability point; I’ll concede that if I had no idea what I was doing I could also just accidentally run into this kind of scenario once in a while and not have the sense to verify.)

> If I'm talking to a human I can make some reasonable inferences about what they might get wrong, where their biases lie, etc.

Actually, with the right background knowledge you can do a pretty good job of reasoning about these things for an LLM, whereas you may be assuming you can do it for humans in general better than you actually can.


YouTube, Twitter, Facebook, newspapers, television, and auditoriums are filled with people who fill the world with pleasing-sounding but utterly incorrect or misleading content. Humans are very good at convincing others their lies are true.


People don’t lie (“hallucinate”) in the way that LLMs do. If you’re having a friendly chat with a normal person they’re not going to start making up names and references for where they learned some fact they just made up.

Edit: Please stop playing devil's advocate and pay attention to the words “in the way that LLMs do”. I really thought it would not be necessary to clarify that I know humans lie! LLMs lie in a different way. (When was the last time a person gave you a made-up URL as a source?) Also, I am replying to a conversation about a PhD talking about their preferred subject matter, not a regular person. An expert human in their preferred field is much more reliable than the LLMs we have today.


It's not about humans lying. It's about our memory getting corrupted over time where the stuff we think we're sure of is actually wrong or a misrepresentation of facts. Our recollection of things is a mix of real things and hallucinations. Witnesses provide wildly different accounts of the same event all the time.

This applies to PhDs as well and I don't agree that an expert human is automatically more reliable.


Are you sure about that? I can't count the number of times I've heard people spout marketing copy, word for word, to me while they think it's 100% true.


Are we talking about a conversation with a PhD in their preferred subject matter or not? That’s the line of argument I was responding to. I feel like as soon as we talk about LLMs the devils advocates come out of the woodwork.


While your basic point here is solid, the difference is that I am fairly sure you could count the number of times, if it actually mattered to you.


Some people do, but we don't consider them to be good members of society.


Yes this is why I specified “having a friendly chat with a normal person.”


People even misremember basic things like who they voted for in the past. Unfortunately I cannot find the study now.


See, that's where ChatGPT would have confidently made up a URL to a made-up story instead of recognizing its limitations.


They definitely do. I do it all the time: I start explaining something only to realize that I'm actually not sure anymore, but by then it's often too late and the best I can do is add a disclaimer. Most people don't even do that.


Humans hallucinate all the time: they consume propaganda or conspiracy theories and then tell you lies while thinking they are right and everybody else is wrong.


A PhD will tell you if you're asking the wrong question. Human empathy allows us to intuit what a person's actual goals might be and provide a course correction.

For example, on Stack Overflow you'll see questions like how do I accomplish this thing, but the best answer is not directly solving that question. The expert was able to intuit that you don't actually want to do the thing you're trying to do. You should instead take some alternative approach.

Is there any chance that models like these are able to course correct a human in this way?


Jeesh, don't bring this up; you're apt to get ten people arguing about the XY problem instead, and why you should or shouldn't do 10 other things, rather than asking the user if they are on a legacy system where they can't make major changes.


My experience has been that the answers are very convincing, but not necessarily true. I would be careful asking GPT questions about abstract knowledge, less so about linguistic structure.


That's exactly it. The bot espouses facts with the same tone of confidence regardless of whether they're true or entirely fictional.

I understand it has no sense of knowledge-of-knowledge, so (apparently) no ability to determine how confident it ought to be about what it's saying — it never qualifies with "I'm not entirely sure about this, but..."

I think this is something that needs to be worked in ASAP. It's a fundamental aspect of how people actually interact. Establishing oneself as factually reliable is fundamental for communication and social cohesion, so we're constantly hedging what we say in various ways to signify our confidence in its truthfulness. The absence of those qualifiers in otherwise human-seeming and authoritative-sounding communication is a recipe for trouble.


This is a particular alignment issue. People are used to people spouting bullshit all the time, as long as it's aligned to what we are used to. Take religion for example. People tend to be very confident around the unknowable there.

It is scary in the sense that people love following confident sounding authoritarians, so maybe AI will be our next world leader.


Presidential speech writers are quaking in their boots.


They weren't true in past iterations. Since the new version is 10x as accurate (if you believe the test score measures, going from bottom 10% score to top 10%), we're going to see a lot less confident falseness as the tech improves.


I don't think ChatGPT should be trusted at all until it can tell you roughly how certain it is about an answer, and until that self-reported confidence roughly corresponds to how well it will do on a test in that subject.

I don't mind it giving me a wrong answer. What's really bad is confidently giving the wrong answer. If a human replied, they'd say something like "I'm not sure, but if I remember correctly..", or "I would guess that..."

I think the problem is they've trained ChatGPT to respond confidently as long as it has a rough idea of what the answer could be. The AI doesn't get "rewarded" for saying "I don't know".

I'm sure the data about the confidence is there somewhere in the neural net, so they probably just need to somehow train it to present that data in its response.


I'm very excited for the future wave of confidently incorrect people powered by ChatGPT.


We've had this before Chat and we'll have this after Chat.


That's as useless of a statement as saying we had <insert_anything> before and we have <insert_same_thing> now.


oh... 100% it's a useless statement, but what else can be said to your comment?


The point was quantity is important. Of course a lot of things were there before, but the same things being more common now would be worse.


"The existence of ChatGPT does not necessarily make people confidently incorrect."

- ChatGPT


You're going to get confidently incorrect arguments on the internet straight from ChatGPT without the human filter.


It's a difficult job, but it gets me by.


But it often produces wrong information. If you don't know the subject (since you are learning), how do you distinguish between correct information and incorrect but very plausible-sounding information?


The same way anyone lacking knowledge can confidently say that they got the right information from anyone with experience: you don't. You just trust them. That's what I did with my gastroenterologist. I ended up misdiagnosed for 4 years, and instead of getting the treatment I should have been getting, I lost weight, got osteoporosis, and developed a vitamin D deficiency.

4 years later the second doctor asked me, "I wonder why my colleague decided not to take a tissue sample from [insert some place in the stomach]." I said out loud, "I didn't even know what that is, let alone ask him why he didn't."


> The same way anyone lacking knowledge can confident say that they got the right information from anyone with experience: You don't.

No, that's not the same way that anyone lacking knowledge gains confidence in the things that others tell them.

A technique you can use instead of blindly trusting what one person tells you is seeking out second opinions to corroborate new info. This works for many things you might not have personal experience with: automobiles, construction, finance, medicine, &c.


I had a neurologist prescribe me medications which I didn’t need and which permanently damaged my side vision. Doctors are people too, and all people make mistakes sometimes. It has taught me to always ask a second opinion when it matters. The same maxim applies to chatgpt: when the accuracy matters, look for independent confirmation.


I was misdiagnosed with the 'common' diagnosis by 3 physicians, 2 NPs, 2 PAs, and 1 specialist. 8 years...

Some random redditor ended up figuring it out. Then every physician from that point forward agreed with the diagnosis.

Licensed based medicine :(


Although the technology occasionally produces incorrect information, I still find it to be a helpful learning tool. I break down the information into bullet points and cross-check it with other sources to differentiate between accurate and inaccurate information--I know this isn't infallible. One of the advantages of using this technology is that it often presents me with new and intriguing information, which I might not have found otherwise. This allows me to ask new questions and explore the subject matter more profoundly, resulting in a better understanding and an opportunity to create a mental model.


Besides the fact that this comment reads like it was written by GPT itself, using this particular AI as a source for your education is like going to the worst university out there.

I am sure that if you always wished to have a personal PhD in a particular subject, you could find shady universities out there that could provide one without much effort.

[I may be exaggerating, but the point still stands because the previous user also didn't mean a literal PhD]


I don't think that's the user's intended meaning of "personal PhD": they don't mean a PhD or PhD-level knowledge held by themselves; they mean having a person with a PhD whom they can call up with questions. It seems like in some fields GPT-4 will be on par with even PhD friends who went to reasonably well-respected institutions.


exactly


This comment (this one right here) wasn't written with GPT, but I did have the other one refined by it. I think in elongated thoughts and a lot of continuations, which makes me a bit shy about my writing. Because of that, I use it to help me find different ways to improve my writing.

I live near UCI and yes, I can find one, but at a sizable cost. I'm not opposed to that, but it's still a good chunk of money.


ChatGPT won't really help you improve your writing. It's got a terribly standard and boring voice. Most of the time it generates five-paragraph essays that are super easy to sniff out. It might give you a couple of common words it found in its training data to use, but you should stick to your elongated thoughts. Reading your writing out loud and editing will be just as good, if not better, than ChatGPT. Your comment here is pretty good. The first reply you made sounds... soulless.


> like going to the worse University out there.

...without going anywhere.

Wikipedia isn't great compared to a degree from a top university, but it's also readily available and is often a first reference for many of us.


You can't do that yet due to factuality issues, but that's the goal... the future of learning will radically change


I'm actually interested in becoming a private pilot. ChatGPT pointed me to the proper reading material to get started and I'm going through that, using ChatGPT to clarify various concepts I misunderstand or poorly understand. It's been an amazing supplement to my learning.

I can ask it about the certification process, what certified pilots can and can’t do, various levels of certification, etc.


I'm fantastically excited about how it will help people who learn differently than the standard academic model.


I do the same with the writing style! (not in this case)

.... maybe.


it makes shit up still



