
I think there are two interesting aspects: speed and scale.

To explain the scale: I am always fascinated by how societies changed as they scaled up (from tribes to cities to nations, ...). It's sort of obvious, but when we double the number of people, we get to do more. With the internet we got to connect the whole globe, but transmitting "information" is still not perfect.

I always think of ants and how they can build their nests with zero understanding of what they are doing. It just somehow works because there are so many of them. (I know, people are not ants.)

In that way I agree with the original take: AGI or not, the world will change. People will get AI in their pockets. It might be dumber than us (hopefully). But things will change, because of the scale. And because of how it helps to distribute "the information" better.



To your interesting aspects, you're missing the most important one (IMHO): accuracy. All three are really quite important; miss any one of them and the other two are useless.

I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.

> And because of how it helps to distribute "the information" better.

This I find interesting because there's another side to that coin. Try it yourself: do a Google image search for "baby owlfish".

Cute, aren't they? Well, it turns out the results are not real. Being able to mass-produce disinformation at scale changes the whole ballgame. There are now a very large number of people who have a completely incorrect belief about what a baby owlfish looks like.

AI pumping bad info onto the internet is something like the end of the information superhighway. It's no longer information when you can't tell what's true and what isn't.


> I'd also question how you know that ants have zero knowledge of what they do. At every turn, animals prove themselves to be smarter than we realize.

Sure, one can't know what they really think. But there are computer simulations showing that with simple rules for each individual, you can achieve "big things" (which are not possible to predict by looking at any single individual).
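
Here is a minimal sketch of that kind of simulation in Python. To be clear, everything in it is an illustrative assumption of mine, not taken from any particular model: the grid size, the 10% exploration rate, the evaporation factor, and the greedy homing rule. Each ant follows two purely local rules: lay pheromone and head home when carrying food; otherwise mostly follow the strongest nearby pheromone, with occasional random exploration.

    import random

    # Illustrative parameters, not from any real ant-colony study.
    SIZE = 20
    NEST, FOOD = (0, 0), (SIZE - 1, SIZE - 1)
    pheromone = [[0.0] * SIZE for _ in range(SIZE)]

    def neighbors(x, y):
        # Cells reachable in one 4-directional step, clipped to the grid.
        steps = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        return [(nx, ny) for nx, ny in steps if 0 <= nx < SIZE and 0 <= ny < SIZE]

    class Ant:
        def __init__(self):
            self.pos, self.has_food = NEST, False

        def step(self):
            options = neighbors(*self.pos)
            if self.has_food:
                # Rule 1: lay pheromone here, then step greedily toward the nest.
                x, y = self.pos
                pheromone[x][y] += 1.0
                self.pos = min(options, key=lambda p: p[0] + p[1])
                if self.pos == NEST:
                    self.has_food = False
            else:
                # Rule 2: usually follow the strongest nearby pheromone
                # (ties broken randomly), sometimes explore at random.
                if random.random() < 0.1:
                    self.pos = random.choice(options)
                else:
                    self.pos = max(options, key=lambda p: (pheromone[p[0]][p[1]], random.random()))
                if self.pos == FOOD:
                    self.has_food = True

    ants = [Ant() for _ in range(100)]
    for _ in range(5000):
        for ant in ants:
            ant.step()
        for row in pheromone:  # evaporation, so stale trails fade away
            for i in range(SIZE):
                row[i] *= 0.995

    trail = sum(cell > 1.0 for row in pheromone for cell in row)
    print(f"grid cells on the emergent trail: {trail}")

No single ant has a map or a plan, but after enough ticks the pheromone field tends to trace a nest-to-food trail. That trail is the emergent "big thing" you couldn't predict from watching one ant.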

My point is merely that there may be interesting emergent behavior, even if LLMs are not AGI or anywhere close to human intelligence.

> To your interesting aspects, you're missing the most important one (IMHO): accuracy. All three are really quite important; miss any one of them and the other two are useless.

Good point. I would add alignment in general. Even if accuracy were perfect, I would have a hard time relying completely on LLMs. I've heard arguments like "people lie as well, people are not always right, would you trust a stranger? It's the same with LLMs!"

But I find this comparison silly: 1) People are not LLMs; they have a natural motivation to contribute to society in a meaningful way (of course, there are exceptions). If nothing else, they are motivated not to go to jail or lose their jobs and friends. LLMs did not evolve this way. I assume they don't care whether society likes them (or perhaps they somewhat do, thanks to reinforcement learning). 2) Again, obviously: scale and speed. I am not able to write as much nonsense in as short a time as an LLM can.


> But things will change, because of the scale

Yup!

Plus we can't ignore the inherent reflexive and emergent effects, which are unpredictable.

I mean, people are already beginning to talk and/or think like ChatGPT:

https://arxiv.org/pdf/2409.01754



