
You can still regurgitate a chain of thought response…

It’s all still tokens…



> You can still regurgitate a chain of thought response…

You people are so close to getting it. So close to understanding that you're the ones doing the regurgitating.


What's with AI boosters and not viewing other people as human?


In my experience it's not even dismissing the humanity of others; it's recognizing that their own minds follow similar patterns.

In my youth I lacked the confidence to speak without a sentence "pre-written" in my mind and would stall out if I ran out of written material. It caused delays in conversation, sometimes leaving me lagging minutes behind the chatter of peers.

Since I've gained more experience and confidence in adulthood I can talk normally and trust that the sentence will "work itself out", but it seems like most people gloss over that implicit trust in their own impulses. Being too self-conscious really gets in the way, so I can understand it being something most people would benefit from being able to ignore... selfishly, at least. There's a lot of stupidity from people not thinking through the cumulative/collective effects of their actions if everyone follows the same patterns, though.


I think a lot of this confidence that the sentence will "work itself out" has to do with being able to frame a general direction for the thought before you start, without having the precise sentence. It takes advantage of the continual parallel processing the human brain performs: confidence in a simple structure of what you expect to convey. When LLMs are able to generate this kind of dynamic structure from a separate logical/heuristic process and fill in the blanks efficiently, then I think we are getting close to AGI. That's a very Chomsky-informed view of sentence structure and human communication, but I believe it is correct. Currently the future tokens depend on the probabilistic next token rather than on the outline of a structure determined from the start (sentence or idea structure). When LLMs are able to incorporate such a structured outline I think we will be much closer to AGI, but that is an algorithmic problem we have not yet figured out, and one that may not be feasible until we have parallel processing equivalent to a human brain.
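To make the "future tokens depend on the probabilistic next token" point concrete, here is a minimal toy sketch of autoregressive sampling (the tiny corpus, the bigram table, and the sampling loop are purely illustrative stand-ins, nothing like how a real LLM is built or trained):

    import random
    from collections import defaultdict

    # Toy "training" corpus; a real LLM learns from vastly more data.
    corpus = "the cat sat on the mat and the cat slept".split()

    # Count bigram transitions: which word tends to follow which.
    bigram_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def sample_next(prev_token):
        # Sample the next token from the conditional distribution P(next | prev).
        tokens, weights = zip(*bigram_counts[prev_token].items())
        return random.choices(tokens, weights=weights)[0]

    # Autoregressive generation: each step only sees what has already been emitted.
    # There is no global outline decided up front; structure emerges token by token.
    token = "the"
    output = [token]
    for _ in range(6):
        if not bigram_counts[token]:
            break  # toy dead end; real models emit an end-of-sequence token instead
        token = sample_next(token)
        output.append(token)

    print(" ".join(output))

The point of the sketch is only the shape of the loop: each choice is conditioned on what has already been generated, with no separately represented sentence plan.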


AI bros have a vested interest in people believing the hype that we're just around the corner of figuring out AGI or whatever the buzzword of the week is.

They'll anthropomorphize LLMs in a variety of ways, that way people will be more likely to believe it's some kind of magical system that can do everyone's jobs. It's also useful when trying to downplay the harm these systems cause, like the mass theft of literally everything in order to facilitate training (the favorite refrain of "They learn just like humans do!" - By consuming 19 trillion books and then spitting out random words from those books, yeah real humanlike), boiling entire lakes to power training, wasting billions of dollars etc.

Many of them are also solipsistic sociopaths that believe everyone else is just a useful idiot that'll help make them fabulously wealthy, so they have to bring everyone else down in order for the AIs to seem useful beyond the initial marketing coolness.


Why do you believe that is how humans reason?


Neuroscience and evidence be damned, [the brain is a computer] is being transformed into [the brain is an LLM]. Happens with every new technology.

Edit: What is happening with people these days? They seem to be reducing people and their minds to machines more easily than ever.


The brain is a neural net. And so is the LLM.


> The brain is a neural net

No it isn't; artificial neural networks aren't networks of neurons. They are just named after neurons, but they are nothing alike. Neurons grow new connections and do a whole slew of other things that neural networks don't do.

The ability to grow new connections seems like an integral part of intelligence that neural networks can't replicate, at least not without changing them into something very different from what they are today.


The brain is a neural net. It’s made up of neurons that are networked. You’re just pointing out differences between a LLM and a brain. The fact that they are both neural nets is a similarity.

Forming new connections is not integral. Connections don't form in seconds, so second-to-second existence is a static neural network. Thus your second-to-second existence is comparable to an LLM, in the sense that neither neural network forms new connections.

We can though. We can make neural networks form new connections.


> The brain is a neural net. It’s made up of neurons that are networked

Brain neurons are not the same kind of object as neural-net neurons, hence it's not a neural net. It's like saying a dragonfly is a helicopter because helicopters were designed to look a bit like dragonflies; no, a dragonfly isn't a helicopter, they fly in totally different ways, and the dragonfly has wings and doesn't even move similarly.

> Connections don't form in seconds, so second-to-second existence is a static neural network

But a human isn't intelligent over a second, and synapses do form over longer periods which is what it takes for humans to solve harder problems.

> We can though. We can make neural networks form new connections.

I said the neural network couldn't form those on its own, not that we couldn't alter the neural network. That is a critical difference: our brain updates itself intelligently on its own, and that is integral to its function. If our brains couldn't do that we wouldn't be intelligent; when that ability is lost we say a person has Alzheimer's. You wouldn't hire a person with advanced Alzheimer's precisely because they can't learn.

Edit: So you can see that in brains, being able to form connections makes them smarter. In our neural nets, when we try to alter their connections live they get dumber, which is why we don't do it. That should tell you that we are clearly missing something critical here.


>But a human isn't intelligent over a second, and synapses do form over longer periods which is what it takes for humans to solve harder problems.

So you're saying that as you talk to me right now you're not conscious? Neurons didn't grow connections just now, so you're not alive? You won't form connections in an hour, so during that hour you're some kind of robot, and suddenly when you form a connection you're not?

We have algorithms that can self-evolve neural networks in computers on their own. So you're actually wrong here; there's progress on this front.

I am not missing anything critical. I know there are huge differences between the human brain and an LLM but the statement remains true. Both are neural networks. The problem here is that you either don’t understand vocabulary or you think everyone around you is so dumb they can’t see the differences.

Everyone, and I mean everyone, knows what you're saying in the paragraph above. It's common knowledge anyone can pull out of their ass, and you're regurgitating it as if you're a scientist or expert. Get this: BOTH are still neural networks, and both have commonalities despite the distinctions. Both are also Turing complete.


> The brain is a neural net. It’s made up of neurons that are networked

Fun fact: neural nets are no more made of neurons than the Python programming language is made of snakes.


False fact.


It isn't that simple; the substrate is totally different. The brain is not a GPU, not by any stretch, even if it "runs something analogous to an LLM" inside.


Of course it's not that simple. Who on the face of the earth doesn't know the brain is not a GPU? Think really hard on this: isn't what you stated obvious information to a programmer?

The neuron in ML is based on the simplest mathematical model of what a neuron does. The neuron in a human brain is a physical realization of this with much, much more complexity, where much of that complexity is arbitrary, a side effect of being physically realized.
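For what it's worth, the "simplest mathematical model" being referred to is just a weighted sum passed through a nonlinearity. A minimal sketch (all the numbers below are made up for illustration):

    import math

    def artificial_neuron(inputs, weights, bias):
        # The textbook perceptron-style unit: a weighted sum of inputs plus a bias,
        # squashed by an activation function (a sigmoid here).
        # Real biological neurons involve spiking dynamics, neurotransmitters,
        # dendritic computation, and growth -- none of which appears in this model.
        pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-pre_activation))

    # Example with arbitrary values.
    print(artificial_neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))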

But again, I'm regurgitating what everyone knows.


You've said it, everyone knows this.

You can't build a model out of an object and then say that said object is an instance of the model. The model is a useful derived abstraction that sometimes can serve for better understanding reality and making predictions (emphasis on sometimes), but it is not reality itself.


Only in name, given by people in machine learning. Really it’s more accurate to say they’re both networks, except one is a million times more complicated in its design.


Right, and again. You’re stating obvious information that usually doesn’t need to be stated.

The fundamental mathematical concept of a neuron is what both neural networks operate on in the end.


What's the mathematical concept? A connected graph with weights and activation functions? You're making gigantic simplifications of our brain, most of which isn't even understood, let alone "obeys a fundamental mathematical concept".


These days it's looking closer to a factor of a thousand than a million.

Appearances can be deceptive, and intelligence in brains may be more complex than it currently looks, but appearances are also the only thing most of us can judge it by at this point — while there's more stuff going on in living cells, nothing I've seen says the stuff needed to keep the cells themselves alive is directly contributing to the intelligence of the wet network in my skull.

It's quite surprising that such a simple mind as an LLM is so very capable. What is all the rest of our human brain doing, when the LLMs demonstrate that a mere cubic centimetre of brain tissue could be reorganised to speak 50 languages fluently, let alone all the other things?


This post is full of speculation. For starters, human minds do more than just language. There are some arguments (Moravec's paradox) that posit that language is easy, while keeping the rest of the body working is the real hard problem. I've yet to see a humanoid robot controlled by an LLM that can tackle difficult physical problems like riding a bike or skiing for the first time in real time, just as humans do, which requires coordinating at least 5 senses (some say up to 20, but I digress) plus emotions and a train of thought.

Plus, sure, LLMs are simpler (that only works if you don't count the full complexity of their substrate: a lot of GPUs, interconnects, data centers, etc.), yet they consume an insane amount of energy and matter compared to humans. It is surprising that LLMs are so capable, but for 2000kcal a day humans are still more impressive to me.

People see a model dominating language and start thinking the only thing humans do is high-level language and reasoning. That may be why the brilliant minds of this decade talk about replacing white-collars, but no plate-cleaning bot in sight.

Metacommentary: a lot of LLM-human equality apologists seem to have little knowledge of the AI field and its history, as these themes are not new at all.


There is no speculation. The speculation was added by your own imagination. The LLM and human brain are neural nets. No speculation here, the statement is true despite what you said.


> For starters, human minds do more than just language.

These days, so do the AIs (even though these are still called LLMs, which I think is now a bad name, even though I continue to use the term without always being mindful of this).

> I've yet to see a humanoid robot controlled by an LLM that can tackle difficult physical problems like riding a bike or skiing for the first time in real time, just as humans do, which requires coordinating at least 5 senses (some say up to 20, but I digress) plus emotions and a train of thought.

1) I'd agree that AI are slow learners (as measured by number of examples rather than wall clock). For me, this is more significant than the difference in wattage.

2) 13 years ago, before the Transformer model: https://www.youtube.com/watch?v=mT3vfSQePcs

3) We don't use 5 senses to ride a bike; I can believe touch, vision, proprioception, and balance are all involved, but that's only 4. Why would hearing, smell, or taste be involved?

For emotion, my weakly-held belief is that this is what motivates us and tells us what the concept of "good" even is — i.e. it doesn't give us reasoning beyond being a motivation to learn to reason.

> It is surprising that LLMs are so capable, but for 2000kcal a day humans are still more impressive to me.

Our biological energy efficiency sure is nice, though progress with silicon is continuing even if the rate of improvement is decreasing: https://en.wikipedia.org/wiki/Koomey%27s_law

That said, I don't consider the substrate efficiency to be an indication of if the thing running on it is or isn't "reasoning" — if the same silicon was running a quantum chemistry simulation of the human brain it would be an even bigger power hog and yet I would say it "must be" reasoning, and conversely we use phrases such "turn your brain off and enjoy" to describe categories of film where the plot holes become pot holes if you stop to think about them (the novel I'm trying to write is due to me being nerd-sniped in this way by everything wrong with Independence Day).

> That may be why the brilliant minds of this decade talk about replacing white-collars, but no plate-cleaning bot in sight.

I mean, I've got a dishwasher already… ;)

But more seriously, even though I'm getting cynical about press releases that turn out to be smoke and mirrors, (humanoid) robots doing housework is "in sight" in at least the same kind of way as self driving cars (which has been a decade of "next year honest" mirages, so I'm not giving a timeline): https://www.youtube.com/watch?v=Sq1QZB5baNw

> Metacommentary: a lot of LLM-human equality apologists seem to have little knowledge of the AI field and its history, as these themes are not new at all.

I've been feeling the same, but on both sides, pro and con. The Turing paper laid out all the same talking points I've been seeing over the last few years, and those talking points weren't new when he wrote them down.


> For emotion, my weakly-held belief is that this is what motivates us and tells us what the concept of "good" even is — i.e. it doesn't give us reasoning beyond being a motivation to learn to reason.

I've found I work differently. Some technical problems like architecture can only be solved by aligning my emotions with the problem; then my primal brain finds a good solution that matches my emotional wants.

But when my emotions don't care about the problem my intuition stops working on it, and then I can't find any solution.

Why would emotions be needed to solve problems? Because without emotions you can't navigate a complex solution space to find a good solution; it's impossible. Meaning if we want an AI to make good architecture, for example, we would need to make the AI have feelings about architecture, such as which setups are good in different situations. Without those feelings the AI wouldn't be able to solve the problem well.

Then, after you have come up with a solution using emotions, you can verify it using logic, but you wouldn't have come up with that solution in the first place without using your emotions. Hence emotions are probably needed for truly intelligent reasoning, as otherwise you can't properly guide yourself through complex solution spaces.

You could code something that fills the same role and call it something other than emotions, but if it works the same way and fills the same functions it's still emotions.

(Using this definition, AlphaGo uses emotions to prune which board states to check, things like that; it doesn't know those states are the best ones to check, it's just a feeling it has.)


1. For me transformer multimodal processing is still language, as the work is done via tokens/patches. In fact, that makes its image processing capabilities limited in some scenarios when compared to other techniques.

2. When on a bike you need to be fully alert and listening all the time or you might cause an accident. Sure, you can use headphones, but it is not recommended. Also, that robot you showed is narrow AI; it should be out of the discussion when arguing about complex end-to-end models that are supposedly comparable to humans. If not, we could just code any missing capability as a tool, but that would automatically invalidate the "LLMs reason/think/process the same way as humans" argument.

3. Agree, I expect the same in the future. I'm talking about the present though.

4. ;) a robot that helps with daily chores can't come soon enough. More important than coding AI for me.

PS: Not sure if we're talking about the same argument anymore, I'm following the line of "LLMs work/reason the same as humans" not "AI can't perform as good as humans (now and in the future)"


> PS: Not sure if we're talking about the same argument anymore, I'm following the line of "LLMs work/reason the same as humans" not "AI can't perform as good as humans (now and in the future)"

Yeah, it's hard to keep track :)

I think we're broadly on the same page with all of this, though. (Even if I might be a little more optimistic on tokenised processing of images).


You want to talk evidence, let's talk evidence. What does the brain do that these models don't (or, more generally, can't)?

Be specific.

And no, it doesn't "happen with every new technology." Nothing even remotely like this has been seen before the present decade.


I'm no neuroscientist, but I've hung out with a few, so here are some bits I picked up from them. In biological brains:

- neurons have signal suppression (negative activation) as well as propagation (positive activation)

- neurons appear to process and propagate signals via internal mechanisms that are not fully understood

- there are complex substructures within biological neural nets where the architecture is radically different from other sections of the network, in strong contrast to the homogeneous structure of ANNs

- many different types of neurons, with different properties and behaviors in terms of network formation and network activity, are present in BNNs, in contrast to ANNs

- BNNs learn during processing; ANNs are static after training (see the toy sketch below)
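To illustrate that last point with a deliberately crude toy (a sketch of the idea only, not a claim about any particular system): standard ANN inference reads the weights but never writes them, whereas "learning during processing" would mean the weights change as a side effect of handling each input.

    # Toy single-weight "network" contrasting frozen inference with an online update.
    weight = 0.5

    def infer(x, w):
        # Standard ANN inference: the weight is read, never written.
        return w * x

    def online_update(x, target, w, lr=0.1):
        # A delta-rule style online-learning step: the weight changes as a
        # side effect of processing the input (closer in spirit to a brain
        # that keeps adapting while it runs).
        prediction = w * x
        error = prediction - target
        return w - lr * error * x

    print(infer(2.0, weight))              # inference leaves `weight` untouched
    weight = online_update(2.0, 3.0, weight)
    print(weight, infer(2.0, weight))      # after an online update, behaviour shifts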


Those are mostly structure-versus-function arguments, centered on what a brain is as opposed to what it does. Only the static nature of the current ML models seems like a valid point... and it's changing too.

I'd wager that future models (which admittedly may not look much like today's LLMs) will blur the lines between mutable context and immutable weights. When that happens, all bets are truly off.


> future models (which admittedly may not look much like today's LLMs) will blur the lines between mutable context and immutable weights.

"may not look much like today's LLMs" is really sweeping the whole point under the rug. The difference between what we are doing now with static LLM models and dynamic models will require algorithmic changes that have not yet been invented. That's in addition to the fact that the processing of a single neuron is completely parallel with respect to inputs. GPUs make them more parallel, but the analog devices we are trying to mimic are vastly more complicated than what we are using as a substitute.


Whatever. This ( https://eqbench.com/results/creative-writing-v2/deepseek-ai_... ) is intelligence in action, from an entity with an internal world model that's at least as valid as mine.

Either that, or it's witchcraft.

(Either that, or somebody with both time and talent to waste is taking the piss and pretending to be an AI.)

If anyone ever does succeed in updating weights dynamically on a continuous basis, things are going to get really interesting really quickly. Even now, arguments based on relative complexity are completely invalid. A few fast neurons are at least as good at 'thinking' as a lot of slow ones are.


Hah, "Whatever." Seriously?

The writing is still derivative meandering pulp. No need to invoke magic. It's cool that it can make complete paragraphs though. LLMs are a huge breakthrough. They're just not intelligent.

Updating weights on a continual basis requires processing weights productively on a regular basis. That's many flops per weight away from where we are in processing and bandwidth. The computation limitations still apply.


There are agents now. It's not just paragraphs. And LLMs can beat you on intellectual tasks. I don't see how you can throw that word around as if you know you're more intelligent, when they can clearly be superior to you on many, many tasks.


Metacommentary: I stopped arguing because this thread is reeking of AI-bro nonsense; they can't argue with facts, so they continue with vague statements and deflection (e.g. the "Whatever" you've got above). The GenAI bubble can't burst soon enough.

As I said, neuroscience be damned, we've got transformers now. (/s)


"My king, it's amazing these things can contain and regurgitate knowledge on demand. The next step is an army of Gollems that will destroy your enemies, all I need is three bags of gold." - Sargon the scribe, near Baghdad first month, 2024 BC

"My king, the Hittites have created scrolls that are wide and long despite that we burned their caravan taking palm leaves to the north. They have found a way to flense lambs to make pages themselves, and their ink is pure silver. They will soon have an army of Gollems and will destroy your empire. To stop this all I need is ten bags of gold." - Sargon the scribe, near Baghdad second month, 2024 BC

"My king, alas all the gold has gone, and yet neither will the gollems walk, nor have the Hittities come. My dancing girls grow cold, and I have no figs for my table all I need is 20 bags of gold" - Sargon the scribe, near Baghdad third month, 2024 BC

What happened to Sargon next is not recorded...


You can't win this argument through logic, much less mere rhetoric. You need scientific evidence. Hint: there isn't any.

Edit: OK, let me be less harsh and engage with your argument. Can an LLM have an acid/psilocybin/ketamine trip, distort its view of self and reality, then rewire some of its internal connections and come out a little different based on the experience? I guess not. There are more examples, and they all show that, as far as we know, LLMs are not minds/brains and vice versa, even if they seem similar in a chat interface. (If you don't empathise with that example, remove the drugs and swap in a near-death experience.)

I strongly argue humans are not (just) LLMs, but I think we're far from getting evidence*. I think we will get to AGI before we know how the brain works, and there is no dichotomy in that; if anything the tech is showing us that we can get at least some intelligence on other substrates.

*PS: fwiw, absence of evidence does not support either side. I shouldn't have to remark on that, but here we are.


> You need scientific evidence

This, while true, is putting the cart before the horse.

One needs to define what the question is before it is possible to seek evidence.

I see many things different between Transformer models and brains, but I do not know which, if any, matters.

> Can an LLM have an acid/psilocybin/ketamine trip, distort its view of self and reality, then rewire some of its internal connections and come out a little different based on the experience?

Specifically those chemicals? No, not even during training when the weights aren't frozen, as nobody bothered to simulate the relevant receptors*.

Can LLMs experience anything, in the way we mean it? Nobody even knows. We don't know what the question means well enough to more than merely guess what to look for.

Do the inputs to an LLM, a broader idea of "experience" without needing to solve the question of which definition of consciousness everyone should be using, rewire some of its internal connections? All the time.

* caveat: it doesn't, at first glance, seem totally impossible that the structure of these models is sufficiently complicated for them to create the simulation of those things in the weights themselves in order to better model the behaviour of humans generating text from experiencing those things. But I also don't have any positive reason to expect this any more than I would expect an actor portraying a near-death-experience to have any idea what that's like on the inside.


The question/argument was well defined by parent-grandparent posts: people reason as LLMs, including a suggestion that people discussing here are not being self-conscious of said similarity (i.e. people parroting/regurgitating). You can check it above.

I won't continue with these discussions as I feel people are being just obtuse about this and it is becoming more emotional than rational.


> The question/argument was well defined by parent-grandparent posts: people reason as LLMs, including a suggestion that people discussing here are not being self-conscious of said similarity (i.e. people parroting/regurgitating). You can check it above.

With a lack of precision necessary when the phrase "You need scientific evidence" is invoked.

It's very easy to trip over "common sense" reasoning and definitions — even Newton did so with calculus, and Euclid did so with geometry.

> I won't continue with these discussions as I feel people are being just obtuse about this and it is becoming more emotional than rational.

As you wish. I am relatively unemotional on this.


The brain can invent formal language that enables unambiguously specifying an algorithm that when provided with a huge amount of input data can simulate the output of the brain itself.

Can these models do that?


> The brain can invent formal language that enables unambiguously specifying an algorithm that when provided with a huge amount of input data can simulate the output of the brain itself.

Can *a* brain do this?

Every effort I've seen has (1) involved *many people* working on different parts of the problem, and (2) so far we've only got partial solutions.

Even if you allow collaboration, then when (2) goes away, your question is trivially answered "yes" because we just feed into the model the same experiences that were fed into the human(s) who invented the model, and the simulation then simulates them inventing the same model.


> Even if you allow collaboration, then when (2) goes away, your question is trivially answered "yes" because we just feed into the model the same experiences that were fed into the human(s) who invented the model, and the simulation then simulates them inventing the same model.

These models don't work that way. It's probably possible to make such a model, but currently we don't know how.


I literally said so, yes.


Do you have empirical proof any LLM can generate another program that can be trained to simulate it? Otherwise it is science fiction.


You misread me, but as it happens, I might have seen exactly that.

I didn't ask for it; one of the Phi models got confused partway through and started giving me a Python script for machine learning instead of what I actually asked for.

I'm saying I have no evidence that humans have ever demonstrated this capability either, and if we were to invent such an algorithm and model (even more generally than an LLM), the resulting AI would by definition have to pass this test.


Have experiences and feelings. Have personal knowledge of something.

Be able to express language with a training/knowledge/etc corpus of far smaller size.


A lot of that is provided by multi-modality including feedback from the body and interacting with objects and people with more than just the words. That expands the context of a human's experiences dramatically compared to just reading books.

Plus even when humans are reading books a mental image of what's going on in the story is common. Not everyone has that, but it shows how much a basic LLM lacks and multi-modal would add.

Now the real question to my mind is whether we can train models with actual empathy to learn from the experiences of other people without having to go through the experience directly. Doing so would put them above many individuals' understanding already...


While I agree with:

> Be able to express language with a training/knowledge/etc corpus of far smaller size.

Nobody knows what it would mean to test for:

> Have experiences and feelings.

And I am unclear if:

> Have personal knowledge of something.

Would or would not be met by putting them in charge of a robot of some kind to collect experiences?

I'd certainly agree that the default is to make them "book smart" rather than "street smart", but at most that's an "is not" rather than "cannot".


Equating human reasoning to regurgitating a single token at a time requires that you pretend there are not trillions of analog calculations happening in parallel in the human mind. How could there not be a massive search included as part of the reasoning process? LLMs do not perform a massive search, or in any way update their reasoning capability after the model is generated.


That sword cuts both ways.


I mean, you talk about a predictable, deterministic next-token generator...


Oh stop. The neural network is piecing together the tokens in a way that indicates reasoning. Clearly. I don't really need to say this, we all know it now and so do you. Your statement here is just weak.

It's really embarrassing, the stubborn stance people were taking that LLMs weren't intelligent and wouldn't make any progress towards AGI. I sometimes wonder how people live with themselves when they realize their wrong outlook on things is just as canned and biased as the hallucinations of LLMs themselves.


Personal attacks really make your argument stronger.


I said your statement was weak I didn't say YOU were weak. Don't take it personally. It wasn't meant to be that way. If you take offense then I apologize.

That being said, my argument was a statement about a general fact that is very true. The sentiment not too long ago was these things are just regurgitators with no similarity to human intelligence.

I think it's clear now that all the previous claims were just completely baseless.


> That being said, my argument was a statement about a general fact that is very true.

Is this a general fact that is very true? It sounds like you are judging rather than stating a fact.

> It's really embarrassing, the stubborn stance people were taking that LLMs weren't intelligent and wouldn't make any progress towards AGI. I sometimes wonder how people live with themselves when they realize their wrong outlook on things is just as canned and biased as the hallucinations of LLMs themselves.


Yeah it is. Judgments can be true. Where is my judgment not true? It's embarrassing to be so insistent on being right and then find out they're so wrong.

Also I made a broad and general statement. I never referred to you or anyone personally here. If you got offended it only meant the judgement correctly referred to you and that you fit the criteria. But doesn’t that also make what I said true?


> If you got offended it only meant the judgement correctly referred to you

If you say "I hate all these black criminals I see everywhere" people will get offended even if they aren't black criminals, so your reasoning here is flawed.

And note how that is true even if there are a lot of black criminals around.

Why do people get upset over that? It's because they feel you're judging more people as criminals than actually are. And it's the same with your statement: the way you phrased it, it feels like you're judging way more people than actually fit your description. If you want people not to judge you that way, then you have to change how you write.


Right, but HN is above that. Or so I thought? I thought this was a forum where people intelligently debate concepts and are able to maturely separate them from emotions. Are you saying you're not above that? That if someone points out something that's true about you that you don't like to hear, you get offended and emotional?

If that is the case then I apologize to you for saying something that is true about you and you don’t like to hear or face. Apologies, truly.


I don't think anyone has said LLMs wouldn't make any progress toward AGI, especially of researchers in the field. But a small piece of progress toward AGI is not the same as AGI.


Reasoning is a human, social, and embodied activity. TFA is about machines that output text reminiscent of the results of reasoning, but it's obviously fake, since the machine is neither human, social, nor embodied.

It's an attempt at fixing perceived problems with the query planner in an irreversibly compressed database.


That’s not what reason is. There’s nothing inherently human about reason.


I'm afraid it's not so clear. There are different perspectives on this.

For one, apart from humans, some animals, and now LLMs, there are but few entities that are able to apply reasoning. It may well be that reason is something that exists universally, but empirically this sounds a bit unlikely.


There's a sort of existential dread once man has created a machine that can perform on par with man himself. We don't want to face the truth so we move the goal posts and definitions.

To reason is now a human behavior. We moved the goal posts so we don't have to face the truth that AI crossed a barrier with the creation of LLMs. It doesn't really matter, there's no turning back now.


We can create beings "on par with man himself". If you ask around where you live someone will likely be able to show you some small ones and might perhaps introduce you to the activity that brings them to life.


Right, obviously man/child is on par with man himself.

I’m obviously talking about something on par with man himself but isn’t a man or even human. It is something else.


A non-human that can eat and gossip and make babies? What is this something?


A non human that is more intelligent than you and can outthink you and understand you better than you. That’s where the trendline is pointing.


I think the perspective is diametrically opposite to what you’re suggesting. It’s saying things that human do are not singular or sacrosanct. It’s a full acceptance that humanity will be surpassed.


Some people used to believe that. They imagined reason to be divine, hence the prime status of Aristotle and reason in Roman Catholicism.

But God is dead and has been for some time. You can try to change that if you want, but hitherto none of the prominent professional theologians have managed to wrestle with modern physics, feminist critique, the philosophies of suspicion or capitalism itself and reached a convincing win. Maybe you're the one, you should try and perhaps you'll at least become a free spirit.



