trashtester's comments | Hacker News

The fears from 3 Mile Island and Fukushima were almost completely irrational. The death toll from those was too low to measure.

And the fears from Chernobyl were MOSTLY irrational.

The extreme fears generated by even very moderate spills from nuclear plants come in part from the association with nuclear bombs and in part from fear of the unknown.

A lot of (if not most) people shut their rational thinking off when the word "nuclear" is used, even those who SHOULD understand that a lot more people die from coal and gas plants EVERY YEAR than have died from nuclear energy throughout history.

Indeed, the safety level at Chernobyl may have been atrocious. But so was the coal industry in the USSR. Even considering the USSR alone, coal caused a similar number of deaths (or a bit more) EVERY YEAR as Chernobyl caused in total [1].

[1] https://www.science.org/doi/pdf/10.1126/science.238.4823.11....


The tragedy of Chernobyl is that it is seen as a failure of nuclear energy, rather than a failure of the Soviet government.


I think you're underselling it - the atrocious safety level at Chernobyl appears to still be an improvement on the coal industry held to a high standard. It is a horrible irony that the environmentalist movement managed to do such incredible damage to the environment by their enthusiastic attacks on nuclear.


The "next token prediction" is a distraction. That's not where the interesting part of an AI model happens.

If you think of the tokenization near the end as a serializer, something like turning an object model into json, you get a better understanding. The interesting part of an OOP program is not in the json, but in what happens in memory before the json is created.
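
A crude sketch of that analogy (Python; the class and numbers are invented purely for illustration):

    import json

    class PortfolioModel:
        def __init__(self, prices):
            self.prices = prices

        def analyze(self):
            # "The interesting part" happens here, in memory.
            mean = sum(self.prices) / len(self.prices)
            variance = sum((p - mean) ** 2 for p in self.prices) / len(self.prices)
            return {"mean": mean, "variance": variance}

    model = PortfolioModel([101.0, 99.5, 103.2, 98.7])
    result = model.analyze()     # the actual computation
    print(json.dumps(result))    # the "serialization" step, analogous to emitting tokens

Judging the program by the JSON alone would miss where the computation actually happened; the claim is that judging an LLM by next-token prediction makes the same mistake.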

Likewise, the interesting parts of a neural net model, whether it's LLMs, AlphaProteo or some diffusion-based video model, happen in the steps that operate in their latent space, which is in many ways similar to our subconscious thinking.

In those layers, the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text, images, video etc used to train them. Also, many of these patterns generalize when different modalities are combined.

From this latent space, you can "serialize" outputs in several different ways. Text is one, image/video another. For now, the latent spaces are not general enough to do all equally well, instead models are created that specialize on one modality.

I think the step to AGI does not require throwing a lot more compute into the models, but rather having them straddle multiple modalities better, in particular these:

- Physical world modelling at the level of Veo3 (possibly with some lessons from self-driving or robotics models for elements like object permanence and perception)
- Symbolic processing of the best LLMs.
- Ability to be goal-oriented and iterate towards a goal, similar to the Alpha* family of systems.
- Optionally: optimized for the use of a few specific tools, including a humanoid robot.

Once all of these are integrated into the same latent space, I think we basically have what it takes to replace most human thought.


>which is in many ways similar to our subconscious thinking

this is just made up.

- we don't have any useful insight on human subconscious thinking.
- we don't have any useful insight on the structures that support human subconscious thinking.
- the mechanisms that support human cognition that we do know about are radically different from the mechanisms that current models use. For example we know that biological neurons & synapses are structurally diverse, we know that suppression and control signals are used to change the behaviour of the networks, we know that chemical control layers (hormones) transform the state of the system.

We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.

Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!


Yea whenever we get into this sort of “what’s happening in the network is like what’s going on in your brain” discussion people never have concrete evidence of what they’re talking about.


The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.

Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.
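
To make the injury point concrete, here is a minimal sketch with a toy network (PyTorch; the sizes, data and training loop are invented for illustration, nothing like a real LLM):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

    # Toy regression task standing in for "the original task".
    x = torch.randn(512, 16)
    y = x.sum(dim=1, keepdim=True)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    def train(steps, masks=None):
        for _ in range(steps):
            loss = nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            if masks:  # keep the "injured" weights pinned at zero
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.data *= masks[name]
        return loss.item()

    print("trained:", train(300))

    # "Injury": randomly zero out roughly half of each weight matrix.
    masks = {name: (torch.rand_like(p) > 0.5).float()
             for name, p in model.named_parameters() if p.dim() > 1}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.data *= masks[name]
    print("after masking:", nn.functional.mse_loss(model(x), y).item())

    # Retrain with the masks held in place, so only surviving weights adapt.
    print("after retraining:", train(300, masks))

On a toy task like this, the masked model typically loses a lot of accuracy and then recovers most of it after retraining the surviving weights.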

Obviously there are still gaps in ML architectures compared to biological brains, but there's no particular reason to believe they're fundamental to existence in silico, as opposed to myelinated bags of neurotransmitters.


>The diversity is itself indicative, though, that intelligence isn't bound to the particularities of the human nervous system. Across different animal species, nervous systems show a radical diversity. Different architectures; different or reversed neurotransmitters; entirely different neural cell biologies. It's quite possible that "neurons" evolved twice, independently. There's nothing magic about the human brain.

I agree - for example octopuses are clearly somewhat intelligent, maybe very intelligent, and they have a very different brain architecture. Bees have a form of collective intelligence that seems to be emergent from many brains working together. Human cognition could arguably be identified as having a socially emergent component as well.

>Most of your critique is surface level: you can add all kinds of different structural diversity to an ML model and still find learning. Transformers themselves are formally equivalent to "fast weights" (suppression and control signals). Continuous learning is an entire field of study in ML. Or, for injury, you can randomly mask out half the weights of a model, still get reasonable performance, and retrain the unmasked weights to recover much of your loss.

I think we can only reasonably talk about the technology as it exists. I agree that there is no justifiable reason (that I know of) to claim that biology is unique as a substrate for intelligence or agency or consciousness or cognition or minds in general. But the history of AI is littered with stories of communities believing that a few minor problems just needed to be tidied up before everything works.



The bullet list is a good point, but:

> We also know that biological neural systems continuously learn and adapt, for example in the face of injury. Large models just don't do these things.

This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.

> Also this thing about deeper and deeper realities? C'mon, it's surface level association all the way down!

To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface level association after surface level association, with enough levels to make it seem deep? I do not know how many grains make the heap.


>This is a deliberate choice on the part of the model makers, because a fixed checkpoint is useful for a product. They could just keep the training mechanism going, but that's like writing code without version control.

Training more and learning online are really different processes. In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.

>To the extent I agree with this, I think it conflicts with your own point about us not knowing how human minds work. Do I, myself, have deeper truths? Or am I myself making surface level association after surface level association, with enough levels to make it seem deep? I do not know how many grains make the heap.

I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.

Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.


> In the case of large models I can't see how it would be practical to have the model learn as it was used because it's shared by everyone.

Given it can learn from unordered text of the entire internet, it can learn from chats.

> I can't speak for your cognition or subjective experience, but I do have both fundamental grounding experiences (like the time I hit my hand with an axe, the taste of good beer, sun on my face) and I have used trial and error to develop causative models of how these come to be. I have become good at anticipating which trials are too costly and have found ways to fill in the gaps where experience could hurt me further. Large models have none of these features or capabilities.

> Of course I may be deceived by my cognition into believing that deeper processes exist that are illusory because that serves as a short cut to "fitter" behaviour and evolution has exploited this. But it seems unlikely to me.

Humans are very good at creating narratives about our minds, but in the cases where this can be tested, it is often found that our conscious experiences are preceded by other brain states in a predictable fashion, and that we confabulate explanations post-hoc.

So while I do not doubt that this is how it feels to be you, the very same lack of understanding of causal mechanisms within the human brain that makes it an error to confidently say that LLMs copy this behaviour also means we cannot truly be confident that the reasons we think we have for how we feel/think/learn/experience/remember are, in fact, the true reasons for how we feel/think/learn/experience/remember.


As far as I understood, any AI model is just a linear combination of its training data. Even if that were a corpus as large as the entire web... it's still just a sophisticated compression of other people's expressions.

It has not made its own experiences, nor interacted with the outer world. Dunno, I won't rule out that something operating solely on language artifacts can develop intelligence or consciousness, whatever that is... but so far there are also enough humans we could care about and invest in.


LLMs are not a linear combination of training data.

Some LLMs have interacted with the outside world, such as through reinforcement learning while trying to complete tasks in simulated physics environments.


Just because humans can describe it, doesn't mean they can understand (predict) it.

And the web contains a lot more than people's expressions: think of all the scientific papers with tables and tables of interesting measurements.


> the AI models detect deeper and deeper patterns of reality. Much deeper than the surface pattern of the text

What are you talking about?


For now, the people able to glue all the necessary ingredients together are the same ones who can understand the output if they drill into it.

Indeed, these may be the last ones to be fired, as they can become efficient enough to do the jobs of everyone else one day.


Ironically, finding ways to spin stories like that IS one way of taking responsibility, even if it's only a way to take responsibility for the narratives that are created after things happen.

Because those narratives play an important role in the next outcome.

The error is when you expect them to play for your team. Most people will (at best) be on the same team as those they interact with directly on a typical day. Loyalty 2-3 steps down a chain of command tends to be mostly theoretical. That's just human nature.

So what happens when the "#¤% hits the fan is that those near the top take responsibility for themselves, their families and their direct reports and managers first. Meaning they externalize damage elsewhere, which would include "you and me".

Now this is baseline human nature. Indeed, this is what natural empathy dictates. Because empathy as an emotion is primarily triggered by those we interact with directly.

Exceptions exist. Some leaders really are idealists, governed more by the theories/principles they believe in than the basic human impulses.

But those are the minority, and this may even be a sign of autism or something similar where empathy for oneself and one's immediate surroundings is disabled or toned down.


I was with you until you casually diagnosed a large group of people with autism using incorrect criteria.


It would not collapse. But it would shift some purchasing power from the middle class to the working class if all of them left, as working class salaries would go up even faster than the inflatino it would cause.


The middle class and the working class are the same thing. If you have to work to live, you are working class, it doesn't matter how much income you make or how many investment properties you own.

The whole working class/middle class divide was made up by the rich to get you to vote against your interests, and propped up by pick-mes who want to feel like they're better than someone.


Our economy absolutely would collapse. Our entire farming industry exists because of heavily abused immigrant labor, and is a job that Americans refuse to take. We've made multiple swings and attempts at getting Americans to do this work [1] but it's low pay, low benefits and grueling work. Farmers literally could not afford the actual salary needed to attract people to do said labor, and it would cause food prices across the US to skyrocket.

The only way this would stabilize is if the government came in and subsidized and socialized farm work heavily and that would also never happen.

[1] https://www.npr.org/sections/thesalt/2018/07/31/634442195/wh...


If all illegals disappeared Thanos-style, the end result would be certain crops becoming massively expensive, and a greater dependency on machine-farmable crops, like corn.


And some weird severe-but-short-term economic volatility.

Something along the lines of:

Now nobody is picking fruits, all the fruits die on the tree/vine, so there's none of that in the supermarket and those farms go bankrupt. Also, most of those who were paid to butcher the cattle are gone, but the cows are still there, costing the farmers money, so those farms go bankrupt. And then so do the feed suppliers for cattle farmers that don't ranch (or do but need extra feed besides the grass). But everyone still needs to eat, which means there's correspondingly more demand for the stuff which is heavily mechanised, so prices for that go way up, but because this is an instant supply shock the average person is still hungry no matter what the prices are, unless the humans start eating alfalfa en masse.


Not only that, most of the construction and home services companies are run by white American folks who come and give you a very inflated price and then send the immigrants to do the actual hard work. It's crazy when you speak to the people doing the work how much they are getting paid vs how much you are paying.


Why would it not happen? It would be yet another opportunity for the God King to give handouts to his subjects.


> inflatino

please tell me that's not a typo /g


> working class salaries would go up even faster than the inflatino it would cause.

Good, working class salaries need to be high! We can't function as a society without all the people who dispose of our garbage, maintain our plumbing and water supply, grow our food, provide our electricity, and support our IT infrastructure.

Only one clarification: there is no difference between lower class, middle class and working class. White collar or blue collar, we are all collared.

The middle class construct was artificially invented so the owning class (people who don't have to work for their money) would have something for the workers to aspire to. A software engineer making $300k/y is far closer wealth-wise to a minimum wage worker at Mickey D's than they are to your Tim Cooks, Jeff Bezos and Adolf, pardon, Elon Musk.

In a Tesla, a Prius or an old Dodge, you are still stuck in traffic while they are flying in private jets.


We are all closer to being millionaires than Jeff Bezos.


Exactly. It would rebalance the value provided by blue collar work. They could finally demand a higher wage without being undercut by illegal workers.


I find it super funny that the average Joe will condemn an "illegal" worker for undercutting another, but never ask themselves who employed that "illegal" worker.

How come the blame is not placed solely on the businesses willingly employing thousands of "illegal workers", paying neither taxes on them, nor health benefits, nor even minimum wage, and in that way undercutting the value of blue collar labor?


Presidents may not be able to pardon themselves, but they ARE immune from prosecution through the regular legal system for any actions taken as part of the office of president.

The only way to go after them (given the current SCOTUS, which made the ruling above) is impeachment. And for that, the president has to do something so bad that 67 senators are willing to find the president guilty.


Language models are closing the gaps that still remain at an amazing rate. There are still a few gaps, but consider what has happened just in the last year, and extrapolate 2-3 years out....


If the training data had a lot of humans saying "I don't know", then the LLMs would too.

Humans don't, and LLMs are essentially trained to resemble most humans.


Authority, yes, accountable, not so much.

Basically at the level of other publishers, meaning they can be as biased as MSNBC or Fox News, depending on who controls them.


Wikipedia is one of the better sources out there for topics that are not seen as political.

For politically loaded topics, though, Wikipedia has become increasingly biased towards one side over the past 10-15 years.


source: the other side (conveniently works in any direction)

