Hacker News

Firing Sam over concerns about safety and commercialization would be so myopic.

OpenAI’s commercial success is the engine that fuels their research and digs their moat.

Slowing down that engine means giving other companies, who may not share Sutskever et al’s safety concerns, the opportunity to catch up.

The best way for OpenAI to preserve AI safety is to stay far ahead of the competition. That’s the only way they can verify safety and install guardrails for the cutting edge.

The doomer board better hope that whichever company eventually surpasses OpenAI is as concerned about safety as they are.



This is a self-contradictory argument. Basically, you’re saying that OpenAI can’t risk acting on safety concerns or they may lose their market edge, in which case their safety concerns would be moot.

I don’t see anywhere that safety is prioritized in either case.


It’d be another story if Altman were one of the tech influencers who goes around saying that AI isn’t dangerous at all and you’re crazy if you have concerns. But he co-signed the human extinction letter! And 20% of OpenAI compute was reportedly allocated to Sutskever’s superalignment team (https://openai.com/blog/introducing-superalignment). From what we know, it’s hard to see how this action was supposed to advance AI safety in any meaningful way.


Can someone please define "safety"? I keep hearing this but could you clarify what that means in practical terms? Is that why there's a "BasedGPT"?


Basically, it amounts to working around the issue that AI has no morals or code of ethics and is unconstrained by certain human limitations, such as processing speed and replication speed. It thinks differently than we do, and these differences will only become more pronounced as capability grows by orders of magnitude.


Neither do search engines, really. Why does ChatGPT have to be considered so different from them, when you could literally run all the same prompts as searches? Many search engines already do the same thing at the very top as a quasi-summary. All that's required is to "string them together".


AI doesn't think, it is just a fancy Mad Libs engine.


Please take a break from posting this dumb crap and actually think about the issue for half a minute. This repeated crap is just a rehash of "Machines can never do X better than people so why worry", well, because shit gets shaken up when machines do X better, faster, and cheaper.


Generative AI, by definition, produces statistically average output.

Humans produce wildly different output, including lots of statistical outliers on both tails of the distribution.

From an economic point of view, all the value creation happens precisely at the tails, where generative AI can't function.
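As a toy sketch of what "statistically average" means mechanically (all numbers hypothetical, not from any real model): sampling temperature controls how much a model's output concentrates on its most likely choice versus the tails of the distribution.

```python
import math

# Toy sketch with made-up logits: temperature scaling of a
# next-token distribution. Lower temperature concentrates
# probability on the most likely ("average") token; the
# low-probability tails effectively vanish.

def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical token scores

p_hot = softmax(logits, temperature=2.0)   # flatter: tails survive
p_cold = softmax(logits, temperature=0.2)  # peaked: mode dominates

assert p_cold[0] > p_hot[0]  # cooling boosts the most likely token
assert p_cold[2] < p_hot[2]  # ...and suppresses the tail
```

Whether that mechanism really rules out tail output in practice is exactly what the replies below dispute.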


Is that why it’s acing all the standardized tests with 90th percentiles? Because it’s average? Should be 50% no?


LLMs are way above 50, and that's not even looking at ideas behind specialized networks that are focused on particular training.

A world where 9 out of 10 people aren't ever going to catch up to the AI specialist submodule is going to have a lot of problems distributing wealth.


> Because it’s average?

Yes.

> Should be 50% no?

No. It's giving the statistically average correct answer, which it already knows because it has been trained on answer books for these standardized tests.


Bottom 90% doesn't count as human in top brain communities like this


Why would an adversarial network be bound to the average?

From an economic point of view, your ideas, where a few people make a trillion dollars and the rest struggle to find a means to eat, are not valuable, unless you're selling the implements of war and strife. You just end up with a crapsack planet where the rich tell the poor that socialism is bad.


Any neural network is just maximizing a likelihood function under the hood.
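A minimal sketch of that claim (a hypothetical one-weight "network", nothing like a real architecture): training by gradient ascent on the data's log-likelihood is the loop being described.

```python
import math

# Toy logistic model: one weight w predicting P(y=1|x) = sigmoid(w*x).
# Training = gradient ascent on the log-likelihood of the data,
# i.e. "maximizing a likelihood function under the hood".

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(w, data):
    # sum of log P(y | x; w) over (x, y) pairs with y in {0, 1}
    total = 0.0
    for x, y in data:
        p = sigmoid(w * x)
        total += math.log(p if y == 1 else 1.0 - p)
    return total

data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]  # made-up examples
w = 0.0
for _ in range(200):
    # numerical gradient ascent step on the log-likelihood
    eps = 1e-5
    grad = (log_likelihood(w + eps, data)
            - log_likelihood(w - eps, data)) / (2 * eps)
    w += 0.1 * grad

# After training, the data is more likely under w than under w = 0.
assert log_likelihood(w, data) > log_likelihood(0.0, data)
```

Real networks swap the single weight for billions of parameters and the numerical gradient for backpropagation, but the objective is the same shape.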


Would you mind making your point in civilized language?

Apart from that, I think everyone has heard your argument as well as the one you are responding to very often by now.


Isn't AI probabilistic? To what extent is thinking also probabilistic?


It’s meant literally. Everyone involved in this story, both Altman and the people who did the coup, agrees that AI is dangerous in the same way that pandemics and nuclear bombs are dangerous.


A lot of HN commenters (and I'm inclined to agree) believe a number of those people involved are pro-safety to defend their moat and promote regulatory capture.

There's a very large contingent of ML researchers who think any idea of AI extinction risk is foolish because we don't have any evidence that intelligence equals compute. I've yet to see a single person give evidence that these two things are equal. What's more, the missing ingredient in any of these AGI extinction scenarios is desire (desire to act, desire to be, desire to love, to kill, etc.), and if you thought there was a paucity of evidence for intelligence = compute, just wait till you see how little evidence there is of transformers showing desire.

There's none. Not a shred of evidence. As ever, it's other human beings that are our greatest extinction risk.


Before the first nuclear test, they ran a calculation to make sure that there's no risk of the whole atmosphere chain-reacting and the world ending. The guy who did it said he was like 96% confident in that it won't happen. And they went with it anyway. Took a 4% risk of blowing up Earth.

Is this a reasonable chance to take?

"We don't have any evidence that intelligence equals compute" is worse than "we're 96% confident it doesn't". Ilya Sutskever clearly believes this is a real risk (otherwise he wouldn't have thrown his reputation and wealth to the wind by firing his cofounder yesterday). He is one of the foremost experts. So are two of the "fathers of AI", Hinton and Bengio, who both have no interest in creating a moat and yet signed letters saying "this is an extinction risk, let's treat it like nuclear", one of them quitting his job to be able to say it.

We don't have much evidence for or against, but that's not a great argument against "if it's true, we all die".


I don't think the comparison is fair, because chemical reactions and how they work were part of the model behind the theory that nuclear testing might blow up the world. We were dealing with real, testable models of the world all the way up to the nuclear tests.

This is more like, if I smash two fish together in the Large Hadron Collider there's a chance the universe ends. Which no one would tell you with a straight face was remotely possible. But! It's never been done before, so, possible?

We're human beings. One of the things humans do is project. We do it all the time. To the entire animal kingdom, to the gods of our theologies, you get where I'm going with this?

Well now our gods have come to visit us and much like the gods of our fantasies, they're proxies, a mirror to the best and worst of ourselves. Because we still largely operate on fear, we have a camp of end-of-the-worlders who believe these gods are ready to judge us guilty and murderbot us all.

Imagine if we were a species where fear was not the dominant feeling. We'd in all likelihood imagine transformers were harbingers of something else entirely.


Intelligent agents that don't share your worldview can be dangerous. We've seen this a hundred times (e.g. Cortés), so I don't think it's controversial or "not part of our model".

"The LLM research project might lead to intelligent agents that don't share our worldview" should not be too controversial either?

You could argue about the odds, but it's very explicitly the stated goal, and they came orders of magnitude closer than any before them, and most of the founders of the field are doing things like quitting their prestigious jobs or throwing billions of dollars of investor money in their investors' faces, things they have nothing to personally gain from, for the stated reason of trying to prevent this outcome. So we should probably defer to the experts and assume, say, at least 1% that it's a real risk? 10%? It's not fish in the LHC; it's part of one not unreasonable model of what might happen as a result of people spending billions of dollars trying to make it happen.

Edit: I see you've made an important edit. Are we still arguing whether it's possible that OpenAI will succeed in creating intelligent autonomous agents, or only about whether we should fear that eventuality?


> What's more, the missing ingredient in any of these AGI extinction scenarios is desire

Wasn't that point already refuted by the paperclip maximizer thought experiment long before LLMs became a thing?

I mean, we kind of already see the effects of autonomous agents on the world in the form of companies maximizing resource extraction for profit. I don't feel this is going to end well, either (tangent: I also don't feel that life is constantly getting better because of technology).

The missing ingredient for benevolent autonomous agents is a purpose, e.g. the well-being of humans.


> Wasn't that point already refuted by the paperclip maximizer thought experiment long before LLMs became a thing?

I mean, no, because Bostrom hypothesized that an AI given the task of maximizing paperclips would consume all life in its quest to produce paperclips but there's just so many assumptions in this chain of thought, it's not credible (to me).

Here's one portion of Bostrom's quote:

> The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off.

So Bostrom imagines the AI realizing? Ok, hold on there chief, there's about a few hundred priors you haven't addressed.

Better off without humans because humans might switch it off? So now we're talking about human feelings here. Emotions. Fear. There's my other comment down below that addresses that.

As long as we keep projecting our fears and emotions onto transformers we are no further along to understanding the extinction risk of LLMs than soothsayers rooting around in the entrails of dead animals are to understanding the outcome of battles.


I see your point, but the thought experiment still claims that AI risk does not require consciousness, "malicious intent" or a "breakout" of the machine in some sense, no?

All of the PCM scenario - including the part you quoted - is described as a means to the ulterior goal (paperclips). Self-preservation just happens to become a logical part of it. The preconditions for an existential risk are not explained by the thought experiment; it just says that there's no need to assume "conscious decisions" or consciousness for it, by providing a counterexample.

So you might disagree with this, but the PCM thought experiment does not assume pain, feelings etc in AI.

Of course this is only a thought experiment, and it assumes literal AGI.

I'm also not arguing that LLMs are AGI.

What I wanted to point out is already mentioned by many others: existential risk (or great harm) doesn't need a surprising "conscious decision", or consciousness.

I mean, machines can cause unpredictable harm already if not operated correctly, all without AI.


I can absolutely believe this. I'm interested in what BigMentalHealth /s feels about LLMs that can be trained off a knowledge base like CBT/DBT/IFS and other modalities that are basically defined and representable.

You can't insist on a monopoly where much of the population can't access you (cost/coverage) and then caveat it with: if they can't get help, they have to hurt, cause more destruction, or die.

Not crazy about the structure here but you get the meaning.


Honestly inclined to agree. Not to question people's ethics unduly, it sounds exactly like the same logic we use to "protect" drug users from drugs by destroying their lives in advance and inducting them into the criminal justice system as a manner of measured due course. Makes sense, yup yup yup. We know what's best for you.

ChatGPT doesn't do anything you can't already get by searching Brave or Kagi. Just a little more human-centric given its chat format.


I don’t think it’s controversial at all to say that drugs can be dangerous in a way that requires protecting users. You may not agree with carceral drug policies (as you don’t have to agree with every conceivable AI regulation), but would you argue that Purdue Pharma did nothing wrong?


We don't protect users tho. We make them hit the streets and subject themselves to wacky unregulated clandestine chemistry, or doctor shop to get what they want or need, exposing them to hefty criminal liability.

People using drugs the "wrong way" are bound but not protected, people using the "right way" are protected but not bound.


I know because I've been both, not so much these days

Edit: i, too, enjoy being protected, not bound as I like my drinks Stirred, not shaken


By that logic alcohol and tobacco gotta go wholesale. They are the most destructive and addictive, and it's not just because they are the only legal options. The mental dichotomy people make in differentiating alcohol/tobacco and DRUGS is asinine. They are the worst, and they point out how ridiculous the entire regime is. It's also annoying as fuck that there are moron cops setting medical and pharmaceutical manufacturing policy at the national or any level.


Might as well throw caffeine (coffee in particular) into that barrel as well, if we are talking about addictive substances many of us casually use daily.


The point is it's ridiculous to be wading in all this arbitrary nonsense. Why the heck else would Jefferson have been growing poppies at Monticello if he didn't intend for people to be able to grow and possess opium? That's some 20th century bullshit after they screwed the pooch with alcohol prohibition TWICE, no?



Are they in any way serving as a bottleneck to the widespread access to LLMs/"AI" for use in a self-determined self-therapeutic context?

If they are they need to fück off


No. They can act on safety concerns AND retain their market edge. I am just saying that firing Sam, sacrificing their edge with the resulting fallout, is not a good way of acting on safety concerns.


So either doomers are wrong about the technology, or we’re doomed. There’s no universe where doom is possible but for policy decisions and we aren’t doomed.


The Fermi paradox seems to point at us being doomed, why we don't see probes disassembling the universe is the part I'm still confused on.


"Acting on safety" is a continuum, not a binary thing. Same with market-edge.


> OpenAI’s commercial success

Doesn't it lose money on gpt-4 usage? Or at least on the chatgpt side? It reminds me of all the startups that price unsustainably low until they "win" then start really charging, then start to slowly decay.


Are you questioning GP's mythology with facts and properly-done research?


Please keep these Reddit meme responses on Reddit.


same logic was used in ww2


One is a weapon of unimaginable destructive power, the other is matrix multiplication.


Disingenuous in the extreme. If AI is just matrix multiplication, then a nuke is just lots of light emitted at once.


You've heard the term "The pen is mightier than the sword" right?

AI is its own damned pen.

All we're waiting for is some dumbass to give it terminal goals.


It's also the logic of nuclear weapons and MAD.


It was much more sensible in that case.

Here, it's not clear that two companies with a superintelligent AI is really any less bad than one.


We (including the OpenAI board) are trying to keep it at zero.


I wouldn't mind this linkage being a little more pointed.

How is growth, and thus being bigger and more in control of your own outcomes, linked to WW2?

If Germany had higher growth rate, then maybe they wouldn't have swung right wing?

Or are you saying that by 'appeasing' the 'growth' side, that is similar to WW2 appeasement strategy? So if we hamper growth, that gives more control?


It’s not growth of the economy, it’s growth of the successor to humanity…

Apples and oranges…


Ok. What is link to WW2?


Please explain.


"If we don't build the atomic bomb, the Germans will beat us to it."

(I assume)


that's just flawed logic. eventually, everyone will catch up regardless.



