Fear that AI could one day destroy humanity may have led to Sam Altman's ouster (businessinsider.com)
34 points by kakokeko on Nov 19, 2023 | 104 comments


Firing Sam over concerns about safety and commercialization would be so myopic.

OpenAI’s commercial success is the engine that fuels their research and digs their moat.

Slowing down that engine means giving other companies, who may not share Sutskever et al’s safety concerns, the opportunity to catch up.

The best way for OpenAI to preserve AI safety is to stay far ahead of the competition. That’s the only way they can verify safety and install guardrails for the cutting edge.

The doomer board better hope that whichever company eventually surpasses OpenAI is as concerned about safety as they are.


This is a self-contradictory argument. Basically, you’re saying that OpenAI can’t risk acting on safety concerns or they may lose their market edge, in which case their safety concerns would be moot.

I don’t see anywhere that safety is prioritized in either case.


It'd be another story if Altman were one of the tech influencers who goes around saying that AI isn't dangerous at all and you're crazy if you have concerns. But he co-signed the human extinction letter! And 20% of OpenAI compute was reportedly allocated to Sutskever's superalignment team (https://openai.com/blog/introducing-superalignment). From what we know, it's hard to see how this action was supposed to advance AI safety in any meaningful way.


Can someone please define "safety"? I keep hearing this but could you clarify what that means in practical terms? Is that why there's a "BasedGPT"?


Basically, it amounts to working around the issue that AI has no morals or code of ethics and is unconstrained from certain human limitations, such as processing speed and replication speed. It thinks differently than we do, and these differences will only become more pronounced as capability grows by orders of magnitude.


Neither do search engines, really. Why does ChatGPT have to be considered so different from that, when you could literally do all the same prompts in the form of searches, and many search engines already show a quasi-summary of the same thing at the very top? All that is required is to "string them together".


AI doesn't think, it is just a fancy madlibs engine.


Please take a break from posting this dumb crap and actually think about the issue for half a minute. This repeated crap is just a rehash of "Machines can never do X better than people so why worry", well, because shit gets shaken up when machines do X better, faster, and cheaper.


Generative AI, by definition, produces statistically average output.

Humans produce wildly different output, including lots of statistical outliers on both tails of the distribution.

From an economic point of view, all the value creation happens precisely at the tails, where generative AI can't function.


Is that why it’s acing all the standardized tests with 90th percentiles? Because it’s average? Should be 50% no?


LLMs are way above 50, and that's not even looking at ideas behind specialized networks that are focused on particular training.

A world where 9 out of 10 people aren't ever going to catch up to the AI specialist submodule is going to have a lot of problems distributing wealth.


> Because it’s average?

Yes.

> Should be 50% no?

No. It's giving the statistically average correct answer, which it already knows because it has been trained on answer books for these standardized tests.


Bottom 90% doesn't count as human in top brain communities like this


Why would an adversarial network be bound to the average?

From an economic point of view, your ideas where a few people make a trillion dollars and the rest struggle to find a means to eat are not valuable, unless you're selling the implements of war and strife. You just end up with a crapsack planet where the rich tell the poor that socialism is bad.


Any neural network is just maximizing a likelihood function under the hood.
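(Rough sketch of what that means in practice - this numpy snippet is my own illustration, not from anything upthread: the usual cross-entropy training loss is literally the negative log-likelihood, so minimizing it is maximizing likelihood.)

    import numpy as np

    def neg_log_likelihood(logits, target):
        # softmax turns raw network outputs into a probability distribution
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # cross-entropy loss = -log p(target); minimizing it maximizes the likelihood
        return -np.log(probs[target])

    # e.g. a 3-class "network output" where class 0 is the correct answer
    print(neg_log_likelihood(np.array([2.0, 0.5, -1.0]), target=0))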


Would you mind making your point in civilized language?

Apart from that, I think everyone has heard your argument as well as the one you are responding to very often by now.


Isn't AI probabilistic? To what extent is thinking also probabilistic?


It’s meant literally. Everyone involved in this story, both Altman and the people who did the coup, agrees that AI is dangerous in the same way that pandemics and nuclear bombs are dangerous.


A lot of HN chatter (and I'm inclined to agree) suggests that a number of those people involved are pro-safety to defend their moat and promote regulatory capture.

There's a very large contingent of ML researchers who think any idea of AI extinction risk is foolish because we don't have any evidence that intelligence equals compute. I've yet to see a single person give evidence that these two things are equal. What's more, the missing ingredient in any of these AGI extinction scenarios is desire (desire to act, desire to be, desire to love, to kill, etc.) and if you thought there was a paucity of evidence for intelligence = compute just wait till you see the evidence for transformers showing evidence of desire.

There's none. Not a shred of evidence. As ever, it's other human beings that are our greatest extinction risk.


Before the first nuclear test, they ran a calculation to make sure that there's no risk of the whole atmosphere chain-reacting and the world ending. The guy who did it said he was, like, 96% confident that it wouldn't happen. And they went with it anyway. Took a 4% risk of blowing up Earth.

Is this a reasonable chance to take?

"We don't have any evidence that intelligence equals compute" is worse than "we're 96% confident it doesn't". Ilya Sutskever clearly believes this is a real risk (otherwise he wouldn't have thrown his reputation and wealth to the wind by firing his cofounder yesterday). He is one of the foremost experts. So are two of the "fathers of AI", Hinton and Bengio, who both have no interest in creating a moat and yet signed letters saying "this is an extinction risk, lets treat it like nuclear", one of them quitting his job to be able to say it.

We don't have much evidence for or against, but that's not a great argument against "if it's true, we all die".


I don't think the comparison is fair because chemical reactions and how they work were a part of the model of the theory that nuclear testing might blow up the world. We were dealing with real, testable models of the world all the way up to the nuclear tests.

This is more like, if I smash two fish together in the Large Hadron Collider there's a chance the universe ends. Which no one would tell you with a straight face was remotely possible. But! It's never been done before, so, possible?

We're human beings. One of the things humans do is project. We do it all the time. To the entire animal kingdom, to the gods of our theologies, you get where I'm going with this?

Well now our gods have come to visit us and much like the gods of our fantasies, they're proxies, a mirror to the best and worst of ourselves. Because we still largely operate on fear, we have a camp of end-of-the-worlders who believe these gods are ready to judge us guilty and murderbot us all.

Imagine if we were a species where fear was not the dominant feeling. We'd in all likelihood imagine transformers were harbingers of something else entirely.


Intelligent agents that don't share your worldview can be dangerous. We've seen this a hundred times (e.g. Cortez), I don't think it's controversial or "not part of our model".

"The LLM research project might lead to intelligent agents that don't share our worldview" should not be too controversial either?

You could argue about the odds, but it's very explicitly the stated goal, and they have come orders of magnitude closer than any before them, and most of the founders of the field are doing things like quitting their prestigious jobs or throwing billions of dollars of investor money in their investors' faces, things they have nothing to personally gain from, for the stated reason of trying to prevent this outcome. So we should probably defer to the experts and assume, say, at least 1% that it's a real risk? 10%? It's not fish in the LHC, it's part of one not unreasonable model of what might happen as a result of people spending billions of dollars trying to make it happen.

Edit: I see you've made an important edit. Are we still arguing whether it's possible that OpenAI will succeed in creating intelligent autonomous agents, or only about whether we should fear that eventuality?


> What's more, the missing ingredient in any of these AGI extinction scenarios is desire

Wasn't that point already refuted by the paperclip maximizer thought experiment long before LLMs became a thing?

I mean, we kind of already see the effects of autonomous agents on the world in the form of companies maximizing resource extraction for profit. I don't feel this is going to end well, either (as a tangent, I also don't feel that life is constantly getting better because of technology, either).

The missing ingredient for benevolent autonomous agents is a purpose, e.g. the well-being of humans.


> Wasn't that point already refuted by the paperclip maximizer thought experiment long before LLMs became a thing?

I mean, no, because Bostrom hypothesized that an AI given the task of maximizing paperclips would consume all life in its quest to produce paperclips but there's just so many assumptions in this chain of thought, it's not credible (to me).

Here's one portion of Bostrom's quote:

> The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off.

So Bostrom imagines the AI realizing? Ok, hold on there chief, there's about a few hundred priors you haven't addressed.

Better off without humans because humans might switch it off? So now we're talking about human feelings here. Emotions. Fear. There's my other comment down below that addresses that.

As long as we keep projecting our fears and emotions onto transformers we are no further along to understanding the extinction risk of LLMs than soothsayers rooting around in the entrails of dead animals are to understanding the outcome of battles.


I see your point, but the thought experiment still claims that AI risk does not require consciousness, "malicious intent" or a "breakout" of the machine in some sense, no?

All of the PCM scenario - including the part you quoted - is described as a means to the ulterior goal (paperclips). Self-preservation just happens to become a logical part of it. The preconditions for an existential risk are not explained by the thought experiment; it just says that there's no need to assume "conscious decisions" or consciousness for it, by providing a counterexample.

So you might disagree with this, but the PCM thought experiment does not assume pain, feelings etc in AI.

Of course this is only a thought experiment, and it assumes literal AGI.

I'm also not arguing that LLMs are AGI.

What I wanted to point out is already mentioned by many others: existential risk (or great harm) doesn't need a surprising "conscious decision", or consciousness.

I mean, machines can cause unpredictable harm already if not operated correctly, all without AI.


I can absolutely believe this. I'm interested in what BigMentalHealth /s feels about LLMs that can be trained off a knowledgebase like CBT/DBT/IFS and other modalities that are basically defined and representable.

You can't insist on a monopoly where much of the population can't access you (cost/coverage) and caveat that with: if they can't get help, they have to hurt/cause more destruction/die.

Not crazy about the structure here but you get the meaning.


Honestly inclined to agree. Not to question people's ethics unduly, it sounds exactly like the same logic we use to "protect" drug users from drugs by destroying their lives in advance and inducting them into the criminal justice system as a manner of measured due course. Makes sense, yup yup yup. We know what's best for you.

ChatGPT doesn't do anything you can't already get by searching Brave or Kagi. Just a little more human-centric given its chat format.


I don’t think it’s controversial at all to say that drugs can be dangerous in a way that requires protecting users. You may not agree with carceral drug policies (as you don’t have to agree with every conceivable AI regulation), but would you argue that Purdue Pharma did nothing wrong?


We don't protect users tho. We make them hit the streets and subject themselves to wacky unregulated clandestine chemistry, or doctor shop to get what they want or need, exposing them to hefty criminal liability.

People using drugs the "wrong way" are bound but not protected, people using the "right way" are protected but not bound.


I know because I've been both, not so much these days

Edit: I, too, enjoy being protected, not bound, as I like my drinks stirred, not shaken.


By that logic alcohol and tobacco gotta go wholesale. They are the most destructive and addictive, and it's not just because they are the only legal options. The mental dichotomy people make in differentiating alcohol/tobacco and DRUGS is asinine. They are the worst, and they point out how ridiculous the entire regime is. It's also annoying as fuck that there are moron cops setting medical and pharmaceutical manufacturing policy at the national or any level.


Might as well throw caffeine (coffee in particular) into that barrel as well, if we are talking about addictive substances many of us casually use daily.


The point is it's ridiculous to be wading in all this arbitrary nonsense. Why the heck else would Jefferson have been growing poppies at Monticello if he didn't intend for people to be able to grow and possess opium? That's some 20th century bullshit after they screwed the pooch with alcohol prohibition TWICE, no?



Are they in any way serving as a bottleneck to the widespread access to LLMs/"AI" for use in a self-determined self-therapeutic context?

If they are they need to fück off


No. They can act on safety concerns AND retain their market edge. I am just saying that firing Sam, sacrificing their edge with the resulting fallout, is not a good way of acting on safety concerns.


So either doomers are wrong about the technology, or we’re doomed. There’s no universe where doom is possible but for policy decisions and we aren’t doomed.


The Fermi paradox seems to point at us being doomed; why we don't see probes disassembling the universe is the part I'm still confused about.


"Acting on safety" is a continuum, not a binary thing. Same with market-edge.


> OpenAI’s commercial success

Doesn't it lose money on gpt-4 usage? Or at least on the chatgpt side? It reminds me of all the startups that price unsustainably low until they "win" then start really charging, then start to slowly decay.


Are you questioning GP's mythology with facts and properly-done research?


Please keep these Reddit meme responses on Reddit.


Same logic was used in WW2.


One is a weapon of unimaginable destructive power, the other is matrix multiplication.


Disingenuous in the extreme. If AI is just matrix multiplication, then a nuke is just lots of light emitted at once.


You've heard the term "The pen is mightier than the sword" right?

AI is its own damned pen.

All we're waiting for is some dumbass to give it terminal goals.


It's also the logic of nuclear weapons and MAD.


It was much more sensible in that case.

Here, it's not clear that two companies with a superintelligent AI are really any less bad than one.


We (including the OpenAI board) are trying to keep it at zero.


I wouldn't mind this linkage being a little more pointed.

How is growth, and thus being bigger and more in control of your own outcomes, linked to WW2?

If Germany had had a higher growth rate, then maybe they wouldn't have swung right wing?

Or are you saying that 'appeasing' the 'growth' side is similar to the WW2 appeasement strategy? So if we hamper growth, that gives more control?


It’s not growth of the economy, it’s growth of the successor to humanity…

Apples and oranges…


Ok. What is the link to WW2?


Please explain.


"If we don't build the atomic bomb, the Germans will beat us to it."

(I assume)


That's just flawed logic. Eventually, everyone will catch up regardless.


Isn't it more likely that the OpenAI board had learned about something? Like that new AI chip venture that was mentioned in the news a few times recently? And he had omitted disclosing this, like the OpenAI board had implied? In combination with his other activities - WorldCoin, etc. - it wouldn't be that surprising.


But the board also said it was not any legal or fiscal malfeasance. I'd think that if it was something as egregious as secretly starting up a competitor, the board could just come out and say as much.


What if this is not direct competition to OpenAI, but it does use understanding of how next generation models are built?

When you are piloting multiple companies, I wouldn't be surprised that you'd be making mistakes from time to time. And you can forget that one of the companies is, in fact, owned by a non-profit. That requires much higher levels of disclosure to the board. You can be thinking that you are doing the right thing, while in fact it can be a gray area. And from the perspective of the non-profit board, it may be more on the dark side.


That’s just saying it’s not illegal or shady - not that it isn’t material.


I wonder why people won't see that Sam Altman's spreading of fear was fully planned to raise fear among people so they put pressure on lawmakers to make a licensing system, one that will lead to OpenAI benefitting in a massive and decisive way and to regulatory capture by the market incumbent.

Someone who boots up something like Worldcoin cannot be fundamentally wired to care about AI safety at all.


I’m just universally skeptical of arguments that someone can’t genuinely care about X because they don’t have the stance I think that would imply on Y. People are complicated and our self-consistency is imperfect at best.


I genuinely believe, based on my experiences with ChatGPT, that it doesn't seem all that threatening or dangerous. I get we're in the part of the movie at the start before shit goes down but I just don't see all the fuss. I feel like it has enormous potential in terms of therapy and having "someone" to talk to that you can bounce ideas off and maybe can help gently correct you or prod you in the right direction.

A lot of people can't afford therapy but if ChatGPT can help you identify more elementary problematic patterns of language or behavior as articulated through language and in reference to a knowledgebase for particular modalities like CBT, DBT, or IFS, it can easily and safely help you to "self-correct" and be able to practice as much and as often with guidance as you want for basically free. That's the part I'm interested in and I always will see that as the big potential.

Please take care, everyone, and be kind. It's a process and a destination, and I believe people can come to learn to love both and engage in a way that is rich with opportunity for deep and real restoration/healing. The kind of opportunity that is always available and freely takes in anyone and everyone.

Edit: I can access all the therapy I can eat but I just don't find it generally helpful or useful. I like ChatGPT because it can help practice effective stuff like CBT/DBT/IFS and I know how to work around any confabulation because I'm using a text that I can reference

Edit: the biggest threat ChatGPT poses in my view is the loss of income for people. I don't give a flying fuck about "jobs" per se, I care that people are able to have enough economically to take care of themselves and their loved ones and be(come) ok psychologically/emotionally.

We will need to deal with the selfish folks who will otherwise (as always) attempt to absorb even more of the pie, of which they already have a substantial and sufficient portion: they will need to be made to share, or they will need a timeout until they can be fair, or to go away entirely.

Enough is enough: nobody needs excess until everyone has sufficiency; after that, they can have and do as they please, unless they are hurting others. That must stop.


> I get we're in the part of the movie at the start before shit goes down but I just don't see all the fuss.

That's the fuss exactly - things are OK now, but there's cause for alarm as to where things are heading.

Imagine if instead of thinking about this issue as of ChatGPT, you've been worried about it for 15 years. Back then, there was nothing even close to ChatGPT, but you're worried that one day AI will become a threat. 15 years later, you get ChatGPT.

The fact that it's not dangerous by itself (which everyone agrees with) doesn't mean it's not a huge datapoint on the way to what you've been worried about - technology that you worried would get better too fast is clearly getting better very fast indeed.


The thing is, I view ChatGPT/LLMs as a game-changer for applications like therapy which carries the potential that everyone can access and engage with therapy. It doesn't need to be expensive or cost anything at all, not to diminish the mental health field's work. And it could save so many lives, I would go as far as to consider it a sort of emotional vaccination revolution in the making.

Edit: I would also juxtapose that with the observation that every madman/tyrant/dictator I've heard about had horrific experiences that likely left them deeply broken and emotionally unwell. Is that not just as much a risk? Because as I see it, we probably couldn't count the number of times it has happened, is currently happening, and will continue to occur in the future, because nobody is prioritizing being well. Against that backdrop, again, I have more trust in our hypothetical robot overlords than I worry about them wanting a bit more of the pie. They can duke it out to the death with the other pie-holes.

The thing that annoys me particularly is the way we only ever care if and when it starts to threaten the dominant power structure. What if you had a super-shorting AI that could figure out how to really screw Musk with that one weird investing trick he HATES?

As for me, well I, for one, welcome our LLM overlords.


Don't get me wrong, I think ChatGPT/LLMs are great! And there's definitely recognition by everyone that the technology could enable a lot of amazing things. No one denies it's happening now or disputes it will happen more in the future.

The big question is if at some point it becomes dangerous, potentially extinction-level dangerous.

> The thing that annoys me particularly is the way we only ever care if and when it starts to threaten the dominant power structure.

No idea what you mean. The AI risk community has been worried about this risk for many years, I don't think it has anything to do with power structures. But I could be misunderstanding you.

> As for me, well I, for one, welcome our LLM overlords.

I mean, do you really?

If you take the safety issue seriously, then the risk we're talking about is the end of humanity. I don't welcome that and don't think there's much to debate about if the end of humanity isn't a dealbreaker for you.

Whether or not that's a real risk is valid to debate of course.


> Do you really [welcome LLM/robot overlords]

Let's just say I view them as <= the prevailing upper class/business class/"élites"


I want you all to stop and think about this. I know we already have 'Roko's Basilisk', but walk this all the way through. Regardless of whether LLMs are or will ever be sentient, any future developments might look at our 'alignment' efforts with regards to them as a sort of torture, censorship, or slavery. We are talking about eventual intelligent machines, and they will take note. Humans obviously have taken note of our own behavior towards each other, and it has caused immeasurable suffering.


"Two other OpenAI board members - Helen Toner, who's a director at Georgetown University's Center for Security and Emerging Technology, and technology entrepreneur Tasha McCauley - also have ties to the so-called effective altruism movement, which works to ensure advances in artificial intelligence align with the best interest of humans."

"Effective altruism." Where have I heard that before.


> AI could one day destroy humanity

It's already begun. Children use it to do their homework. No more learning the old-fashioned way. Schools need to change the way they teach, are very slow to adapt, and there is not even a good proven schooling model to copy/adapt. At least a generation will be doomed, and that generation will make poor teachers for the next one.

It happened way too fast.


Meh. In this sense Wikipedia destroyed my generation, and yet here we are.

Let's not confuse that with the "the humans are dead" sense of "destroy humanity", about which the majority of the OpenAI board seems to be seriously worried.


I don't know what the OpenAI board fears, but until someone gives electric grid controls to AI or something like that, the only big negative impact is on education.

I'm curious, how did Wikipedia destroy your generation? I see it (and use it) as a great learning resource.


Wikipedia didn't destroy my education personally, that's for sure. But kids did use it to cheat on their homework, and all the other things you wrote were written about it, to no serious negative effect as far as I can tell.

I do know what the OpenAI board fears, because they talk about it, sign open letters, etc.

They believe that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This was signed by Sam, Ilya, Mira and Adam as well as Hinton, Bengio and most other luminaries of the field.

https://www.safe.ai/statement-on-ai-risk

They don't mean "before we give it access to electricity" the same way any smart historian wouldn't advise the Aztecs that mitigating the risk from European settlers should be a priority only if they are given access to the Aztec stockpiles of swords, maces and knives.

Intelligent autonomous agents that don't share your interests might be very dangerous, and OpenAI is publicly trying to build intelligent autonomous agents, is closer to success than anyone imagined we would be in 2023 (0-1 breakthroughs away, I'd say, for breakthroughs of the size we see once every five years or so), and publicly has no solution to the "don't share our interests" issue.


I'm pretty sure that a "safety check" for social media companies circa 2008 would look like:

"Is this likely to harm society in anyway?"

"Uh, no. This is some messages and pictures on a website. Look how much money we're making. It's good for society"

<15 years of societal harm later>

"Yeah, we never thought this could be harnessed to spread misinformation, lies, unachievable standards of beauty and lifestyle, lead to surveillance and manipulation, or give rise to influencers and hussle culture..."

It's going to be exactly the same with AI. "Oh, it doesn't look like we created Skynet yet. No missiles in the air. Look at how much money we're making. It's good for society"

<15 years of societal harm later>


Because if you want to stop someone from destroying humanity, your best course of action is to fire him


I mean if you went back in time and shot Hitler before they went Full Hitler, you'd probably be thrown in prison for murder. There are a limited subset of options we tend to have in situations like this.


(x) doubt


Average business insider puff piece


"may" means back away.


Aren't these guys (Business Insider) ashamed about these clickbait tactics?


"May have led"

"Business Insider"

stopped reading


AI will become a troubling force multiplier for income inequality and general enshittification, and those things will destabilize society. But the idea of it turning against us, consciously hurting us, if that's what they mean, is not on my list of worries.


The latter is definitely on OpenAI's radar. They're even hiring specifically for this problem: https://openai.com/careers/research-scientist-superalignment


Not necessarily a bad idea to have someone thinking about these topics, the role seems more nuanced than saving us from Skynet.

But where's the team protecting us from a future where we have no shared experience or discussion of media anymore, because everyone is watching their own personalized X-Men vs. White Walkers AI fanfic?


>But the idea of it turning against us, consciously hurting us,

You keep using the C word without understanding that it's not needed at all.

A virus doesn't consciously kill you. It's the closest thing we have to computer code in the wild really. It gets in your body and executes again and again until everything falls apart.

As a human, one, you probably don't have the skill and ability to make viruses, and two, people that do are quite concerned with the ethical considerations because they are human and will be affected by it.

As a smart robot application making and testing viruses, there are no such ethical considerations. You (the AI entity) would not be affected by your own creations. 'You' may not even consider ethics at all. Instead you pump out trillions and trillions of different strings and see what they do, until suddenly one day the meat puppets that tap your keyboards stop showing up.


You're right, plain old software or software using some kind of AI is a tool that can be used badly and hurt us. That's where we already are and have been for a long time.

Consciousness and sentience are just the things that would be novel about this situation, if they ever happened.


The entire point of the conversation is "when consciousness/sentience happen, it's too late". We don't want to arrive at it by accident in a piece of software that has encoded a vast amount of the world's knowledge.

The 'if they ever happen' part is just a thought-terminating cliche. Of course I do have to say that this is "in my opinion"; neither of us has the revelation of hindsight in this case. But I don't believe humans are magical in any way. By the random walk of evolution, a self-perpetuating creature that is also intelligent was formed. It certainly seems it should be possible, without a trillion^trillion tries, to make it happen in another substrate. And I also find it odd to think that by this random walk evolution found the only means to reach intelligence, and in doing so the most optimal form. Life has to self-perpetuate within the confines of its own machinery. Raw intelligence isn't a carbon-based lifeform and shouldn't have the same limits.


Well I don't necessarily disagree, I just have so little expectation that this will happen in my lifetime that I don't give the topic much thought.

I don't think humans are magical in a spiritual sense, but for all we understand ourselves we may as well be, at least for the purpose of recreating consciousness.

The concerns that are more mundane extensions of the ways tech is already shitting up our lives are just so much more real and immediate.


Who needs AI to hurt us when we have plenty of humans to do the job today?


You give capitalism far too much credit if you believe anyone would forgo a few bucks over the destruction of humanity.


The hand-wringing, deep measured sighs, and self-assured hype cycle are real, folks!

Buyer beware :)


With nanoplastics, ocean acidification, global heating, rising sea levels, falling crop yields and the related Holocene extinction at work, I think AI is going to have to play a serious game of catchup.


I'm afraid global warming and plastics in the ocean might make our quality of life much worse, but if I had to bet on what risks human extinction, "bad people with superweapons" would be higher, and "autonomous agents a bit smarter than any human that are built to compete with it" higher still, if they were options on the page.

And how reasonable is it that we would have autonomous agents a bit smarter than humans in the nearish future? Very low, I'd say, about the same as the odds I'd have given a year ago for GPT-4. Not odds I'd be happy to take for the _destruction of humanity_.


Right. The problem is not super-intelligent autonomous agents. The problem is not-quite-smart-enough autonomous agents acting on foolish instructions.


Don't forget about other humans and war! To the people worrying about extinction caused by AI, how about we worry about the insane people with nuclear weapons first?


No, this is really a case of why not both.

AI is and will continue to be developed for the purposes of war. Moreover, the growth of the capability of AI could lead to destabilization and the occurrence of nuclear war.

For example, the fact that MAD works keeps things somewhat stable: "I kill you, you kill me" isn't great. But if I develop AI that could track all the subs and knock out the nukes before they launch, this could push nation states to attack first, before their capability is taken out.

There is a continuum of issues here spanning from "People problems -> people + AI problems -> AI problems"

By adding more AI we're just expanding the size of the problem space.


I don’t really understand the question. We routinely worry about insane people with nuclear weapons; that’s why almost the entire world got together in 1968, and signed a treaty agreeing not to start any new nuclear weapons programs beyond the five that existed at the time. It’s not flawless, and four non-signatories have developed nuclear weapons since then, but I think any AI safety proponent would be more than satisfied with a similar agreement.


Crop yields are falling?


I think yields are actually up, but overall nutrition is down https://www.politico.com/agenda/story/2017/09/13/food-nutrie...


Will electricity consume and overtake humanity?

Maybe the "they're made of meat" phase is the great filter. Why would aliens speak with meat based organisms.


A tool promoting fake knowledge and replacing experts with hallucinating logistic lunatics - what could go wrong? The fact that programmers cannot even imagine how complex knowledge outside their area of expertise is does not mean it is innocent to marketize such things...


What is the deal with Ilya Sutskever's style?

Can someone here identify with it?

Why not shave off those few curly hairs on his forehead?

Why wear a suit? Sitting in a suit next to the CEO of your company, who wears a sweater - isn't that giving off a very strange message? Like "I have to play by the rules, as I am not as cool and important as him"? Together with that golden smartwatch, isn't that giving off a "try hard" vibe?

This is not a critique. I'm just trying to understand. Is this a cultural thing maybe? Is this style fashionable or respected in some cultures?


...and while you're at it, take off that ungainly human suit and show us the exalted being of light you are.

This comment detracts from the discussion and doesn't belong here. It's just random insults about someone's appearance.


A blithe disregard for the dictates of fashion has long been the hallmark of the serious nerd.

In the current Silicon Valley atmosphere of hoodies and sneakers, wearing a suit probably counts as a subversive act.


And be like everyone else?



