
Hasn't OpenAI published all the key research results needed to reproduce ChatGPT? And made their model available to literally everyone? And contributed more than anyone else to AI alignment/safety?

To me it looks like nearly every other player, including open-source projects, is there for short-term fame and profit, while it's OpenAI that is playing the long game of AI alignment.



> As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems.

https://openai.com/blog/planning-for-agi-and-beyond


Well gosh, what could be a safer way to share than giving access to people with money? Since we live in a meritocracy, we're basically guaranteed that anybody with money to spend is virtuous, and vice versa.


Excuse me, I have to go to the store to get a refill for my sarcasm detector. Your one comment completely emptied the tank.


You're not wrong, but existential threats and possible extinction are in the future: maybe ten to fifteen years away if we're lucky.

Meanwhile, we don't get to play with their models right now. Obviously that's what we should be concerned about.


Among all the accepted threats to humanity's future, AI is one of the least well-founded at this point. We all grew up with this cautionary fiction. But unless you know something everyone else doesn't, the near-term existential threat of AI is relatively low.


What strong evidence of existential threat are you expecting to see before it's already too late to avoid catastrophe?


Once-in-a-century weather patterns happening multiple times per decade, that sort of thing.


That would make sense for climate change, but the context of this thread is a discussion about AI. Why would that be evidence of an existential threat from AI?


What I think they mean is: there are bigger tigers, and they're already in the house.

No sense wasting time stressing out about the cub at the zoo.


I do at least carry hope that climate change will have survivors in all but the worst-case scenarios. And I'm not sure which tiger will strike first.


Existential threat is a bit hyperbolic. Worst case scenario, lots of people might lose their jobs, and some will go hungry as we restructure our economy.

War with Russia is literally an existential threat.


Your worst-case scenario looks very close to my best-case scenario!


What strong evidence do you have that AGI (1) is possible in the foreseeable future, and (2) will be more of a threat than any random human?


Why should we need AGI for AI to be more of a threat than a random human? We already have assassination bots that allow people to be killed from across the globe. Add in your average sociopathic corporation and a government that has a track record of waging wars to protect its economic interests, and AGI becomes no more than a drop in that same bucket.


(You didn't respond with an answer to my question, which discourages me from answering yours! But I'll answer anyway in good faith, and I hope you will do the same.)

(1) Possible in the foreseeable future: The strongest evidence I have is the existence of humans.

I don't believe in magic or the immaterial human soul, so I conclude that human intelligence is in principle computable by an algorithm that could be implemented on a computer. While human wetware is very efficient, I don't think that such an algorithm would require vastly greater compute resources than we have available today.

Still, I used to think that the algorithm itself would be a very hard nut to crack. But that was back in the olden days, when it was widely believed that computers could not compete with humans at perception, poetry, music, artwork, or even the game of Go. Now AI is passing the Turing Test with flying colours, writing rap lyrics, drawing beautiful artwork and photorealistic images, and writing passable (if flawed) code.

Of course nobody has yet created AGI. But the gap between AI and AGI is gradually closing as breakthroughs are made. It increasingly seems to me that, while there are still some important un-cracked nuts among the hidden secrets of human thought, they are probably few and finite, less insurmountable than previously thought, and likely to yield to the resources being thrown at the problem.

(2) AGI will be more of a threat than any random human: I don't know what could count as "evidence" in your mind (see: the comment that you replied to), so I will present logical reasoning in its place.

AGI with median-human-level intelligence would be more of a threat than many humans, but less of a threat than humans like Putin. The reason AGI would be a greater threat than most humans is that humans are physically embodied, while AGI is electronic; we have established, if imperfect, security practices against humans, but none tested against AGI. Unlike humans, an AGI could:

- feasibly and instantaneously create fully-formed copies of itself, back itself up, and transmit itself remotely;

- improve its intrinsic mental capabilities by adding additional hardware;

- experiment with self-modification, given decent expertise at AI programming;

- evolve on timelines not inherently tied to a ~20-year maturity period;

- and, if it were interested in pursuing the extinction of the human race, potentially use methods that it might itself survive with moderate probability.

If the AGI is smarter than most humans, or smarter than all humans, then I would need strong evidence to believe it is not more of a threat than any random human.

And if an AGI can be made as smart as a human, I would be surprised if it could not be made smarter than the smartest human.


> Near term relatively low

Precisely. Above 1%, so in the realm of the possible, but definitely not above 50%, and probably not above 5%, in the next 10-15 years. My guesstimate is around 1-2%.

But expand the time horizon to the next 50 years and the cognitive fallacy of underestimating long-term progress kicks in. That's the timescale that actually produces scarily high existential risk given our current trajectory of progress.
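
As a back-of-the-envelope sketch of why the long horizon matters (the per-decade probabilities below are pure assumptions of mine, chosen only to reflect "small near-term risk that grows with continued progress", not estimates from any source):

    # Back-of-the-envelope only: how small per-decade risks compound over 50 years.
    # These per-decade probabilities are assumptions, not estimates from any source.
    per_decade = [0.02, 0.04, 0.08, 0.15, 0.25]

    p_safe = 1.0
    for p in per_decade:
        p_safe *= (1 - p)  # probability of getting through each successive decade

    print(f"P(catastrophe within 50 years) ~= {1 - p_safe:.2f}")  # ~0.45 with these assumed numbers

Even if each individual decade looks survivable on its own, the cumulative figure is what a 50-year horizon does to the risk.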



I can see AI being used in safety-critical systems like cars; it's already happened, and it has already killed people.


> You’re not wrong, but existential threats and possible extinction is in the future; maybe ten-fifteen years away if we’re lucky.

The threat from humans leveraging narrow control of AI for power over other humans is, by far, the greatest threat from AI over any timeframe.



