
Hopefully this is good news for Bing.


Well, if someone creates a benevolent AGI, then it should be good news for everyone.


[EDIT]: friendly -> non-friendly oops.

That's what seems so confusing about HN replies here. (Non-friendly) AGI is an extreme existential risk (depending on who you listen to).

I'm perfectly fine with rewarding the org that's responsible for researching friendly AGI to do it _right_ (extremely contingent on that last bit).


Well, I don't think I'd view any AGI that is an existential risk as "friendly".


What if friendliness is not a property of the technology, but of the use? With all the potential concerns of AGI, I think nuclear technology is a good analogue. It has great potential for peaceful use, but proliferation of nuclear weapons is a so-far inseparable problem of the technology. It's also relatively easy to use nuclear technology unsafely compared to safely.

The precedent for general intelligence is not good. The only example we know of (ourselves) is a known existential threat.


the thing is, nobody knows how to do that. it's not a money problem.


OpenAI is a research company, and that's what research is: working out how to do things we don't yet know how to do. Research requires money, so at one level it is a money problem.


but this is alchemy, isn't it? there isn't a theoretical framework from which we can even begin to suggest how to keep any "general intelligence" benign. good old fashioned research notwithstanding, a billion dollars is not about to change this. it reads more to me like an investment in azure (ie microsoft picking up some machine learning expertise to leverage in its future cloud services). that's not a judgement, and i'm sure lots of cool work will still come from this, given the strength of the team and the massive backing they have. it just smells funny.


Alchemy wasn't entirely wrong; it is indeed possible to turn lead into gold, it was just beyond the technology of the time: https://www.scientificamerican.com/article/fact-or-fiction-l....


Well, unlike alchemy there are some pretty good examples of intelligent agents around - some even involved in this project!


(no sarcasm)

You know, I can't prove that researchers being funded is the best way of figuring out how to do things, but I have a gut intuition that tells me that.

I'll look into it so that I'm not just blindly assuming that $$ ==> people ==> research ==> progress.

Thanks for the opportunity to reflect!


it really is an interesting subject to explore within the philosophy of science :)


You need to be able to test your designs and for that you need resources like AI accelerators.


It certainly is. Eventually Bing will automatically infer that it is not a good enough search engine and will destroy itself.



