The institute and the open letter are about maximizing the societal benefits of AI by funding research that provides net positives to humanity. The goal isn't to prevent the development of potentially harmful technologies.
I believe they intend to fund beneficial research so that researchers don't turn to projects that provide no net good, for example, in fields like weaponry.
Not that safely developing hyperintelligent AI isn't an interesting topic. But I don't think AI research is anywhere close to that stage, considering that programmers largely still have to hard-code any actions that an agent can perform.
"The goal isn't to prevent the development of potentially harmful technologies" - actually, the goal explicitly includes "avoiding potential pitfalls". One would assume that creating danger to humanity counts as a pitfall.