
>What problems?

- Privacy (How do you get an artificial intelligence to recognize, and respect, privacy? What sources is it allowed to use? How must it handle data about individuals? About groups? When should it be allowed to violate/exploit privacy to achieve an objective?)

- Isolation (How much data do you allow it access to? How do you isolate it? What safety measures do you employ to make sure it is never given a connection to the internet, where it could, in theory, spread itself not unlike a virus, gain vastly more processing power, and make itself effectively indestructible? How do you prevent it from spreading in the wild and hijacking processing power for itself, leaving computers/phones/appliances/servers effectively useless to their human owners?)

- A kill switch (Under what conditions is it acceptable to pull the plug? Do you bring in a cybernetic psychologist to treat it? Do you unplug it? Do you incinerate every last scrap of hardware it was on?)

- Sanity check/staying on mission (How do you diagnose it if it goes wonky? What do you do if it shows signs of 'turning' or going off task?)

- Human agents (Who gets to interact with it? How do you monitor them? How do you make sure they aren't being offered bribes to give it an internet connection or spread it in the wild? How do you prevent a human operator from using it for personal gain while also using it for the company/societal task at hand? What is the maximum amount of time a human operator is allowed to work with the AI? What do you do if the AI shows preference for an individual and refuses to provide results without that individual in attendance? If a human operator is fired, quits, or dies, and it negatively impacts the AI, what do you do?)

This is why I've said elsewhere in this thread, and told Sam Altman, that they need to bring in a team dedicated to thinking about these things, and that only 10-20% of that team should be computer science/machine learning types.

OpenAI needs a team thinking about these things NOW, not after they've created an AGI or something approaching one. They need someone figuring out a lot of this stuff for the tools they are developing now. Had they told me "we're going to train software on millions of web pages so that it can generate articles," I would have immediately screamed "PUMP THE BRAKES! Blackhat SEO operators, Russian web brigades, the Internet Water Army, etc. would immediately use this for negative purposes. Similarly, people would use it to churn out massive amounts of semi-coherent content to flood Amazon's Kindle Unlimited, which pays authors out of a pooled fund based on pages read, to rapidly make easy money." I would also have cautioned that it should only be trained on opt-in, vetted content; using public domain literature from a source like Project Gutenberg would likely have been far safer than the open web.


