What if friendliness is not a property of the technology, but of the use? With all the potential concerns of AGI, I think nuclear technology is a good analogue. It has great potential for peaceful use, but proliferation of nuclear weapons is a so-far inseparable problem of the technology. It's also far easier to use nuclear technology unsafely than safely.
The precedent for general intelligence is not good. The only example we know of (ourselves) is a known existential threat.
OpenAI is a research company - that's what research is, working out how to do things we don't know how to do. Research requires some money so at one level it is a money problem.
but this is alchemy isn't it? there isn't even a theoretical framework from which we can begin to suggest how to keep any "general intelligence" benign. good old fashioned research notwithstanding, a billion dollars is not about to change this. it reads more to me like this is an investment in azure (ie microsoft picking up some machine learning expertise to leverage in its future cloud services). that's not a judgement, and i'm sure lots of cool work will still come from this, given the strength of the team and massive backing they have. it just smells funny.
I agree. We've not had well-funded research facilities like this one since Bell Labs. Those were amazing days in which we saw a lot of breakthroughs. I wish companies like Microsoft and the rest would invest more in external research institutes.
I often think that if I were a billionaire, I'd rather spend hundreds of millions on some cool R&D projects than have some 100-meter boat that one uses only a few times a year. I could at least walk around in 20 or so years and say "I funded this" rather than pointing at a rusting boat nobody will ever care about.
In fact, this tradition of rich people founding universities and research is nothing new. Stanford University was founded by a couple who said "The children of California shall be our children" after their child died. Andrew Carnegie founded the Carnegie Technical Schools, and John Harvard donated money and his library to a college founded two years earlier.
Allen donated about $2 billion to charitable causes [1] when he was alive. This included several whimsical things that weren't research, like the Experience Music Project and the Museum of Science Fiction. I believe he spent relatively far more on luxuries he liked, including the world's most expensive yacht(s), a fleet of private jets, mansions around the world, private music concerts, and a few sports teams here and there.
> This included several whimsical stuff that wasn't research like Experience Music Project and Museum of Science Fiction.
While not research, those things can have profound impacts on people. Several years ago a Star Wars exhibit came to the Indiana State Museum here in Indianapolis. It had an entire section dedicated to prosthetic devices, both in the film and in real life. In one of the video segments playing next to props from the film and real prosthetic devices, one of the inventors of the real technology talked about how watching the film version led directly to him pursuing his career and working on various prosthetic devices, trying to make them a reality.
These sorts of experiences can shape the creative process for one or more individuals in ways that end up having far greater effects on society than active research.
That’s a strange sentiment for a thread on OpenAI, considering it is one of many startups founded by a guy who decided to take the millions from his sale of PayPal and fund cool R&D projects like spaceships, electric cars, solar power, AI, and brain-machine interfaces. Good thing Elon Musk didn’t buy a boat, I guess.
> I often think that if I were a billionaire,I'd rather spend hundreds of millions on some cool R&D projects rather than having some 100 meter boat that one uses a only a few times a year.
Billionaires buy cars and boats because they're stores of value. For instance, a McLaren from the '90s is worth more today than when it was sold.
This article is more than five years old, so I'll let it speak for itself:
> This shows that in the 12 months to the end of June the value of classic cars as a whole was up by 28%, which compared with a rise of 12% for the FTSE-100 index of leading shares and a 23% slump in the price of gold.
The research that OpenAI’s doing is groundbreaking and the results are often beyond state-of-the-art. I aim to work in one of your research teams sometime!
Watch the Kool-Aid intake and you'll be just fine. Dreams are great and an absolute necessity for success but create your own. Don't buy into everything you hear, especially Elon Musk talking about Artificial General Intelligence.
Oh, I'm well aware of the hype around AGI. My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach. Nevertheless, I would like to work on more pragmatic goals, like improving the current state-of-the-art language models and text generation networks. I'm actually starting by reimplementing Seq2Seq as described by Quoc Le et al.[1] for text summarization[2] (this code is extremely messy but it'll get better soon). It's been interesting to learn about word embeddings, RNNs and LSTMs, and data processing within the field of Natural Language Processing. Any tips on how to get up to speed within this field would be helpful, as I'm trying to get into research labs doing similar work at my university.
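Since the comment above mentions getting up to speed on word embeddings, RNNs, and LSTMs, here is a minimal, illustrative sketch of a single LSTM cell forward pass in NumPy. All sizes, weights, and the toy 5-token sequence are made up for demonstration; a real seq2seq model like the one linked would use a framework with trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4*H, H+D): stacked rows for the
    input, forget, and output gates plus the candidate cell state."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

# Toy usage: embedding dim D=8, hidden size H=16, random weights
rng = np.random.default_rng(0)
D, H = 8, 16
W = rng.normal(scale=0.1, size=(4 * H, H + D))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for token_embedding in rng.normal(size=(5, D)):  # a 5-token "sentence"
    h, c = lstm_cell(token_embedding, h, c, W, b)
print(h.shape)  # prints (16,)
```

An encoder in a seq2seq model runs a loop like this over the source tokens and hands the final `(h, c)` to the decoder as its starting state.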
AGI is not something unnatural that could never be attained. If biological systems can somehow attain it, there is no reason other kinds of man-made system cannot attain it.
The first main issue is that of compute capacity.
The human brain has the equivalent of at least 30 TFLOPS of computing power, and this estimate is very likely off by two orders of magnitude.
Assume that simulating 1 synapse somehow takes only 1 transistor (a gross underestimate). To simulate the number of synapses in a single human brain, you would then need as many transistors as in 10,000 NVIDIA V100 GPUs, one of the largest mass-produced silicon chips!
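The comment's GPU count can be checked with quick back-of-the-envelope arithmetic; the synapse and transistor figures below are rough, commonly cited estimates (synapse counts in particular range from about 1e14 to 1e15), not exact numbers:

```python
# Rough assumed figures: ~1.5e14 synapses in a human brain,
# ~2.11e10 (21.1 billion) transistors in one NVIDIA V100.
synapses = 1.5e14
transistors_per_v100 = 2.11e10

# Under the gross simplification of 1 transistor per synapse:
gpus_needed = synapses / transistors_per_v100
print(round(gpus_needed))  # on the order of 10^4, roughly the "10,000 GPUs" above
```

With a synapse estimate at the higher end of the range, the count lands even further above 10,000, so the order of magnitude in the comment holds either way.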
The second main issue is training neurons that are far more complex than our simple arithmetic adders. Backprop doesn't work for such complex neurons.
The third big problem is training data. A human child churns through roughly 10 years of training data before reaching puberty. A man-made machine can perhaps take advantage of the vast data already available, but there still needs to be some structured training regimen.
So, in comparison to the human brain, current AI efforts are playing with toy hardware and toy algorithms. It's surprising that we have gotten as far as we have regardless.
>My personal view is that AGI is kind of an asymptotic goal, something we'll get kind of close to but never actually reach.
Personally, I think it is only a matter of time. Though I suspect that we will probably 'cheat' our way there first with the wetware from cultured neurons that various groups are developing, before we manage to create it in completely synthetic hardware. Also, it might just be the wetware that leads us to the required insights. This is very problematic territory however. I think we are very likely to utterly torture some of the conscious entities created along this path.
Have you thought about using AI instead of parts of the government? There must be a lot of bits that can be automated. Do you think that an AI led government could remove the left/right divide that there is at the moment?
If everyone just filled in a huge form that told the AI what was important to them, this could be used to drive policy.
Filling in a form about what is important to you is just a proxy for voting.
I don't think an AI would help the Left/Right divide in this way, because certain news outlets would still have the same incentives to manipulate what people desire in more extreme directions.
Indeed. Going by past leaps in science and technology, we will probably see something really cool and useful come out of this, something that isn't AGI.
I'm fine with getting a superbike even if the funding was for an impossible FTL drive.
Just wanted to validate that this place was awesome to interview at. Extremely polite with regards to your time, and engaging people the whole way through.
So I've oddly enough had somewhat of a similar problem with my cats chewing through our various cables as well, though they don't surround me threateningly...
One solution might be creating an unpleasant texture on the cable (by using a cable sleeve). If it's sufficiently unpleasant and not the same texture as before, it might be enough of a deterrent.