We'll still release quite a lot, and those releases won't look any different from the past.
> I understand that keeping some innovations private may help commercialization, which may help raise more funds for OpenAI, getting us to AGI faster, so my opinion is that could plausibly make sense.
That's exactly how we think about it. We're interested in licensing some technologies in order to fund our AGI efforts. But even if we keep technology private for this reason, we still might be able to eventually publish it.
I thought from day one that the name «OpenAI» would at best be a slight misnomer, and at worst indicative of a misguided approach. If AGI is close to being achieved, sharing key details of the approach to any actors at all could trigger a Manhattan Project-type global arms race where safety was compromised and the whole thing became insanely risky for the future of humanity.
Glad to see that the team is taking a pragmatic safety-first approach here, as well as towards the near-term economic realities of funding a very expensive project to ensure the fastest possible progress.
In the early days of OpenAI, my thoughts were that the project had good intentions, but a misguided focus. The last year has changed that, though. They absolutely seem to be on the right track. Very excited to see their progress over the next years.
Really? I thought by 1940 physicists generally understood fission and theoretically understood how to build a bomb - they just needed to produce enough enriched fissile material (which was hard to do). And indeed, once they had enough U235, they had such a high degree of confidence in the theory that they built a functioning U235 bomb without ever having previously tested one.
In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.
On the 2nd of December, 1942 he led an experiment at Chicago Pile 1 [1] that initiated the first self-sustaining nuclear reaction. And it was made with Uranium.
In fairness to Fermi, nuclear fission was discovered in 1938 [2] and published in early 1939.
> 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible
But the fact that Fermi was doing such a calculation in the first place proves that we knew in principle how a fission weapon could work, even if we didn't know "how far off [they] were". As soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be.
By contrast, we don't know what consciousness or intelligence even is. A child could define what walking on the moon is, and Fermi was able to define a self-sustaining nuclear reaction as soon as he learned what nuclear reactions were. What even is the definition of consciousness?
> as soon as we figured out the moon was just a rock 240,000 miles away, we knew in principle we could go there, even if we didn't know how far off that would be
I have trouble agreeing with that specific claim, given that both "the rock" and the distance were known to some ancient Greeks around 2200 years ago.
Hipparchus estimated the distance to the Moon at between 62 and 80 Earth radii (depending on the method; he intentionally used two different ones). Today's measurements put it between 55 and 64.
Holy shit, that is so impressive. They didn't even have Newton's law of gravity yet.
Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon. Would you say it's fair to say that by then we knew in principle we could go there and walk there?
(P.S. I assume you know this but the way you wrote your comment makes it seem like our measurements of lunar distance are nearly as inaccurate as Hipparchus's, when we actually know it down to the millimeter (thanks to retroreflectors placed by Apollo, actually). The wide variation from 55x to 64x Earth's radius is because it changes over the course of the moon's orbit, due to [edit: primarily its elliptical orbit, and only secondarily] the Sun and Jupiter's gravity.)
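A quick back-of-the-envelope check of those figures, using the usual round numbers for Earth's mean radius and the modern perigee/apogee distances (nothing here is authoritative, just a sanity check):

```python
# Sanity check: Hipparchus's range vs. modern lunar distances,
# using round textbook numbers.
earth_radius_km = 6371

hipparchus = (62, 80)        # distance in Earth radii, his two methods
modern = (356_500, 406_700)  # km, roughly perigee to apogee

print("Hipparchus:", [r * earth_radius_km for r in hipparchus], "km")
print("Modern:    ", list(modern), "km",
      "=", [round(d / earth_radius_km, 1) for d in modern], "Earth radii")
```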
“Strictly speaking, both bodies revolve around the same focus of the ellipse, the one closer to the more massive body, but when one body is significantly more massive, such as the sun in relation to the earth, the focus may be contained within the larger massing body, and thus the smaller is said to revolve around it.”
No you're right, the Sun and Jupiter are a secondary effect to the elliptical orbit, I skimmed the Wikipedia page too quickly:
> due to its elliptical orbit with varying eccentricity, the instantaneous distance varies with monthly periodicity. Furthermore, the distance is perturbed by the gravitational effects of various astronomical bodies – most significantly the Sun and less so Jupiter
> Once we had Newton's law of gravity though, we knew the distance, radius, mass, and even surface gravity of the moon.
I think it was more complicated than you assume there. Newton published his Principia in 1687, but before 1798 we didn't know the gravitational constant.
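To make concrete what the missing constant changes: Kepler's third law plus the Moon's orbit gives you the product G*M, but pulling out the mass M on its own needs a laboratory value of G, which Cavendish only supplied in 1798. A rough sketch with modern round numbers (illustrative, not a historical reconstruction):

```python
# Kepler/Newton give G*M directly from the Moon's orbit; M alone needs G.
import math

a = 3.844e8            # m, mean Earth-Moon distance
T = 27.32 * 86400      # s, sidereal month

GM = 4 * math.pi**2 * a**3 / T**2   # ~4e14 m^3/s^2, no G required
print(f"G*M_earth ~ {GM:.2e} m^3/s^2")

G = 6.674e-11          # m^3 kg^-1 s^-2, measured by Cavendish in 1798
print(f"M_earth   ~ {GM / G:.2e} kg")
```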
> Would you say it's fair to say that by then we knew in principle we could go there and walk there?
If you mean "we 'could' go if we had something we were sure we didn't have," then there is indeed a written "fiction" story published even before Newton published his Principia:
It's the discovery of the telescope that allowed people to understand that there are other "worlds" and that one would be able to "walk" there.
Newton's impact was to demonstrate that there is no "mover" (which many before identified with a deity) providing the motion of the planets, and that their motions simply follow from their properties and the "laws." Before that, most expected Aristotle to be relevant:
"In Metaphysics 12.8, Aristotle opts for both the uniqueness and the plurality of the unmoved celestial movers. Each celestial sphere possesses the unmoved mover of its own—presumably as the object of its striving, see Metaphysics 12.6—whereas the mover of the outermost celestial sphere, which carries with its diurnal rotation the fixed stars, being the first of the series of unmoved movers also guarantees the unity and uniqueness of the universe."
None of this really counters the core point - that we don't know how long it will be before we have AGI. Is there some yet-to-be-discovered way of defining consciousness that would make the problem tractable?
Your core point (and that of the MIRI article you linked to) is not just "we don't know". It's that the chance of being imminent and catastrophic is worth taking seriously.
I am of course not saying you're wrong that "we don't know". We obviously don't know. It's possible, just like it's possible that we could discover cheap free energy (fusion?) tomorrow and then be in a post-scarcity utopia. But that's worth taking about as seriously as the possibility that we'll discover AGI tomorrow and be in a Terminator dystopia, or also a post-scarcity utopia.
More importantly, it's a distraction from the very real, well-past-imminent problems that existing dumb AI has, such as the surveillance economy and misinformation. OpenAI, to their credit, does a good job of taking these existing problems quite seriously. They draw a strong contrast to MIRI's AI alarmism.
> In 1939, Enrico Fermi expressed 90% confidence [0] that creating a self-sustaining nuclear reaction with Uranium was impossible. And, if you're working with U238, it basically is! But it turns out that it's possible to separate out U235 in sufficient quantities to use that instead.
You are moving the goalposts. You mentioned "fission weapons" in the first place, and now you take a quote about a nuclear fission reactor, which is a whole different thing.
Almost nobody really knows how developed the state-of-the-art theory and applied technology actually are in the confidential advances that the usual suspects (DeepMind, OpenAI, Baidu, the NSA, etc.) may have already achieved.
AGI could already have been achieved somewhere, even if only theoretically, and, like when Edison first got a light bulb to work, the rest of us would still be burning oil, knowing nothing about electricity, light bulbs, or energy distribution networks and infrastructure.
That would be the actual current technology level: new, and mostly not yet implemented.
Back then you wouldn't have believed it if someone had told you, "hey, in ten years city nights won't be dark anymore."
This is an AI equivalent of believing that the NSA has proved that P=NP and can read everyone's traffic.
There's no way to disprove it, but given that in the open literature people haven't even found a way to coherently frame the question of general AI, let alone theorize about it, it becomes just another form of magical thinking.
You're partially right (because AGI really does look VERY far away given the theory that is publicly known), but it's not exactly "magical thinking".
There are several public examples of theory and technology radically more advanced than what was publicly thought possible at a given time, kept secret by governments or corporations for a very long time (decades).
Lockheed achieved the Blackbird decades before it was even admitted that such technology could exist. Looking backwards it may seem like an "incremental" advance, but it wasn't: the engineering required to make the Blackbird fly was revolutionary for the time when it was invented (back in the 50s / 60s).
The Lockheed F-117 and its tech had a similar path, only partially acknowledged in the late 80s (and this was 70s technology, probably based on theoretical concepts from the 60s).
More or less the same could be said about the tech at Bletchley Park: existing tech and theory propelled to extraordinary capabilities by new top-secret advances in engineering. The hardware, events, and advances at Bletchley Park were kept secret for years (I think they only started to be carefully mentioned, though not fully acknowledged, in the 50s, and nothing even close to the detail currently found on Wikipedia).
At any given time there could be plenty of theory/technology jump-aheads being achieved out there, several decades beyond the publicly known, supposedly current, state of the art.
The point is, we don't need to know exactly how consciousness works to create an AGI. In theory, we can just simulate all the neurons in the brain on a supercomputer cluster and voila, we have AGI. Of course, it's not that simple but you get my point.
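To give a very rough flavour of what "simulate the neurons" means at the smallest possible scale, here is a single leaky integrate-and-fire neuron, the textbook toy model. The constants are purely illustrative, and a real whole-brain simulation would need far richer models and on the order of 10^11 of them:

```python
# One leaky integrate-and-fire neuron (textbook toy model).
# All constants below are illustrative, not biologically fitted.
dt = 0.1          # ms, integration time step
tau = 10.0        # ms, membrane time constant
v_rest = -65.0    # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset = -70.0   # mV, potential right after a spike

v = v_rest
input_current = 20.0   # constant drive, arbitrary units
spike_times = []

for step in range(1000):             # 100 ms of simulated time
    # forward-Euler step of dv/dt = (-(v - v_rest) + I) / tau
    v += dt * (-(v - v_rest) + input_current) / tau
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms of simulated time")
```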
This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.
It was hard to predict when or if such a thing could be made, but everyone knew what was under discussion.
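To put rough numbers on "orders of magnitude" (textbook round figures, nothing precise): a single U-235 fission releases about 200 MeV, versus a few eV for a chemical bond.

```python
# Back-of-the-envelope: energy density of fission vs. TNT,
# using round textbook numbers (illustrative only).
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

# ~200 MeV released per U-235 fission, per kg of U-235
fission_j_per_kg = 200e6 * EV_TO_J * AVOGADRO / 0.235

# TNT releases roughly 4.2 MJ per kg
tnt_j_per_kg = 4.2e6

print(f"U-235 fission: ~{fission_j_per_kg:.1e} J/kg")
print(f"TNT:           ~{tnt_j_per_kg:.1e} J/kg")
print(f"ratio:         ~{fission_j_per_kg / tnt_j_per_kg:.0e}")
```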
Compare this to AGI, some vaguely emergent property of a complex computer system that no one can define to anyone else's satisfaction. Attempts to be more precise about what AGI is, how it would first manifest itself, and why on earth we should be afraid of it rapidly devolve into nerd ghost stories.
1932 neutron discovered
1942 first atomic reactor
1945 fission bomb
Now for AI
1897 electron discovered
1940's vacuum tube computers
1970's integrated circuits
1980's first AI wave fails, AI winter begins
2012 AI spring begins
2019 AI can consistently recognize a jpeg of a cat, but still not walk like a cat
???? Human level AGI
It doesn't seem comparable one way or the other, in many ways. But if we do compare them, AI is going much slower and with more failure, backtracking, and uncertainty.
> This is a flawed analogy. The conceptual basis of nuclear weapons was well understood as soon as it was learned that the atom has a compact nucleus. The energy needed to bind that nucleus together gives a rough idea of the power of a fission weapon. If that energy could be liberated all at once, it would make an explosive orders of magnitude more powerful than anything known.
Extrapolating as you seem to be here, when should I expect to see a total conversion reactor show up? I want 100% of the energy in that Uranium, dammit - not the piddly percentages you get from fission!
Seriously, I think you overestimate how predictable nuclear weapons were. Fission was discovered in 1938.
If you read your own Wikipedia link, you'd see that Rutherford's gold foil experiments were started in 1908, his nuclear model of the atom was proposed in 1911—we even split the atom in 1932! (1938 is when we discovered that splitting heavier atoms could release energy rather than consume it.)
We haven't even had the AGI equivalent of the Rutherford model of the atom yet: what's the definition of consciousness? What is even the definition of intelligence?
You might not need a definition of consciousness. Right now it looks like you can get quite far with "fill in the blanks" type losses (GPT-2 and BERT) in the case of language understanding, and self-play in the case of games.
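The objective itself is almost trivially simple to state. Here's a toy numpy sketch; the tiny vocabulary and the random stand-in for a model are made up, and this is nothing like BERT's real code:

```python
# Toy "fill in the blank" objective: cross-entropy at a masked position.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "[MASK]"]
word_to_id = {w: i for i, w in enumerate(vocab)}

sentence = ["the", "cat", "[MASK]", "on", "the", "mat"]
target_word = "sat"  # the word the model should fill in

# Stand-in for a real network: random logits over the vocabulary
# at the masked position.
rng = np.random.default_rng(0)
logits = rng.normal(size=len(vocab))

# Softmax + cross-entropy at the masked position only; minimising this
# is the entire training signal, with no labels beyond the text itself.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
loss = -np.log(probs[word_to_id[target_word]])
print(f"cross-entropy at the masked position: {loss:.3f}")
```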
We are indeed getting impressively far. Four decades after being invented, machine learning went from useless to useful to enormous societal ramifications terrifyingly quickly.
However, we are not getting impressively close to AGI. That's why we need to stop the AGI alarmism and get our act together on the enormous societal ramifications that machine learning is already having.
I think there is a lot of evidence that explosive progress could be made quickly. AlphaGo Zero, machine vision, sentiment analysis, machine translation, voice, etc.
All these things have surged incredibly in less than a decade.
Those are all impressive technical achievements to be sure, but they don't constitute evidence of progress toward AGI. If I'm driving my car from Seattle to Honolulu and I make it to San Diego it sure seems like I made a lot of progress?
> I think there is a lot of evidence that explosive progress could be made quickly. AlphaGo Zero, machine vision, sentiment analysis, machine translation, voice, etc.
Not at all, these are all one-trick ponies and bring you nowhere close to real AGI, which is akin to human intelligence.
The Manhattan Project is a very apt analogy. Even if you believe that AGI is impossible, it should be possible to appreciate that many billions would quickly be invested in its development if somehow a viable pathway to it became clear. Even if just to a few well-connected experts.
This is what happened when it became known nuclear weapons were a viable concept. The technology shifted power to such an extreme degree that it was impossible not to invest in it, and the delay from «likely impossible» to «done» happened too fast for most observers to notice.
The Manhattan project happened when the entire conceptual road map to fission weapons was understood. This is manifestly not the case with AI, which can be charitably described as "add computers until magic".
I didn’t compare OpenAI to the Manhattan Project. I was pointing out that if a small number of people discover a plausible conceptual pathway to AGI, a similar project will happen.
And I'm pointing out that the conceptual breakthroughs that preceded such an engineering sprint happened in the open literature. Wells was writing sci-fi about atomic weapons in 1914. He based it off of a pop-science book written in 1909.
We don't have any such understanding, or even a definition, of 'AGI'.
Wells’ atomic bombs sci-fi was of the type «there is energy in the atom, and maybe someone will use this in bombs someday». Nowhere close to the physical reality of a weapon, more in the realm of philosophy that strong AI currently is. We have an existence proof of intelligence already, after all. The idea is not based on pure fantasy, even though the practicalities are unknown.
Leo Szilard had more plausible philosophical musings in the early thirties, but they were not rooted in any workable practical idea. The published theoretical breakthroughs you mention didn't happen until the late thirties. Nuclear fission, the precursor to the idea of an exponential chain reaction, was discovered only in 1938, 7 years before Trinity.
The issue with strong AI is not that "practicalities are unknown", any more than the issue with Leonardo da Vinci's daydreams of flying machines were that "practicalities are unknown".
He didn't have internal combustion engines, but that's a practicality, other mechanical power sources already existed (Alexander the Great had torsion siege engines). They would never be sufficient for flight, of course, but the principle was understood.
But he could never have even begun to build airfoils, because he didn't have even an inkling of proto-aerodynamics. He saw that birds exist, so he drew a machine with wings that flapped. Look at the wings he drew: https://www.leonardodavinci.net/flyingmachine.jsp
That's an imitation of birds with no understanding behind it. That's the state of strong AI today: we see that humans exist, so we create imitations of human brains, with no understanding behind them.
That led to machine learning, and after 40 years of research we figured out that if you feed it terabytes of training data, it can actually be "unreasonably effective", which is impressive! How many pictures of giraffes did you have to see before you could instantly recognize them, though? One, probably? Human cognition is clearly qualitatively different.
The danger of machine learning is not that it could lead to strong AI. It's that it is already leading to pervasive surveillance and misinformation. (idlewords is pretty critical of OpenAI, but I actually credit OpenAI with taking this quite seriously, unlike MIRI.)
Why do we assume that AGI requires billions of $? Fundamentally, we don't know how to do it, so it may just require the right software design.
Nuclear weapons required enriched uranium, and the gaseous diffusion process of the time was insanely power-hungry. Like a non-negligible (>1%?) percentage of the US's entire electrical generation power-hungry.
Yes I think the better analogy is Fermat's Last Theorem. It didn't require billions of dollars, it just required one incredibly smart specialist grinding on the problem for years.
The atomic bomb was based on science theory. A computer can run many programs and do a great many things, but it will never be able to think by itself.
Our study of (automated) intelligence is based on science too.
> A computer ... will never be able to think by itself.
Turing wrote an entire paper about this (Computing Machinery and Intelligence), where he rephrases your statement (because he finds it to be meaningless) and devises a test to answer it. He also directly attacks your phrasing of "but it will never":
> I believe they are mostly founded on the principle of scientific induction. A man has seen thousands of machines in his lifetime. From what he sees of them he draws a number of general conclusions. They are ugly, each is designed for a very limited purpose, when required for a minutely different purpose they are useless, the variety of behaviour of any one of them is very small, etc., etc. Naturally he concludes that these are necessary properties of machines in general.
> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.
> A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.
This seems like a cop out. Sure, if you do your calculations wrong, it doesn’t behave as you expect. But it’s still doing exactly what you wrote it to do. The surprise is in realizing your expectations were wrong, not that the machine decided to behave differently.
I think any AI researcher has a tale where an algorithm they wrote genuinely took them by surprise. Not due to wrong calculations, but by introducing randomness, heaps of data, and game boundaries where the AI is free to fill in the blanks.
A good example of this is "move 37" from AlphaGo. This move surprised everyone, including the creators, who were not skilled enough in Go to hardcode it: https://www.youtube.com/watch?v=HT-UZkiOLv8
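A minimal runnable version of that dynamic (nothing to do with AlphaGo's actual method; all names and numbers here are made up): pure Monte Carlo move selection for tic-tac-toe. No move is hard-coded; the chosen move emerges from random rollouts rather than from anything the author wrote down.

```python
# Toy sketch: pick tic-tac-toe moves by counting wins over random rollouts.
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def rollout(board, player):
    """Play random moves to the end; return 'X', 'O', or None for a draw."""
    board = board[:]
    while True:
        win = winner(board)
        if win or not legal_moves(board):
            return win
        board[random.choice(legal_moves(board))] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, n_rollouts=200):
    """Pick the move whose random rollouts win most often for `player`."""
    scores = {}
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        other = 'O' if player == 'X' else 'X'
        scores[move] = sum(rollout(trial, other) == player
                           for _ in range(n_rollouts))
    return max(scores, key=scores.get)

# On an empty board the rollouts usually "discover" the centre or a corner,
# even though nothing in the code mentions either.
print(best_move([None] * 9, 'X'))
```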
Investing in a bubble only to make sure the money goes to yourself: seems like an economic loophole. Do you think computers will start to have dreams and desires? Abusing such a machine would be unethical. Go ahead and build a better OCR, just don't fall for the AGI hype.
All sciences that collaborate with the field of AI: Cognitive Science, Neuroscience, Systems Theory, Decision Theory, Information Theory, Mathematics, Physics, Biology, ...
Any AI curriculum worth its salt includes the many scientific and philosophical views on intelligence. It is not all alchemy, though the field is in a renewal phase (with horribly hyped nomenclature such as "pre-AGI", and the most impressive implementations coming from industry and government, not academia).
And even though the atom bomb was based on science too, there is this anecdote from Hamming:
> Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."
It does not need to. It just needs to get complex enough. This is from a 1965 article:
"If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and as machines become more and more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide."
I agree with the above, but imagine the same argument where "the machines" is replaced with "subject-matter experts", or "politicians acting on the advice of subject-matter experts".
The accumulated knowledge and skills of not just specialised individuals but entire institutions, working on highly technical and abstract areas of society, seems like it has created a kind of empathy gap between the people ostensibly wielding power and those who are experiencing the effects of that power (or the limits of that power).
> "... turning them off would amount to suicide."
Although this conclusion appears equally valid in the replacement argument, it sadly doesn't come with the wanted guarantee of "therefore that wouldn't happen".
> A computer can run many programs and do a great many things, but it will never be able to think by itself.
A computer being able to simulate a brain that thinks for itself is the logical extrapolation of current brain-simulation efforts. Many people think there are far less computationally intensive ways to make an AI, but "physics sim of a human brain" is a good thought experiment.
Unless you think there's something magic about human brains? Using "magic" here to mean incomprehensible, unobservable, and incomputable.
I believe ekianjo wasn't talking about neural networks, but simulations using models that are similar to how neurons work. Computational neuroscience is a thing.