Two ideas have been bouncing around in my head more and more lately:
a) In early 2022, a lot of people were claiming that "we're entering an AI winter, deep learning has reached its peak!" Since then we've seen several successive SOTA image-generation models, ChatGPT, and now GPT-4. In just a single year! And we don't seem to be hitting rapidly diminishing returns yet. The pace of development is far outstripping society's (and governments') ability to perceive & adapt.
b) No human has demonstrated the ability to actually understand or explain how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently asserting long lower bounds on AGI timelines. The only tool I have to work with, and actually understand, is thermodynamics. There are about 8 billion existence proofs that general intelligence requires only on the order of 10 measly watts and about 1 kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to believe that we likely already have the computational capability to achieve AGI today, and simply don't have the right model architecture. That could change literally overnight.
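To put rough numbers on this, here's a quick back-of-the-envelope sketch in Python. All the wattage and mass figures are commonly cited estimates rather than measurements, and the GPU and datacenter numbers are just illustrative:

```python
# Rough back-of-the-envelope comparison; every figure below is a
# commonly cited public estimate, not a measurement.

BRAIN_WATTS = 20        # human brain power draw is usually estimated at ~20 W
BRAIN_MASS_KG = 1.4     # typical adult brain mass

GPU_WATTS = 700         # board power of a current datacenter GPU (e.g. H100 TDP)

print(f"Brain power density: ~{BRAIN_WATTS / BRAIN_MASS_KG:.0f} W/kg")

# A single GPU already burns the power budget of dozens of brains...
print(f"One GPU draws ~{GPU_WATTS / BRAIN_WATTS:.0f}x a brain's power budget")

# ...and a single 10 MW datacenter hall has the raw energy budget of
# half a million brains. If brains are the existence proof, raw power
# is clearly not what's missing.
DATACENTER_WATTS = 10e6
print(f"A 10 MW hall covers {DATACENTER_WATTS / BRAIN_WATTS:,.0f} brain power budgets")
```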
What might the world look like once AGI is achieved? What happens when the only thing that has set humanity apart from animals becomes cheaply replicable at scale in hardware? What happens if a small number of entities end up permanently controlling AGI, and the rest of humanity is downgraded to the status of a discardable animal?
AGI could arrive this year, or it might still be 50 years away. Literally nobody can provide a concrete timeline, because nobody actually understands how any of this truly works. But we can still reason about how AGI would impact the world, and start putting safeguards into place to ensure that it's used for our collective good.
> b) No human has demonstrated the ability to actually understand or explain how any of these trained models encode the high-level concepts & understanding that they demonstrate. And yet we have so many people confidently asserting long lower bounds on AGI timelines. The only tool I have to work with, and actually understand, is thermodynamics. There are about 8 billion existence proofs that general intelligence requires only on the order of 10 measly watts and about 1 kg of matter. From a thermodynamic point of view, general intelligence is clearly not special at all. This leads me to believe that we likely already have the computational capability to achieve AGI today, and simply don't have the right model architecture. That could change literally overnight.
Perhaps human brains are simply more energy-efficient at what they do, and replicating the same capability with digital computers would take more than 10 watts.
If that's the case, we still have the potential to build computers that are vastly more efficient than brains, simply because our computers don't need to spend energy on staying alive.
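For a sense of how much headroom exists in principle, here's a quick sketch using Landauer's principle, the hard thermodynamic floor on irreversible computation. The 20 W brain figure is the usual rough estimate; both brains and today's chips sit far above this floor:

```python
import math

# Landauer's principle: erasing one bit at temperature T dissipates at
# least k_B * T * ln(2) joules. This is the only hard thermodynamic
# floor on irreversible computation, so it bounds what 20 W could buy.

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_BODY = 310.0       # ~37 degrees C in kelvin

landauer_joules_per_bit = K_B * T_BODY * math.log(2)   # about 3.0e-21 J

BRAIN_WATTS = 20     # the usual rough estimate for brain power draw
max_bit_ops_per_sec = BRAIN_WATTS / landauer_joules_per_bit

print(f"Landauer floor at 310 K: {landauer_joules_per_bit:.2e} J per bit")
print(f"20 W pays for up to {max_bit_ops_per_sec:.1e} bit erasures/second")
# Real hardware (and, presumably, brains) operate many orders of
# magnitude above this floor, which is the efficiency headroom the
# comment above is pointing at.
```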
> AGI could arrive this year, or it might still be 50 years away. Literally nobody can provide a concrete timeline, because nobody actually understands how any of this truly works. But we can still reason about how AGI would impact the world, and start putting safeguards into place to ensure that it's used for our collective good.
But we won't, and it's going to be a wild ride.