Thanks; that might be fun. You can also email me at the same userid at kurtz-fernhout.com.
For reference (nothing new for you, but others may find it of interest), I just found a writeup on the Terregator that was started by others in 1984 but finished by Kevin Dowling years later: https://www.researchgate.net/publication/2299052_The_Terrega...
Looking at that sparks some more thoughts...
Kevin himself is an interesting example of robotics as a career path. He was the first CMU Robotics Institute employee -- but he ultimately left to work for an LED lighting company, then moved on to wearable sensors, and now to 3D scanning: https://www.cs.cmu.edu/news/dowling-receives-alumni-award https://www.topionetworks.com/people/kevin-dowling-54395e99a...
I was very surprised to learn Kevin was leaving CMU RI for the LED lighting company, given his commitment to robotics for so long. But, aside from interests simply changing over time, perhaps it's an example of people in robotics starting out trying to build some big, complex, integrated general-purpose robot (e.g. Uranus, Terregator, Alvan, etc.) and then ending up focusing on various related details (solid-state electronics and sensors for Kevin; simulations and UI software for me).
And of course Mark Raibert's lab at CMU, then MIT, then BD, has for decades focused on improving walking. Walking was an area where incremental progress could be made, in contrast to more complex navigation and manipulation tasks that may have seemed intractable in practice (especially in the 1980s). It's not clear to me, though, how much special insight BD might have into the larger unstructured-navigation issue.
Yet, as every individual piece of robotics technology gets refined by someone -- whether walking, 3D scanning, image recognition, reliable mobile power sources, touch sensors, terrain mapping, and so on -- we do get closer to someone putting all those things together again in the way we imagined in the 1980s, when we were always disappointed by each component's limitations. Essentially, BD's progress is necessary in the sense of removing another excuse for not having amazing robots (i.e. walking is hard, but BD solved it) -- but by itself it is not sufficient to deliver the robots imagined in sci-fi stories (assuming we really still want them).
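(As a toy way to picture that "putting the refined pieces together" point, here is a minimal sketch in Python -- with Walker, Mapper, and Vision as purely hypothetical stand-ins for whichever refined components someone would actually integrate, not anyone's real API:)

    # Hypothetical integration sketch: each refined component hides behind a
    # small interface, and the "amazing robot" is mostly glue that sequences them.
    class Walker:                        # stand-in for a refined locomotion module
        def step_toward(self, waypoint):
            print("stepping toward", waypoint)

    class Mapper:                        # stand-in for a terrain-mapping module
        def next_waypoint(self, goal):
            return goal                  # trivial placeholder "plan"

    class Vision:                        # stand-in for an image-recognition module
        def obstacle_ahead(self):
            return False

    class Robot:
        def __init__(self):
            self.walker, self.mapper, self.vision = Walker(), Mapper(), Vision()

        def go(self, goal):
            waypoint = self.mapper.next_waypoint(goal)
            if not self.vision.obstacle_ahead():
                self.walker.step_toward(waypoint)   # a real robot would loop and replan

    Robot().go((10, 20))                 # prints: stepping toward (10, 20)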
I can also guess there may have been a lot of falls edited out of that video? :-) For reference, a section of a larger video showing lots of falling walking robots: https://youtu.be/xEwtM0pKOV0?t=1154
Having linked to that collection of robot falls, I still feel it is only a matter of time before all sorts of walking robots (and manipulating robots) can reliably do much better in unstructured environments. It's hard to predict exactly how long, though -- whether years or decades. I should have been clearer in my previous post that while computer-controlled mobility is increasingly a solved problem, as the BD video shows, using mobility effectively to navigate unstructured environments remains an open issue (just as manipulation in unstructured environments remains open). That may sound like a subtle distinction, but in the 1980s (as you undoubtedly know from firsthand experience) just getting a computer to control a big motor reliably was a big deal.
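(For a sense of how far that low-level piece has come: the closed-loop motor control that was a research project then is now a few lines of textbook code. A minimal PID sketch in Python, where read_speed() and set_voltage() are hypothetical stubs for a real encoder and motor driver:)

    # Minimal PID speed controller -- the "get a computer to control a big motor
    # reliably" problem that was a big deal in the 1980s. Hardware calls are stubs.
    KP, KI, KD, DT = 2.0, 0.5, 0.1, 0.01     # gains and a 100 Hz loop period

    def read_speed():                        # stub: replace with a real encoder read
        return 0.0

    def set_voltage(volts):                  # stub: replace with a real motor driver
        pass

    target = 100.0                           # desired speed, e.g. in RPM
    integral, prev_error = 0.0, 0.0
    for _ in range(1000):                    # ~10 s of control; sleep DT per step on real hardware
        error = target - read_speed()
        integral += error * DT
        derivative = (error - prev_error) / DT
        set_voltage(KP * error + KI * integral + KD * derivative)
        prev_error = error

    # None of this says anything about *where* the robot should go -- the
    # unstructured-navigation (and manipulation) part is what remains open.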
As with your point on baby steps, another example is that there are now about 200,000 Kiva robots in Amazon warehouses (somewhat structured environments), where automation makes a big difference in reducing costs -- even if the "robots" are neither humanoid in form nor general-purpose in function.
Coincidentally, I currently work on a robotics project -- a gene sequencer -- but I rarely think of it as doing "robotics". That is mostly because the robot itself is essentially an off-the-shelf commercialized device integrated into a larger unit with lots of other specialized hardware (and it is the specialized hardware, including custom chips and related reagents and laser beams, that is the star of the show). I generally just work on UI software that forms an ecosystem around that device. I don't usually work on software running on the device itself -- that is done by others, whom I sometimes envy. :-) And then I think of all the headaches involved in making robotics work reliably, and I am glad again to only have to wrestle with JavaScript-related technical quirks. :-) Ideally, the customer never has to think about the robot. They can't even see the robot when the device is closed and in operation. Ultimately, the customer doesn't really care that there is a robot in the big box they bought -- they want to see results (even if the robot is essential right now to delivering those results).
Maybe, as with all technology -- like the discussion in Heinlein's "The Rolling Stones" of propulsion advancing from the complex IC engine to the simpler nuclear rocket -- one can wonder how any system using a robot might be simplified to remove the robot entirely, reducing costs and increasing reliability. For example, one could make a robot that washes dishes like a human does, picking up one dish at a time, rinsing it under a faucet, and drying it with a towel -- but using a special-purpose dishwasher is overall more effective and energy efficient. (Someone still needs to load the dishwasher and put the plates away in a cabinet, but that usually does not take much time or trouble.) And maybe that is an aspect of robotics as a field in general -- when we set out to find clever ways to deliver results that currently involve human-like motions (walking, carrying, stacking, looking, etc.), we often eventually achieve the same results, or better, in different specialized ways. So "mechanism" advances, even if anthropomorphic "robots" eventually get bypassed. The value comes from creating mechanisms to do things that humans can't do easily or enjoyably or reliably or profitably. The 1956 Theodore Sturgeon story "The Skills of Xanadu" has a spin on that, where "work" is made into play using nanotech, networks, and mobile computing.
For another example, any modern car is essentially a robot full of computers and sensors and actuators -- but it does not look or work like a human or other organic creature. We don't usually call 2020s car mechanics "robot technicians", even though that really is what they are by the 1980s standards of the ancestral Terregator. That is because the "robotics" aspect of cars has culturally disappeared into the background through incremental progress.
And even design-wise, there are no direct "horse" components in a Tesla electric car, yet a Tesla still delivers results similar to (or in most ways better than) the horse-drawn carriage that was its ancestor (or the human-drawn rickshaw). Although "better" perhaps ignores that horses were self-replicating, had more "horse sense" about navigating unstructured environments to bring home drunks, could be refueled anywhere there was grass, and -- with their hooves and the wide wheels of horse carts -- could handle rougher terrain than many cars (objectives still remaining for advanced robotics). So some things also got lost along the way, amidst other advantages (like not needing to deal with horse manure or abandoned dead horses in cities, which used to be a tremendous problem).
So, two steps forward but one step back, as so often with technology. Let's just hope overall technology is not two steps forward and two steps back (at the risk of quoting another Paula Abdul song). Or worse, two steps forward and three steps back. See the book "Retrotopia" for thoughts on how technical advances, taken together, sometimes lead to more problems than they solve.
When electric motors were new, a home might have just one, which multiple devices could be connected to. Now most people would have a hard time saying how many electric motors (or other actuators) are in their home (e.g. in the microwave, the dishwasher, the CD player, etc.). The same is happening with computers and sensors as they become embedded in so many devices. Robotics in that sense is perhaps increasingly all around us -- even as we notice it less and less?
Economically, making better special-purpose tools (like a dishwasher) is not that threatening to the income-through-jobs link on which our current economy rests for distributing purchasing power (as discussed in "The Triple Revolution" memorandum from 1964). That is because humans are still needed in the loop somewhere. In contrast, general-purpose robots (and especially general-purpose AI) and related larger systems will stretch that link much closer to the breaking point. So the broader question is how (or if?) we (who?) want to structure our technosphere and economics so it remains (non-cyborged-)human-compatible...
Here is my own dystopian/utopian commentary on that from 2010, involving robots and inspired by Marshall Brain's "Manna": :-)
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?v=p14bAe6AzhA