
The microwave example is very interesting.

The prompt only asked to "warm up my lunch" without specifying how.

SayCan [1] used LLMs to generate step-wise, high-level instructions for robotic tasks. This work takes it a step further, converting high-level instructions into low-level actions almost entirely autonomously.

[1] https://say-can.github.io/
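
To make the idea concrete, here's a toy sketch of the decomposition step (not code from either paper): the skill names and the canned plan are made up for illustration, and the LLM is stubbed out so the snippet runs as-is. SayCan itself scores a fixed skill library with learned value functions rather than trusting free-form generation.

    # Toy sketch: map a high-level request to low-level robot skills.
    # llm_plan() is a stand-in for a real language model; here it just
    # returns a canned decomposition so the file runs without any API.

    SKILLS = {
        "find(object)":     "navigate until the named object is detected",
        "pick(object)":     "grasp the named object",
        "place(target)":    "put the held object at the target",
        "open(appliance)":  "open the appliance door",
        "close(appliance)": "close the appliance door",
        "press(button)":    "press the named button",
    }

    def llm_plan(instruction: str) -> list[str]:
        """Stand-in for an LLM that decomposes an instruction into calls
        drawn from the skill library above."""
        canned = {
            "warm up my lunch": [
                "find(lunch box)", "pick(lunch box)", "find(microwave)",
                "open(microwave)", "place(microwave)", "close(microwave)",
                "press(start)",
            ]
        }
        return canned.get(instruction.lower(), [])

    def execute(step: str) -> None:
        # A real system would dispatch each step to a low-level
        # controller or learned policy; here we just print it.
        print(f"executing: {step}")

    if __name__ == "__main__":
        for step in llm_plan("Warm up my lunch"):
            execute(step)

The interesting part in the new work is that the last mile (execute) is also handled almost entirely autonomously, instead of relying on hand-engineered controllers per skill.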




Similar work (with the same example) from last year: https://mahis.life/clip-fields


Looks great, but why are most of the robotics videos 10x slower than real time?


Because if something goes wrong with a robot going at full speed, it goes VERY wrong.



