
I think they're motivated by more practical concerns as well -- namely, cooperation between different AI agents.



I agree, and they explicitly state this as one of their three goals ("... is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.")

However, I think this will be even more valuable in human-computer interactions (their second goal).

Consider: if you just had a fight with a lover, then had a bad day at work, and Alexa recommends that you watch a Black Mirror episode, it might be awful timing. In fact, as humans, we know that if someone walks in panting and frowning and slams their hands on the desk, we shouldn't crack a joke. I think there was some research out of Google X about how much more useful robots seemed if they gave off signs of what they were doing (appearing frustrated if they couldn't complete a task, etc.).


That might be the best time to crack a joke.



