
> Voice Conversation

> If something doesn't make sense, it's likely because you misheard them. There wasn't a typo, and the user didn't mispronounce anything.

> Vision-enabled

> Refuse: [...], Classify human-like images as animals

> Dall•E

> Diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct terms.

> // - Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race.



I'm confused by this. What purpose does this serve?


Image models tend to have a lot of bias, assuming things like race and gender from context when not given specific instructions.
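A minimal sketch of the kind of prompt rewriting the quoted system prompt describes: attach direct descent and gender terms to each person-referring word before the prompt reaches the image model. All names here (`DESCENTS`, `GENDERS`, `PEOPLE_TERMS`, `diversify`) and the attribute lists are illustrative assumptions, not the actual implementation.

```python
import random

# Illustrative attribute lists; the real system's terms are not public.
DESCENTS = ["East Asian", "South Asian", "Black", "White", "Hispanic", "Middle Eastern"]
GENDERS = ["woman", "man"]

# Hypothetical set of words treated as referring to a person.
PEOPLE_TERMS = {"doctor", "nurse", "teacher", "engineer", "chef"}

def diversify(prompt: str, rng: random.Random) -> str:
    """Prefix each person-referring word with randomly chosen
    descent and gender terms, leaving other words untouched."""
    out = []
    for word in prompt.split():
        if word.strip(".,").lower() in PEOPLE_TERMS:
            out.append(f"{rng.choice(DESCENTS)} {rng.choice(GENDERS)} {word}")
        else:
            out.append(word)
    return " ".join(out)

rng = random.Random(0)
print(diversify("a portrait of a doctor and a chef", rng))
```

Sampling attributes independently per person is what keeps, say, all members of one occupation from coming out the same race or gender, which matches the "grounded in reality" comment in the quoted prompt.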



