> There are a lot of ways where this could be actively malicious.
I feel like there's the question we also ask for anything that gets automated: is it worse than what we have without it? Will an AI assistant send you to worse hotels than a spam-filled Google SERP will? Will it give you less wrong information?
The other interesting part is the social interaction component. If it's less psycho ("you said it was 2023, you are a bad person" -- I guess it was trained on SJW subreddits?), it might help some people learn to communicate more respectfully. They'll have a hard time doing that with a human, because humans will typically just avoid them if they come off as assholes. An AI could be programmed not to block them but to provide feedback instead.