i would love to know more about the internal decision-making that led to this release. it all just seems so weird. i can see the appeal of AI actors on social media - you get to make everybody feel more popular than they are because there's always somebody to reply to them, and you can make sure people get replies to the attention-seeking posts no other real user actually wants to engage with, all without degrading the experience for anyone real.
but why would they implement it as a limited number of obviously handmade profiles with baked-in personalities? the internet is already full of real people's accounts that are, for all intents and purposes, without personality and ephemeral. real people sign up for facebook all the time, never expose a personality, interact with a couple of posts, and then disappear. LLM chatbots could be great at imitating those. what they can't be is real people with real personalities.