Particularly when it's likely that, despite the AI bullshitting in the absence of data about its creators, it's also incidentally true that the "black queer momma" persona is a few lines of behavioural stereotypes composed by white male programmers with no particular experience of black queer mommas.
I don't think that necessarily applies when you could easily build a training set from some actual black Americans' writing on the internet or in books (or even from an individual who self-identifies that way, training on everything they've written around the internet) and still end up with those same stereotypes when you ask an AI to create such a profile.
You don't need a black American engineer or product manager to say "I approve this message" or "halt! my representation is more important here and this product would never fly," because they're simply not the people the data set was trained on, even if you only asked an AI to create the byline on the profile for such a character.
It's weirder, and more racially insensitive, for people to be vicariously offended on behalf of the group while still not understanding the group. In this case, the engineer or product manager or other decision maker wouldn't have the same background as the person who would call themselves "momma," and it wouldn't matter anyway if you can regurgitate that from a training set.
I mean, sure, some programmers with very little experience of queer black mommas could, hypothetically, be so good at curating information from the internet and carrying out training exercises that they created a persona that convincingly represents a queer black momma. Do we think this is what happened in this instance?
In which, ironically, the bot called it a "glaring omission"
The bot is echoing sentiments from the comment sections it was trained on and has no idea of its own origins.
It's acting aware and sensitive, but it only has information about the tech sector as a whole.
My critique is that the standards being applied are dumb all the way down. The standards aren't actually that enlightened even if there were more representation congruent with the race/identity being caricatured, which nullifies the whole criticism.
The training set is the only thing that's important with a language model. And this is just a symptom of dead internet theory: even the persona's byline was probably generated by a different language model.
Well, yeah, I acknowledged in my first post that the bot has no idea of its actual origins. The point is that at some point an actual product manager thought that creating this persona (probably a generic training set plus a few lines of human-authored stereotypes as a prompt) and releasing it to the public as an authentic, representative personality was a good idea. Unlike the product manager, the bot's bullshitting was context-aware enough to express the sentiment that this was a bit embarrassing.
If the intent is to make a recognizable caricature and apply labels to it (cough, stereotype), you don't have someone draw themselves. And it really looks like stereotyping is their intent.