I came across a newsletter by Joanna Stern, wherein she reports that Apple Intelligence often adds a “husband” when it is summarizing her wife’s messages.
When asked about this, she reports, Apple responded that:
- Apple’s AI tools were built with responsible AI principles to avoid perpetuating stereotypes and systemic biases.
- Apple is addressing these issues through updates to its AI tools and is improving the accuracy of its models.
- The company is encouraging people to report these issues while these AI tools are still in beta.
Now all that is well and good, but I believe that, owing to the nature of this “AI” and the way I expect it is implemented, as generative artificial intelligence, it will never be able to ‘outrun’ the bias inherent in its training material. That is to say: because the “AI” must be trained on a corpus of material before it can generate summaries (in my assumption of how it works), and because that corpus will never statistically reflect the marginalized, the queer, and so on, the “AI” will always be unable to “react” as a human would.
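To make the mechanism I am assuming concrete, here is a toy sketch. This is emphatically not Apple’s actual implementation; the corpus, the counts, and the function are all invented for illustration. The point is only that a system which picks the statistically most common continuation will reproduce whatever skew exists in the text it was trained on, no matter whose messages it is summarizing.

```python
# Toy sketch of my assumption: a purely statistical completer reproduces
# the skew of its training corpus. Not Apple's code; the corpus and counts
# below are invented for illustration only.
from collections import Counter

# A hypothetical corpus in which a woman's spouse is overwhelmingly
# written about as a husband, as tends to be true of large scraped text.
training_corpus = (
    ["she texted her husband"] * 950
    + ["she texted her wife"] * 50
)

def most_likely_completion(prefix: str, corpus: list[str]) -> str:
    """Return the word that most frequently follows `prefix` in the corpus."""
    counts = Counter()
    for sentence in corpus:
        if sentence.startswith(prefix):
            counts[sentence[len(prefix):].strip()] += 1
    completion, _ = counts.most_common(1)[0]
    return completion

# Whoever's messages are actually being summarized, the statistics say "husband".
print(most_likely_completion("she texted her", training_corpus))  # -> husband
```

The toy ignores every bit of context about the actual people involved, which is exactly the failure mode I expect here, just at a much larger scale.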
I have written previously about the intersection of gender and AI (see Gender Classifier and on Facial Recognition), and to be honest this post is mostly a marker of one more data point for that argument. I make it so I will be able to track back to it when needed.
If the “AI” cannot be trained and guardrailed enough to figure out that a contact named “Wife” is someone’s wife, then it strikes me that the potential here is… limited. And I have no interest in submitting my work to become grist for this particular mill, even if doing so were to improve its quality.
This being my first post in a month, I will leave it here. Thanks much,
-DrMundane