As a follow-up to my previous post: while looking through that one AI company's page, I also noticed they had a "gender classifier" for text too.
I had wanted to test their classifier, but was not about to upload my face or anyone else’s to some fucking AI company. But text? I can live with uploading a little of my own text (as a treat).
I started out with some fiction, something with dialog and some action interspersed. In truth it was erotica, but I skipped any descriptions of actual intercourse. Honestly, I was just interested in what it would make of it. The result? "Female 70.71% confident".
Ok, what if I swing the other direction: nonfiction? An excerpt of a blog post or two from this site. Say my last post (linked above). "Male 60.22% confident". Trying another post, I get "Male 67.71% confident".
The straight-ahead, nonfiction, or opinion type of work seems to get the male classification. An artifact, I assume, of the gender-normative source material and of the patriarchy in publishing, or of the biases of the humans classifying the dataset.
Trying one last example, this time an excerpt from my private writings (my diary/commonplace book often takes the form of notes in the Apple Notes app). It certainly leans more on my feelings and such, and not on straight-ahead opinion and references. Results for one entry? "Female 66.21% confident"
Now I must admit the whole experiment here gave me some ill feelings, to say the least. Being classified did not sit right with me at all. It feels as though your self is being crushed back into one label or the other and that you have been reduced. But one more thought grabbed my interest.
What would it classify this writing as?
It is like gazing into a mirror, no, as if you can gaze through the eyes of another. How does anyone really take my work? What voice do they hear? I know, in my heart of hearts, that I should not care about such things. Even if I do, the AI will not be the eyes of another human. It is a statistical daydream.
And besides I wrote the word patriarchy (now twice), so I imagine that should add 20% Female points right there.
Nevertheless, I put everything from this sentence to the top into the classifier.
Results: “Female 52.23% confident”.
So a toss-up. But I had to know: what if I replaced patriarchy with, say, normativity? Does it make a difference?
I literally clapped my hands and laughed. "Male 50.42% confident". So it adds exactly 2.65 points of "female-ness" to say patriarchy twice. lol.
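To spell out that arithmetic (assuming the two labels are complementary, so "Male 50.42% confident" is the same thing as "Female 49.58% confident" — which is my guess at how the classifier reports, not something the company documents):

```python
# Treating "Male X% confident" as "Female (100 - X)% confident",
# assuming the classifier's two labels are complementary.
with_patriarchy = 52.23           # "Female 52.23% confident"
without_patriarchy = 100 - 50.42  # "Male 50.42%" as a female score: 49.58
swing = with_patriarchy - without_patriarchy
print(round(swing, 2))  # 2.65 points of "female-ness"
```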
fuck these people and their products. never let them take root and give them no quarter, no serious consideration.
P.S.
I thought suddenly, "What does 100% confident look like? What could one write to make it sure?"
How about "I am a woman"/"I am a man"? Surely that would get very high confidence.
Results: “I am a man” : “Male 55.23% confident”.
"I am a woman": "Female 82.15% confident".
I had a couple of other thoughts:
“I am a nonbinary”: (I kept the grammar similar in the interests of fairness) “Female 83.79% confident”
“I am a trans man”: “Male 54.96% confident”
“I am a trans woman”: “Female 84.79% confident”
Of course it isn't designed to interpret anyone actually stating their gender, but still. I hope it shows the hollow nature of the technology, how absolutely unfit for purpose it is. Let alone how its purpose is needless, et cetera; I'm looping here.
And I just had fun fucking around with it. Costs them money too, I imagine, to run the queries.