MundaneBlog

January 19, 2025

AI and Gender – Heteronormativity?

Filed under: AI,Gender,Technology — DrMundane @ 2:33 pm

I came across a newsletter by Joanna Stern, wherein she reports that Apple Intelligence often adds a “husband” when it is summarizing her wife’s messages.

When asked about this, she reports that Apple responded:

  • Apple’s AI tools were built with responsible AI principles to avoid perpetuating stereotypes and systemic biases.
  • Apple’s addressing these issues through updates to its AI tools and is improving the accuracy of its models.
  • The company is encouraging people to report these issues while these AI tools are still in beta.

Now all that is well and good, but I am of the belief that, owing to the nature of this “AI” and the way I expect it is implemented, as generative artificial intelligence, it will never be able to ‘outrun’ the bias inherent in its training material. That is to say: because the “AI” must be trained on a corpus of material before it can generate summaries (in my assumption of how it works), and because that corpus will never statistically reflect the marginalized, the queer, and so on, the “AI” will always be unable to “react” as a human would.
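To make the statistical point concrete, here is a toy sketch of the mechanism as I assume it works (Apple’s actual system is opaque to me, and surely far more complex): a model that completes text purely from counted co-occurrences in its training corpus will reproduce whatever pairing dominates that corpus.

    from collections import Counter

    # Toy next-word "model": complete text with the most frequent
    # follow-on word seen in training. Purely illustrative.
    corpus = (
        "my wife and her husband went home . "
        "his wife texted her husband . "
        "her husband called his wife . "
        "my wife and her wife went home ."  # queer pairings are rare in the data
    ).split()

    next_words: dict[str, Counter] = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        next_words.setdefault(prev, Counter())[nxt] += 1

    def complete(word: str) -> str:
        """Return the statistically most likely next word."""
        return next_words[word].most_common(1)[0][0]

    print(complete("her"))  # -> "husband": the corpus majority wins

The toy is crude, but the mechanism scales: when “wife” statistically implies “husband” in the training data, a “husband” gets generated, no matter what the messages actually say.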

I have written previously about the intersection of gender and AI, see Gender Classifier and on Facial Recognition, and to be honest, this post is mostly a marker of one more datapoint to add to my argument. I make it mostly so I will be able to track back to it when needed.

If the “AI” cannot be trained and guardrailed enough to figure out that a contact named “Wife” is someone’s wife, then it strikes me that the potential here is… limited. And I have no interest in submitting my work to become grist for this particular mill, even if it were to increase the quality.

This being my first post in a month, I will leave it here. Thanks much,

-DrMundane

December 3, 2024

Links you Should Read 2024-12-2

Filed under: AI,Consumerism,Daily Links,Technology — DrMundane @ 12:08 am

Starting with one from Wired this morning.

Sonos appears poised to go down the enshittification rabbit hole, if their fortunes do not turn. A particularly galling quote:

And while the overall speakers per household were actually up to 3.08 from 3.05 last year, with a slowing new user base, how can Sonos continue to make money in what is looking to be a saturated market?

The answer is they can’t! We’ve sold enough fucking smart speakers! Please stop; the planet is literally heating up. It’s just emblematic of the omnipresent drive to grow and profit and extract. Even the journalists writing the story take it for granted that this must happen, and will not spend a single line arguing that Sonos actually doesn’t need to grow year over year. Maybe their business model wasn’t sustainable, and they should learn the hard way why unlimited growth never works. Instead, they will use their new subscription-ready app to squeeze the people who have already bought in. They have altered the deal; pray they do not alter it further.

For the next story, Mike Masnick over at Techdirt goes over how he actually uses AI. An old post, but it came to my attention again via his newer post.

I must say I am fairly convinced that AI could be useful as a writing assistant. It does make me want to try it. But then again, I have enough trouble writing when it comes down to just finding motivation. Adding more steps to my process would undoubtedly prevent me from finishing.

I did have an occasion at a dinner party recently to actually talk to someone in education about their usage of AI. I was, at first, taken aback that someone in my real life actually uses AI, and that they have positive things to say about it. Their argument mainly revolves around the “time” factor. They are used to having a lot on their plate, and for them AI is useful for creating presentations, simplifying language (“write this so a 4th grader can understand”), and otherwise helping them create new material quickly.

I mainly listened and questioned on this occasion, and did not get into my more… animated feelings. Still, I actually do hope to get into more discussions on AI in real life, and hopefully I will have the presence of mind to argue my full position convincingly.

That’s it for today, enjoy.

November 24, 2024

ChatGPT in the classroom – one thought experiment

Filed under: AI,Technology — DrMundane @ 2:08 pm

Reading Mystery AI Hype Theatre 3000 this morning, on ChatGPT having no place in the classroom, I tried out a little thought experiment. It goes something like this:

Let us assume we are in, say, an AP Literature class. I imagine the hardest part of grading such a class is reading and scoring several essays for each student over the year. I know for sure my terrible longhand cursive was a painful thing for my poor teacher to read. But what about an AI? Can the AI (unable to come to any factual conclusion or to reason, only to statistically generate) be used to speed this up?

I think the major problem is the inability to actually understand the students’ writing, but let us take for granted that improvements to large language models will somehow be able to overcome this fact with sufficient data and computation. That seems to be the claim of Altman and the others, but to be clear, they will not.

We know that currently these LLMs scrape data from the public internet, so we can safely assume that they have a robust source of information on AP Literature questions, books, themes, etc., from forums and other sources seeking to help students. Much of that writing will be done by students themselves, so another bonus is that it is writing representative of the population. So far that seems reasonable.

But there is already one problem. The writing is not representative of the whole population of AP Literature students, only those with access to these online resources and/or willingness to engage with them. I was a student who would never workshop their ideas or practice their writing online. I was simply too private for that sort of thing. The dataset certainly would never include my writing.

I would therefore argue that any such LLM grading students’ work would be inherently biased against students whose writing does not line up with the LLM’s source material. It knows which essays deserve a 5 and which deserve a 1 based on its dataset. Based on statistics. It does not understand the rubric or the intent, and is therefore unable to rate new (to it) but correct writing in a fair manner. You will therefore teach students to write like the corpus of text ingested by the model, not to find their own voice and style. You teach them to look at the prompt and come up with the ‘correct’ answer, not an answer that necessarily comes from their own experience and understanding of the literature in question.
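To illustrate what I mean (and only to illustrate; I have no idea how a real grading product would be built), here is a minimal sketch of a purely statistical grader. It scores a new essay by lexical similarity to already-scored essays; every text and score below is made up.

    # A sketch, not anyone's real system: grade a new essay by finding
    # its most lexically similar neighbor in a scraped, pre-scored corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        ("The green light symbolizes Gatsby's longing for Daisy and the past.", 5),
        ("Fitzgerald uses the valley of ashes to critique industrial wealth.", 5),
        ("The book was good and had many themes about money and love.", 1),
    ]

    vectorizer = TfidfVectorizer()
    scored = vectorizer.fit_transform([text for text, _ in corpus])

    def grade(essay: str) -> int:
        """Give the essay the score of its most similar corpus neighbor."""
        sims = cosine_similarity(vectorizer.transform([essay]), scored)[0]
        return corpus[int(sims.argmax())][1]

A brilliant essay written in vocabulary the corpus never saw gets low similarity to every ‘5’; a system like this can only reward familiarity, never correctness.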

I think this alone makes the use of AI impossible in the classroom, owing to the discriminatory potential.

Another example that springs to mind: what about queer analysis? What about LGBTQ+ students? Will their viewpoints and experiences be reflected in the corpus? I highly doubt it. This means any such student may write a brilliant analysis of a book through a queer lens and it will matter not, because statistically it doesn’t match what a ‘5’ is supposed to look like. It uses all sorts of words that probably aren’t associated with ‘5’ essays. The LLM may even deem them ‘profane’. It is therefore not a ‘5’, in the view of the LLM.

I think these two thought experiments illustrate why I believe, beyond all the technical problems and overselling, that even a GPT that lives up to the hype and can be made factually correct will never be suitable for any evaluative work. Used to such an end, the AI encourages normative expression and discourages breaking boundaries. It truly discourages real feeling and art.

November 16, 2024

Links you should read – 2024-11-15

Filed under: Daily Links,Surveillance,Technology — DrMundane @ 3:02 am

To start out the roundup, Karl Bode at Techdirt on Canada’s new right-to-repair law. See also Doctorow on Pluralistic covering the same, for some further explanation. Controlling our devices is the first step to controlling our data, and in an America that is growing more authoritarian, one must protect oneself and one’s data. Right to repair also means a right to disassemble, understand, and verify. Only when we fully know our devices can we fully trust them.

Following up on that, a guide from WIRED on protecting your privacy. Small steps.

Back to government surveillance, with a 404 Media piece on the use of location data by the government (warrant required? Unclear). Even assuming that under current law a warrant is required, I imagine there will soon be a federal judiciary willing to chip away at the 4th Amendment. How else will we find the (immigrants/trans people/journalists/assorted enemies within)? I worry that I put too fine a point on these concerns. But then again, I would prefer to be wrong and advancing security. A ‘hope to be wrong and plan to be right’ kind of deal.

Hopping over to the archive of links on Pinboard for something fun (but a long read): Closing Arguments of Mr. Rothschild in Kitzmiller v. Dover. My favorite quote?

His explanation that he misspoke the word “creationism” because it was being used in news articles, which he had just previously testified he had not read, was, frankly, incredible. We all watched that tape. And per Mr. Linker’s suggestion that all the kids like movies, I’d like to show it one more time. (Tape played.) That was no deer in the headlights. That deer was wearing shades and was totally at ease.

What a line. *chef’s kiss*

November 13, 2024

Links You Should Read – 2024-11-12

Filed under: Daily Links,Gender,Surveillance,Technology — DrMundane @ 12:59 am

Starting out with one from Wired, on facial recognition. Never forget that the terrain has changed, both for protest and online. I would certainly recommend anyone take steps to protect themselves moving forward. I am interested in the intersection of ‘dazzle makeup’, gender classification, and facial recognition in general. Genderfuck = AI protection? One can only hope.

Bonus link? The dazzle makeup one above. That machine-vision.no website seems neat; looking at how we conceptualize AI, machine vision, etc. in media can tell us a lot about our worries and fears as a society. Back on course a little: dazzle makeup is one of those things I really wish were more true than it is. You can trick the AI, sure, but any human will pick out your face and track you that way. You become a person of interest real quick when you hide in that way. You need to blend, I think. Still, a person can dream.

Next up, one on pornography from Techdirt. In a Project 2025, Christian-nationalist country, ‘pornography’ will not be limited to actual materials for sexual pleasure. It will be used as a label to restrict and remove LGBTQ+ material. It is literally the Moms for Liberty playbook, now coming to a federal government near you.

Wrapping up my links, read and subscribe to Aftermath!

November 5, 2024

“Gender Classifier” redux

Filed under: Gender,Technology — DrMundane @ 12:49 am

As a follow-up to my previous post, I also noticed while looking through the one AI company’s page that they had a “gender classifier” for text too.

I had wanted to test their classifier, but was not about to upload my face or anyone else’s to some fucking AI company. But text? I can live with uploading a little of my own text (as a treat).

I started out with some fiction, something with dialog and some action interspersed. In truth it was erotica, but I skipped any descriptive action of actual intercourse. Honestly, I was just interested in what it would make of it. The result? “Female 70.71% confident”.

Ok, what if I swing in the other direction, nonfiction? An excerpt from a blog post or two from this site. Say, my last post (linked above). “Male 60.22% confident”. Trying another post, I get “Male 67.71% confident”.

The straight-ahead, nonfiction, or opinion type of work seems to get the male classification. An artifact, I assume, of the gender-normative source material and of the patriarchy in publishing, or of the biases of the humans labeling the dataset.

Trying one last example, this time an excerpt from my private writings (my diary/commonplace book often takes the form of notes in the Apple Notes app). It certainly leans more on my feelings and such, and not on straight-ahead opinion and references. Results for one entry? “Female 66.21% confident”

Now I must admit the whole experiment here gave me some ill feelings, to say the least. Being classified did not sit right with me at all. It feels as though your self is being crushed back into one label or the other and that you have been reduced. But one more thought grabbed my interest.

What would it classify this writing as?

It is like gazing into a mirror, no, as if you can gaze through the eyes of another. How does anyone really take my work? What voice do they hear? I know, in my heart of hearts, that I should not care about such things. Even if I do, the AI will not be the eyes of another human. It is a statistical daydream.

And besides I wrote the word patriarchy (now twice), so I imagine that should add 20% Female points right there.

Nevertheless, I put everything from this sentence to the top into the classifier.

Results: “Female 52.23% confident”.

So, a toss-up. But I had to know: what if I replaced patriarchy with, say, normativity? Would it make a difference?

I literally clapped my hands and laughed. “Male 50.42% confident”. So it adds exactly 2.65% “female-ness” to say patriarchy twice. lol.
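For anyone checking my arithmetic: the tool reports the confidence of whichever label won, so the two results have to be put on a single scale first. A quick sketch:

    def female_prob(label: str, confidence: float) -> float:
        """Convert a (winning label, confidence) pair to percent 'female'."""
        return confidence if label == "Female" else 100.0 - confidence

    with_patriarchy = female_prob("Female", 52.23)   # 52.23
    with_normativity = female_prob("Male", 50.42)    # 49.58

    print(round(with_patriarchy - with_normativity, 2))  # 2.65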

fuck these people and their products. never let them take root and give them no quarter, no serious consideration.

P.S.

I thought suddenly, “What does 100% confident look like? What could one write to make it sure?”

How about “I am a woman”/“I am a man”? Very high confidence there, surely.

Results: “I am a man”: “Male 55.23% confident”.

“I am a woman”: “Female 82.15% confident”.

I had a couple of other thoughts:

“I am a nonbinary”: (I kept the grammar similar in the interests of fairness) “Female 83.79% confident”

“I am a trans man”: “Male 54.96% confident”

“I am a trans woman”: “Female 84.79% confident”

Of course it isn’t designed to interpret anyone actually stating their gender, but still. I hope it shows the hollow nature of the technology. How absolutely unfit for purpose it is. Let alone how its purpose is needless, et cetera; I’m looping here.

And I just had fun fucking around with it. Costs them money too, I imagine, to run the queries.
