MundaneBlog

December 3, 2024

Links You Should Read 2024-12-2

Filed under: AI, Consumerism, Daily Links, Technology — DrMundane @ 12:08 am

Starting with one from Wired this morning.

Sonos appears poised to go down the enshittification rabbit hole, if their fortunes do not turn. A particularly galling quote:

And while the overall speakers per household were actually up to 3.08 from 3.05 last year, with a slowing new user base, how can Sonos continue to make money in what is looking to be a saturated market?

The answer is they can’t! We’ve sold enough fucking smart speakers! Please stop; the planet is literally heating up. It’s just emblematic of the omnipresent drive to grow and profit and extract. Even the journalists writing the story take it for granted that this must happen, and won’t spend a single line arguing that Sonos doesn’t actually need to grow year over year. Maybe their business model wasn’t sustainable, and they should learn the hard way why unlimited growth never works. Instead, they will use their new subscription-ready app to squeeze the people who have already bought in. They have altered the deal; pray they do not alter it further.

For the next story, Mike Masnick over at Techdirt goes over how he actually uses AI. It’s an old post, but it came to my attention again via his newer one.

I must say I am fairly convinced that AI could be useful as a writing assistant, and it does make me want to try it. But then again, I have enough trouble writing when motivation is the only obstacle. Adding more steps to my process would undoubtedly prevent me from finishing.

I did have an occasion at a dinner party recently to actually talk to someone in education about their use of AI. I was, at first, taken aback that someone in my real life actually uses AI and has positive things to say about it. Their argument mainly revolves around the time factor. They are used to having a lot on their plate, and for them AI is useful for creating presentations, simplifying language (“write this so a 4th grader can understand it”), and otherwise helping them create new material quickly.

I mainly listened and questioned on this occasion, and did not get into my more… animated feelings. Still, I actually do hope to get into more discussions on AI in real life, and hopefully I will have the presence of mind to argue my full position convincingly.

That’s it for today, enjoy.

November 5, 2024

“Gender Classifier” redux

Filed under: Gender, Technology — DrMundane @ 12:49 am

As a follow-up to my previous post, I also noticed, while looking through that one AI company’s page, that they had a “gender classifier” for text too.

I had wanted to test their classifier, but was not about to upload my face or anyone else’s to some fucking AI company. But text? I can live with uploading a little of my own text (as a treat).

I started out with some fiction, something with dialogue and some action interspersed. In truth it was erotica, but I skipped any descriptions of actual intercourse. Honestly, I was just curious what it would make of it. The result? “Female 70.71% confident”.

Ok, what if I swing the other direction, toward nonfiction? An excerpt from a blog post or two on this site, say my last post (linked above): “Male 60.22% confident”. Trying another post, I get “Male 67.71% confident”.

The straight-ahead, nonfiction, opinion type of work seems to get the male classification. An artifact, I assume, of gender-normative source material, of the patriarchy in publishing, or of the biases of the humans who classified the dataset.

Trying one last example, this time an excerpt from my private writing (my diary/commonplace book often takes the form of notes in the Apple Notes app). It certainly leans more on my feelings, and not on straight-ahead opinion and references. The result for one entry? “Female 66.21% confident”.

Now I must admit the whole experiment here gave me some ill feelings, to say the least. Being classified did not sit right with me at all. It feels as though your self is being crushed back into one label or the other and that you have been reduced. But one more thought grabbed my interest.

What would it classify this writing as?

It is like gazing into a mirror, no, as if you can gaze through the eyes of another. How does anyone really take my work? What voice do they hear? I know, in my heart of hearts, that I should not care about such things. Even if I do, the AI will not be the eyes of another human. It is a statistical daydream.

And besides, I wrote the word patriarchy (now twice), so I imagine that should add 20% Female points right there.

Nevertheless, I put everything from this sentence to the top into the classifier.

Results: “Female 52.23% confident”.

So, a toss-up. But I had to know: what if I replaced patriarchy with, say, normativity? Does it make a difference?

I literally clapped my hands and laughed. “Male 50.42% confident”. So saying patriarchy twice adds exactly 2.65% of “female-ness”. lol.
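For anyone squinting at where the 2.65 comes from: treat both results as confidence-in-female, so “Male 50.42%” is really 49.58% female, then subtract. A trivial sketch in Python, using just the two scores above:

    # Both numbers from the two runs above, read as "percent confidence female".
    with_patriarchy = 52.23             # "Female 52.23% confident"
    without_patriarchy = 100 - 50.42    # "Male 50.42% confident" -> 49.58% female
    print(round(with_patriarchy - without_patriarchy, 2))  # 2.65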

fuck these people and their products. never let them take root and give them no quarter, no serious consideration.

P.S.

I thought suddenly, “What does 100% confident look like? What could one write to make it sure?”

How about “I am a woman” / “I am a man”? Surely very high confidence there.

Results: “I am a man”: “Male 55.23% confident”.

“I am a woman”: “Female 82.15% confident”.

I had a couple of other thoughts:

“I am a nonbinary” (I kept the grammar similar in the interests of fairness): “Female 83.79% confident”

“I am a trans man”: “Male 54.96% confident”

“I am a trans woman”: “Female 84.79% confident”

Of course it isn’t designed to interpret anyone actually stating their gender, but still. I hope it shows the hollow nature of the technology, how absolutely unfit for purpose it is. Let alone how the purpose itself is needless, et cetera; I’m looping here.

And I just had fun fucking around with it. Costs them money too, I imagine, to run the queries.
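If anyone wanted to run up that bill a little more systematically than pasting strings into a web form, a scripted probe might look roughly like the sketch below. To be clear, the endpoint URL, the request payload, and the response shape are all invented for illustration (the real classifier was just a form on their page), so take it as a sketch, not their actual API.

    import requests

    # Hypothetical endpoint; the real classifier was a web form, so this URL,
    # the payload, and the response shape are stand-ins, not the company's API.
    CLASSIFIER_URL = "https://example.com/api/gender-classifier"

    TEST_STRINGS = [
        "I am a man",
        "I am a woman",
        "I am a nonbinary",
        "I am a trans man",
        "I am a trans woman",
    ]

    for text in TEST_STRINGS:
        resp = requests.post(CLASSIFIER_URL, json={"text": text}, timeout=10)
        resp.raise_for_status()
        result = resp.json()  # assumed shape: {"label": "Female", "confidence": 84.79}
        print(f'{text!r}: {result["label"]} {result["confidence"]:.2f}% confident')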
