MundaneBlog

November 16, 2024

Links you should read – 2024-11-15

Filed under: Daily Links, Surveillance, Technology — DrMundane @ 3:02 am

To start out the roundup, Karl Bode at Techdirt on Canada’s new right-to-repair law. See also Doctorow on Pluralistic covering the same story, for some further explanation. Controlling our devices is the first step to controlling our data, and in an America that is growing more authoritarian, we must protect ourselves and our data. Right to repair also means a right to disassemble, understand, and verify. Only when we fully know our devices can we fully trust them.

Following up on that, a guide from WIRED on protecting your privacy. Small steps.

Back to government surveillance, with a 404 Media piece on the government’s use of location data (warrant required? Unclear). Even granting that current law requires a warrant, I imagine there will soon be a federal judiciary willing to chip away at the Fourth Amendment. How else will we find the (immigrants/trans people/journalists/assorted enemies within)? I worry that I put too fine a point on these concerns. But then again, I would rather be wrong and have improved my security anyway. A ‘hope to be wrong and plan to be right’ kind of deal.

Hopping over to the archive of links on Pinboard for something fun (but a long read): Closing Arguments of Mr. Rothschild in Kitzmiller v. Dover. My favorite quote?

His explanation that he misspoke the word “creationism” because it was being used in news articles, which he had just previously testified he had not read, was, frankly, incredible. We all watched that tape. And per Mr. Linker’s suggestion that all the kids like movies, I’d like to show it one more time. (Tape played.) That was no deer in the headlights. That deer was wearing shades and was totally at ease.

What a line. *chef’s kiss*

November 13, 2024

Links You Should Read – 2024-11-12

Filed under: Daily Links, Gender, Surveillance, Technology — DrMundane @ 12:59 am

Starting out with one from WIRED, on facial recognition. Never forget that the terrain has changed, for protest in the streets and online alike. I would certainly recommend that anyone take steps to protect themselves moving forward. I am interested in the intersection of ‘dazzle makeup’, gender classification, and facial recognition in general. Genderfuck = AI protection? One can only hope.

Bonus link? The dazzle makeup one above. That machine-vision.no website seems neat; looking at how we conceptualize AI and machine vision in media can tell us a lot about our worries and fears as a society. Back on course a little: dazzle makeup is one of those things I really wish were more true than it is. You can trick the AI, sure, but any human will pick out your face and track you that way. You become a person of interest real quick when you hide in that way. You need to blend, I think. Still, a person can dream.
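
If you want to test a look yourself without handing your face to any company, you can run a detector locally. A minimal sketch, assuming the open-source Python face_recognition library; the filenames are placeholders, and everything stays on your own machine:

    # Does a detector still find a face in a "dazzled" photo?
    # Assumes the open-source face_recognition library (pip install face_recognition).
    # Runs entirely locally -- nothing is uploaded anywhere.
    import face_recognition

    for path in ["bare_face.jpg", "dazzle_face.jpg"]:  # placeholder filenames
        image = face_recognition.load_image_file(path)
        boxes = face_recognition.face_locations(image)  # (top, right, bottom, left) per face
        print(f"{path}: {len(boxes)} face(s) detected")

Fooling one open-source detector says nothing about the commercial systems out there, of course, which is rather the point.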

Next up, one on pornography from Techdirt. In a Project 2025, Christian-nationalist country, ‘pornography’ will not be limited to actual materials for sexual pleasure. It will be used as a label to restrict and remove LGBTQ+ material. It is literally the Moms for Liberty playbook, now coming to a federal government near you.

Wrapping up my links, read and subscribe to Aftermath!

November 5, 2024

“Gender Classifier” redux

Filed under: Gender, Technology — DrMundane @ 12:49 am

As a follow-up to my previous post: I also noticed, while looking through the one AI company’s page, that they had a “gender classifier” for text too.

I had wanted to test their classifier, but was not about to upload my face or anyone else’s to some fucking AI company. But text? I can live with uploading a little of my own text (as a treat).

I started out with some fiction, something with dialogue and some action interspersed. In truth it was erotica, but I skipped any actual descriptions of intercourse. Honestly, I was just interested in what it would make of it. The result? “Female 70.71% confident”.

OK, what if I swing the other direction, nonfiction? An excerpt of a blog post or two from this site, say my last post (linked above). “Male 60.22% confident”. Trying another post, I get “Male 67.71% confident”.

The straight-ahead, nonfiction or opinion type of work seems to get the male classification. An artifact, I assume, of the gender-normative source material and the patriarchy in publishing, or of the biases of the humans who labeled the dataset.

Trying one last example, this time an excerpt from my private writings (my diary/commonplace book often takes the form of notes in the Apple Notes app). It certainly leans more on my feelings, and not on straight-ahead opinion and references. Results for one entry? “Female 66.21% confident”.
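
(For transparency: I was just pasting text into their web form. If one wanted to script the same experiment, it might look like the sketch below; the endpoint URL, payload, and response shape are pure invention on my part, since I have no idea what their actual API looks like.)

    # Hypothetical sketch of scripting the experiment above.
    # The URL and JSON shapes are invented stand-ins, NOT the company's real API.
    import requests

    def classify(text):
        resp = requests.post(
            "https://example.com/api/gender-classifier",  # placeholder endpoint
            json={"text": text},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()  # assumed shape: {"label": "Female", "confidence": 70.71}
        return data["label"], data["confidence"]

    # Placeholder files holding the excerpts described above
    for name in ["fiction_excerpt.txt", "blog_excerpt.txt", "diary_excerpt.txt"]:
        with open(name) as f:
            label, confidence = classify(f.read())
        print(f"{name}: {label} {confidence:.2f}% confident")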

Now I must admit the whole experiment gave me some ill feelings, to say the least. Being classified did not sit right with me at all. It feels as though your self is being crushed back into one label or the other, as though you have been reduced. But one more thought grabbed my interest.

What would it classify this writing as?

It is like gazing into a mirror, no, as if you can gaze through the eyes of another. How does anyone really take my work? What voice do they hear? I know, in my heart of hearts, that I should not care about such things. Even if I do, the AI will not be the eyes of another human. It is a statistical daydream.

And besides, I wrote the word patriarchy (now twice), so I imagine that should add 20% Female points right there.

Nevertheless, I put everything from this sentence to the top into the classifier.

Results: “Female 52.23% confident”.

So a toss-up. But I had to know: what if I replaced patriarchy with, say, normativity? Does it make a difference?

I literally clapped my hands and laughed. “Male 50.42% confident”. So saying patriarchy twice adds exactly 2.65 points of “female-ness” (Female 52.23% versus the Female 49.58% implied by Male 50.42%). lol.
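
(Scripted with the hypothetical classify() sketch from above, the word-swap test is a one-liner:)

    # Word-swap test, reusing the classify() sketch above.
    with open("this_post.txt") as f:  # placeholder: the text of this post so far
        post = f.read()
    print(classify(post))                                        # got: Female 52.23%
    print(classify(post.replace("patriarchy", "normativity")))   # got: Male 50.42%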

fuck these people and their products. never let them take root and give them no quarter, no serious consideration.

P.S.

I thought suddenly, “What does 100% confident look like? What could one write to make it sure?”

How about “I am a woman/I am a man”? Surely very high confidence there.

Results: “I am a man”: “Male 55.23% confident”.

“I am a woman”: “Female 82.15% confident”.

I had a couple of other thoughts:

“I am a nonbinary”: (I kept the grammar similar in the interests of fairness) “Female 83.79% confident”

“I am a trans man”: “Male 54.96% confident”

“I am a trans woman”: “Female 84.79% confident”
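
(Or, as a loop over the same hypothetical classify() sketch from above:)

    # Probing with bare statements of gender, via the classify() sketch above.
    statements = ["I am a man", "I am a woman", "I am a nonbinary",
                  "I am a trans man", "I am a trans woman"]
    for s in statements:
        label, confidence = classify(s)
        print(f"{s!r}: {label} {confidence:.2f}% confident")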

Of course it isn’t designed to interpret anyone actually stating their gender, but still. I hope this shows the hollow nature of the technology, how absolutely unfit for purpose it is. Let alone how its purpose is needless, et cetera; I’m looping here.

And I just had fun fucking around with it. Costs them money too, I imagine, to run the queries.

October 10, 2024

Opting Out 2 – Unintended Consequences

Filed under: Surveillance — DrMundane @ 10:43 am

Reading Ars as I do, this morning’s thought-provoking story is via Ashley Belanger.

X ignores revenge porn takedown requests unless DMCA is used, study says

My comments are less on the story itself, more on a portion that provoked some thought. To put the quote up front:

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to evade potential harm, researchers said.

Upon reading this, I immediately thought of my previous post.

I think it’s fair to say that no one consented to being included in facial-recognition software platforms. I certainly did not. Furthermore, I expect a victim of NCII (non-consensual intimate imagery) has likely gone through the steps to remove themselves from any such site, as part of trying to control their likeness across the web. So it strikes me as imperfect to rely on such services to make sure you do not re-traumatize people.

The grander point is that no one consented to being in the AI’s dataset, or perhaps only those whose faces appear in CC0-licensed works did. No one consented to being in face search databases. And so it strikes me as a grand irony to use these to ensure that folks who have been victims of NCII are not once again used without their consent.

I don’t know what the ‘better’ way to do such research is, to be sure. I imagine their limiting of reach on X also helped mitigate harm. I imagine their methods were reviewed by an IRB and received approval. I think the research was conducted ethically, and I do not fault the researchers, to be clear.

I fault the system that allows such wanton use and abuse of others’ work for the gain of uninvolved AI grifters and scummy website operators (here’s looking at you, face search sites).

P.S. (I thought of this after publishing, so putting it here for transparency)

I think it’s safe to say, given X’s loose moderation, that its AI (likely Grok, right?) has already ingested NCII and will therefore be generating images based on work it has no right to use (and, in my mind, certainly has a moral duty to exclude).

October 5, 2024

Facial Recognition – Who is allowed to Opt Out?

Filed under: Surveillance — DrMundane @ 1:41 am

Reading Ars Technica this morning: an article on doxing everyone (everyone!) with Meta’s (née Facebook) smart glasses. The article is of great import, but I headed over to the linked paper that detailed the process. The authors, AnhPhu Nguyen and Caine Ardayfio, were kind enough to provide links with instructions for removing your information from the databases involved. Although I imagine it becomes a war of attrition as they scrape and add your data back.

Naturally, I followed these links to get an idea of how one would go about removing their data from these services. I was particularly interested in the language on the one service, FaceCheck.id.

To quote the part that stuck out to me:

We reserve the right not to remove photos of child sex offenders, convicted rapists, and other violent criminals who could pose physical harm to innocent people.

Now this is terribly interesting to me. It makes clear the difference between what they purport to sell, or be, or give, and what they actually are. In fact, the contrast is enhanced if you only read down the page a little:

DISCLAIMER: For educational purposes only. All images are indexed from public, readily available web pages only.

Ah, so it’s for educational purposes, but they reserve the right to make sure that some people remain visible, ostensibly in the interest of ‘public safety’. They, of course, are not the courts. They have no information that allows them to assess who presents a risk to others, and even if they did, a private entity has no right to make that call. Is this valuable in actually protecting people? I am not sold on that. If someone poses a danger, then by all means let the court’s sentencing and probation reflect that.

What is the education here? Should we profile based on those who have been caught? What have we learned through this venture? Surely such a high-minded educational site will have peer-reviewed research advanced through this educational database.

What they do have, what they sell, are the lurid possibilities. Sell the darkness and sell knowing. How can you know if someone is a child sex offender? How can you know if your nice neighbor once beat a man? What if? What if? What if?

You can know who’s a rapist or a ‘violent criminal’. You know your child will be safe, since you check every person they meet. Safety is for sale. Never mind that this likely isn’t the best way to protect children. Never mind that they served their sentence; they were violent criminals once. Never mind the racial bias of the justice system. Never mind a case of mistaken identity on these services’ part.

They veil these much baser interests, the interest in profiting off of speculation, off of sowing distrust and fear, in the cloak of public safety and moral responsibility. Furthermore, the entire public is caught in their dragnet.

I take it as a solid assumption that the “shitty tech adoption curve” is true.

Here is the shitty tech. Who isn’t allowed to opt out now?

Who is next?
