Starting out with one from Wired, on facial recognition. Never forget that the terrain has changed for protest, both in the streets and online. I would certainly recommend anyone take steps to protect themselves moving forward. I am interested in the intersection of ‘dazzle makeup’, gender classification, and facial recognition in general. Genderfuck = AI protection? One can only hope.
Bonus link? The dazzle makeup one above. That machine-vision.no website seems neat; looking at how we conceptualize AI, machine vision, and the like in media can tell us a lot about our worries and fears as a society. Back on course a little: dazzle makeup is one of those things I really wish were more true than it is. You can trick the AI, sure, but any human will pick out your face and track you that way. You become a person of interest real quick when you hide in that way. You need to blend, I think. Still, a person can dream.
Next up, one on pornography from Techdirt. In a Project 2025, Christian nationalist country, ‘pornography’ will not be limited to actual materials for sexual pleasure. It will be used as a label to restrict and remove LGBTQ+ material. It is literally the Moms for Liberty playbook, now coming to a federal government near you.
Wrapping up my links, read and subscribe to Aftermath!
Amicus Brief in Parents Protecting Our Children, UA v. Eau Claire Area School District, joined by 16 states. “Parents’ Rights” (to be transphobic, or to control their children). I may write more on this.
see also: gender classifier. I can report from experience how the algorithms they develop seem to lean into pushing content once they have decided you are a {thing} that gets engagement. I can also report that it’s uncomfortable when they get it wrong and keep pushing.
As Goode says:
“…In both cases, I’m supposed to tell the algorithms who I am. I’m supposed to do the work. I’m supposed to swipe more. I’ll be so much better off if I do. And so will they.”
We lose when we don’t let the algorithms know who we are, but we sure as shit also lose when we do. A double bind, right?
Reading the always wonderful “Pivot to AI” by Amy Castor and David Gerard, and they link to a great 2019 piece by Os Keyes, “The Body Instrumental”, which was new to me and enjoyable. Well, enjoyable in that particular way that any sufficiently prescient and worrying thing can be enjoyable. I have been thinking as of late about heteronormativity, so both articles were a great coincidence.
I can’t restate any point not already sufficiently covered by the two articles above, but it really does strike me that any such “gender determining” AI (perhaps “sex determining” is really their goal in the end, reflecting the binary and exclusive way they see sex and gender, not that either is as binary as they think) will be inescapably heteronormative. Perhaps “gender normative” is the better term, since I am speaking mostly of gender and expression rather than the relational sense, although I take heteronormativity to cover both. I cannot claim to be an experienced scholar of gender, so forgive me if my terminology is off; I was just reading Sex in Public, so, like, 1998? Still very much a relevant work in my mind, but my cognition is biased towards that which I can remember in the moment.
The training data is classified first by humans, who will have to fit each photo into a binary category, man or woman. Most of the data will likely be of people who “pass” or perform gender in the normatively expected way, simply because such images dominate the sources: movies, photos, public domain images, et cetera. By volume alone the normative wins out, and so any such AI will be biased in its favor. It will be biased to fit people into these two categories.
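To make that concrete, here is a toy sketch, entirely my own and not anyone’s real pipeline: one made-up “presentation” feature, forced binary labels, and a corpus dominated by normative presentations. The classifier ends up confident about the norm and is reduced to a near coin flip, still expressed as a binary answer, for everyone else.

```python
# Toy illustration (not any real system) of how binary labels plus a
# normatively skewed dataset produce a classifier that defaults to the norm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each face reduces to a single "presentation" feature.
# Labelers were forced to pick "man" (0) or "woman" (1), and the corpus is
# dominated by people who present in the expected, normative way.
n_norm, n_nonconforming = 9500, 500
X_norm = np.concatenate([
    rng.normal(-2, 1, n_norm // 2),   # normative "man" presentations
    rng.normal(+2, 1, n_norm // 2),   # normative "woman" presentations
])
y_norm = np.array([0] * (n_norm // 2) + [1] * (n_norm // 2))

# Gender-nonconforming presentations cluster in between, but the labelers
# still had to shove each one into one of the two boxes.
X_nc = rng.normal(0, 1, n_nonconforming)
y_nc = rng.integers(0, 2, n_nonconforming)

X = np.concatenate([X_norm, X_nc]).reshape(-1, 1)
y = np.concatenate([y_norm, y_nc])

clf = LogisticRegression().fit(X, y)

# Confident at the normative extremes, essentially guessing in the middle,
# yet always emitting one of exactly two answers.
print(clf.predict_proba([[-2.0], [2.0], [0.1]]).round(2))
```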
Turning to prognostication: who will be allowed to opt out? Gating access to a room or facility behind such an AI means that the non-normative, the queer, will be penalized. Even if one is notionally allowed to opt out, the process of doing so may very well lead to further stigmatization simply by virtue of being the different one.
As Keyes states: “We should focus on delegitimizing the technology altogether, ensuring it never gets integrated into society, and that facial recognition as a whole (with its many, many inherent problems) goes the same way.”
I could not have said it better or any earlier. You simply must read the whole article, as the portion on how the AI will reshape gender in its image is brilliant and gets to the very heart of not just the AI problem, but of problems of gender in society more broadly.
My comments are less on the story itself and more on a portion that provoked some thought. To put the quote up front:
Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.
“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”
These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to evade potential harm, researchers said.
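For what it’s worth, the screening step they describe reduces to a simple gate: keep an image only if every lookup service says it matches no one. Here is a minimal sketch of that idea, with entirely hypothetical service wrappers; none of the names below refer to real APIs.

```python
# Hypothetical sketch of an "only if no service finds a resemblance" gate.
from typing import Callable, Iterable, List


def screen_generated_images(
    images: Iterable[bytes],
    matchers: List[Callable[[bytes], bool]],
) -> List[bytes]:
    """Keep only images that *no* lookup service matches to a real person.

    Each matcher is assumed to return True if its service finds any
    resembling individual, and False otherwise.
    """
    approved = []
    for img in images:
        if not any(matcher(img) for matcher in matchers):
            approved.append(img)  # every platform agreed: no resemblance
    return approved


# Usage might look something like this (all callables are stand-ins):
# approved = screen_generated_images(
#     generated_images,
#     matchers=[face_search_service.has_match, reverse_image_lookup.has_match],
# )
```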
I was immediately thinking of my previous post upon reading this.
I think it’s fair to say that no one consented to being in facial-recognition software platforms. I certainly did not. Furthermore, I expect a victim of NCII (non-consensual intimate imagery) to have likely gone through the steps to remove themselves from any such site, as part of trying to control their likeness across the web. So it strikes me as imperfect to rely on such services to make sure you do not re-traumatize people.
The grander point is that no one consented to being in the AI’s dataset, or perhaps only those whose faces appear in CC0-licensed works did. No one consented to being in face search databases. And so it strikes me as a grand irony to use these to ensure folks who have been victims of NCII are not once again non-consensually used.
I don’t know what the ‘better’ way to do such research is, to be sure. I imagine their actions in limiting reach on X also helped to mitigate harm to people. I imagine their methods were reviewed by an IRB and received their approval. I think the research was conducted ethically, and do not fault the researchers, to be clear.
I fault the system that allows such wanton use and abuse of others’ work for the gain of uninvolved AI grifters and scummy website operators (here’s looking at you, face search sites).
P.S. (I thought of this after publishing, so putting it here for transparency)
I think it’s safe to say, given X’s loose moderation, that the AI (likely Grok, right?) has already trained on NCII images and will therefore be generating images based on work it has no right to use (and, in my mind, certainly has a moral duty to exclude).
Reading Ars Technica this morning, an article on doxing everyone (everyone!) with Meta’s (née Facebook) smart glasses. The article is of great import, but I headed over to the linked paper that detailed the process. The authors, AnhPhu Nguyen and Caine Ardayfio, were kind enough to provide links giving instructions on removing your information from the linked databases. Although I imagine it becomes a war of attrition as they scrape and add your data back.
Naturally I followed these links to get an idea of how one would go about removing their data from these services. I was particularly interested in the language on one service, FaceCheck.id.
To quote the part that stuck out to me:
We reserve the right not to remove photos of child sex offenders, convicted rapists, and other violent criminals who could pose physical harm to innocent people.
Now this is terribly interesting to me. It makes clear the difference between what they purport to sell, or be, or give, and what they actually are. In fact, the contrast is enhanced if you read just a little further down the page:
DISCLAIMER: For educational purposes only. All images are indexed from public, readily available web pages only.
Ah, so it’s for educational purposes, but they reserve the right to make sure that some people remain visible, ostensibly in the interests of ‘public safety’. They, of course, are not the courts. They have no information that allows them to assess who presents a risk to others, and even if they did, a private entity has no right to make that judgment. Is this valuable in actually protecting people? I am not sold on that. If someone poses a danger then by all means, let the court’s sentencing and probation reflect that.
What is the education here? Should we profile based on those who have been caught? What have we learned through this venture? Surely such a high-minded educational site will have peer-reviewed research advanced through this educational database.
What they do have, what they sell, are the lurid possibilities. Sell the darkness and sell knowing. How can you know if someone is a child sex offender? How can you know if your nice neighbor once beat a man? What if? What if? What if?
You can know who’s a rapist or a ‘violent criminal’. You know your child will be safe, since you check every person they meet. Safety is for sale. Never mind that this likely isn’t the best way to protect children. Never mind the fact that they served their sentence; they were violent criminals once. Never mind the racial bias of the justice system. Never mind a case of mistaken identity on these services’ part.
They veil these much baser interests, the interest in profiting off of speculation, off of sowing distrust and fear, in the cloak of public safety and moral responsibility. Furthermore, the entire public is caught in their dragnet.