MundaneBlog

November 16, 2024

Links You Should Read – 2024-11-15

Filed under: Daily Links, Surveillance, Technology — DrMundane @ 3:02 am

To start out the roundup, Karl Bode at Techdirt on Canada’s new right-to-repair law. See also Doctorow on Pluralistic covering the same story for some further explanation. Controlling our devices is the first step to controlling our data, and in an America that is growing more authoritarian, one must protect oneself and one’s data. Right to repair also means a right to disassemble, understand, and verify. Only when we fully know our devices can we fully trust them.

Following up on that, a guide from WIRED on protecting your privacy. Small steps.

Back to government surveillance, with a 404 Media piece on the government’s use of location data (warrant required? Unclear). Even assuming that under current law a warrant is required, I imagine there will soon be a federal judiciary willing to chip away at the Fourth Amendment. How else will we find the (immigrants/trans people/journalists/assorted enemies within)? I worry that I put too fine a point on these concerns. But then again, I would prefer to be wrong and still advancing security. A ‘hope to be wrong and plan to be right’ kind of deal.

Hopping over to the archive of links on Pinboard for something fun (but a long read): Closing Arguments of Mr. Rothschild in Kitzmiller v. Dover. My favorite quote?

His explanation that he misspoke the word “creationism” because it was being used in news articles, which he had just previously testified he had not read, was, frankly, incredible. We all watched that tape. And per Mr. Linker’s suggestion that all the kids like movies, I’d like to show it one more time. (Tape played.) That was no deer in the headlights. That deer was wearing shades and was totally at ease.

What a line. *chef’s kiss*

November 13, 2024

Links You Should Read – 2024-11-12

Filed under: Daily Links, Gender, Surveillance, Technology — DrMundane @ 12:59 am

Starting out with one from WIRED, on facial recognition. Never forget that the terrain has changed for protest, both in the streets and online. I would certainly recommend anyone take steps to protect themselves moving forward. I am interested in the intersection of ‘dazzle makeup’, gender classification, and facial recognition in general. Genderfuck = AI protection? One can only hope.

Bonus link? The dazzle makeup one above. That machine-vision.no website seems neat; looking at how we conceptualize AI, machine vision, and the like in media can tell us a lot about our worries and fears as a society. Back on course a little: dazzle makeup is one of those things I really wish worked better than it does. You can trick the AI, sure, but any human will pick out your face and track you that way. You become a person of interest real quick when you hide like that. You need to blend, I think. Still, a person can dream.
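For a concrete sense of what dazzle is fighting, here is a minimal sketch of the classic sort of face detector the makeup was designed to confuse, assuming OpenCV and its bundled Haar cascade (the image filename is hypothetical). Detectors like this key on light/dark gradients across the face, which is exactly what the paint disrupts:

import cv2

# OpenCV ships a stock frontal-face Haar cascade; load it from
# the path bundled with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("street_photo.jpg")  # hypothetical input image
if img is None:
    raise FileNotFoundError("could not read street_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns a bounding box for every face found.
# Dazzle patterns break the light/dark gradients the cascade keys
# on, so a painted face may simply come back with zero boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"faces detected: {len(faces)}")

Of course, zero boxes from an old-school cascade says nothing about the deep-learning detectors actually deployed today, which is rather my point above: the AI may miss you, but the human reviewing the feed will not.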

Next up, one on pornography from Techdirt. In a Project 2025, Christian-nationalist country, ‘pornography’ will not be limited to actual materials for sexual pleasure. It will be used as a label to restrict and remove LGBTQ+ material. It is literally the Moms for Liberty playbook, now coming to a federal government near you.

Wrapping up my links, read and subscribe to Aftermath!

October 10, 2024

Opting Out 2 – Unintended Consequences

Filed under: Surveillance — DrMundane @ 10:43 am

Reading Ars as I do, this morning’s thought-provoking story is via Ashley Belanger.

X ignores revenge porn takedown requests unless DMCA is used, study says

My comments are less on the story itself and more on a portion that provoked some thought. To put the quote up front:

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to evade potential harm, researchers said.

I was immediately thinking of my previous post upon reading this.

I think it’s fair to say that no one consented to being in facial-recognition software platforms. I certainly did not. Furthermore, I expect a victim of NCII (non-consensual intimate imagery) has likely gone through the steps to remove themselves from any such site, as part of trying to control their likeness across the web. So it strikes me as imperfect to rely on such services to make sure you do not re-traumatize people.
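To make the mechanics concrete: a resemblance check like the one the study describes generally amounts to comparing face embeddings against a gallery of known faces and rejecting anything within a match threshold. A minimal sketch, assuming the open-source face_recognition library and hypothetical filenames (the study does not name its actual platforms or thresholds):

import face_recognition

# Embed the AI-generated face.
generated = face_recognition.load_image_file("generated_face.jpg")
encodings = face_recognition.face_encodings(generated)
if not encodings:
    raise ValueError("no face found in generated image")

# Gallery of real people's photos (hypothetical filenames).
gallery = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in ["person_a.jpg", "person_b.jpg"]
]

# 0.6 is the library's conventional match cutoff; anything closer
# counts as "resembles a real person" and the image is discarded.
matches = face_recognition.compare_faces(gallery, encodings[0], tolerance=0.6)
print("reject: resembles someone" if any(matches) else "keep: no resemblance")

Which makes the flaw plain: a check like this can only flag people who are in the gallery. Anyone who opted out, as a victim of NCII very plausibly has, is invisible to it.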

The grander point is that no one consented to being in the AI’s training dataset, or perhaps only those whose faces appear in CC0-licensed works did. No one consented to being in face search databases. And so it strikes me as a grand irony to use these to ensure folks who have been victims of NCII are not once again non-consensually used.

I don’t know what the ‘better’ way to do such research is, to be sure. I imagine their actions in limiting reach on X also helped to mitigate harm to people. I imagine their methods were reviewed by an IRB and received its approval. I think the research was conducted ethically, and I do not fault the researchers, to be clear.

I fault the system that allows such wanton use and abuse of others’ work for the gain of uninvolved AI grifters and scummy website operators (here’s looking at you, face search sites).

P.S. (I thought of this after publishing, so putting it here for transparency)

I think it’s safe to say, given X’s loose moderation, that the AI (likely Grok, right?) has already ingested NCII images and will therefore be generating images based on work it has no right to use (and certainly has a moral duty to exclude, in my mind).

October 5, 2024

Facial Recognition – Who Is Allowed to Opt Out?

Filed under: Surveillance — DrMundane @ 1:41 am

Reading Ars Technica this morning, an article on doxing everyone (everyone!) with Meta’s (née Facebook) smart glasses. The article is of great import, but I headed over to the linked paper that detailed the process. The authors, AnhPhu Nguyen and Caine Ardayfio, were kind enough to provide links giving instructions on removing your information from the linked databases, though I imagine it becomes a war of attrition as they scrape and add your data back.

Naturally, I followed these links to get an idea of how one would go about removing one’s data from these services. I was particularly interested in the language of one service, FaceCheck.id.

To quote the part that stuck out to me:

We reserve the right not to remove photos of child sex offenders, convicted rapists, and other violent criminals who could pose physical harm to innocent people.

Now this is terribly interesting to me. It makes clear the difference between what they purport to sell, or be, or give, and what they actually are. In fact, the contrast is enhanced if you read just a little further down the page:

DISCLAIMER: For educational purposes only. All images are indexed from public, readily available web pages only.

Ah, so it’s for educational purposes, but they reserve the right to make sure that some people remain visible, ostensibly in the interests of ‘public safety’. They, of course, are not the courts. They have no information that allows them to assess who presents a risk to others, and even if they did, a private entity has no right to make that call. Is this valuable in actually protecting people? I am not sold on that. If someone poses a danger, then by all means let the court’s sentencing and probation reflect that.

What is the education here? Should we profile based on those who have been caught? What have we learned through this venture? Surely such a high-minded educational site will have peer-reviewed research that is advanced through this educational database.

What they do have, what they sell, are the lurid possibilities. Sell the darkness and sell knowing. How can you know if someone is a child sex offender? How can you know if your nice neighbor once beat a man? What if? What if? What if?

You can know who’s a rapist or a ‘violent criminal’. You know your child will be safe, since you check every person they meet. Safety is for sale. Never mind that this likely isn’t the best way to protect children. Never mind the fact that they served their sentence; they were violent criminals once. Never mind the racial bias of the justice system. Never mind a case of mistaken identity on these services’ part.

They veil these much baser interests, the interest in profiting off of speculation, off sowing distrust and fear, in the cloak of public safety and moral responsibility. Furthermore, the entire public is caught in their dragnet.

I take it as a solid assumption that the ‘shitty tech adoption curve’ is true.

Here is the shitty tech. Who isn’t allowed to opt out now?

Who is next?
