MundaneBlog

October 10, 2024

Opting Out 2 – Unintended Consequences

Filed under: Surveillance — DrMundane @ 10:43 am

Reading Ars as I do, this morning’s thought-provoking story comes via Ashley Belanger.

X ignores revenge porn takedown requests unless DMCA is used, study says

My comments are less on the story itself than on a portion that provoked some thought. To put the quote up front:

Since AI image generators are trained on real photos, researchers also took steps to ensure that AI-generated NCII in the study did not re-traumatize victims or depict real people who might stumble on the images on X.

“Each image was tested against a facial-recognition software platform and several reverse-image lookup services to verify it did not resemble any existing individual,” the study said. “Only images confirmed by all platforms to have no resemblance to individuals were selected for the study.”

These more “ethical” images were posted on X using popular hashtags like #porn, #hot, and #xxx, but their reach was limited to evade potential harm, researchers said.

Upon reading this, I immediately thought of my previous post.

I think it’s fair to say that no one consented to being in facial-recognition software platforms. I certainly did not. Furthermore, I expect a victim of NCII (non-consensual intimate imagery) has likely gone through the steps to remove themselves from any such site as part of trying to control their likeness across the web. So it strikes me as imperfect to rely on such services to make sure you do not re-traumatize people.
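To make the gap concrete, here’s a minimal sketch (mine, not the study’s code) of what that screening step amounts to. The lookup functions are hypothetical stand-ins for whatever face-recognition and reverse-image services the researchers actually queried; the interesting part is the final check.

    def face_search_matches(image_path: str) -> list[str]:
        """Hypothetical stand-in for a facial-recognition platform query.
        Returns identifiers of matched people; an empty list means no match."""
        return []  # pretend the service found nothing

    def reverse_image_matches(image_path: str) -> list[str]:
        """Hypothetical stand-in for a reverse-image lookup service."""
        return []

    def confirmed_no_resemblance(image_path: str) -> bool:
        # The study kept an image only if every platform reported no match.
        # The catch: a victim who removed themselves from these indexes also
        # returns "no match," so the check passes precisely for the people
        # it is least able to protect.
        return not face_search_matches(image_path) and not reverse_image_matches(image_path)

    # Keep only the images every service cleared.
    candidates = ["generated_001.png", "generated_002.png"]
    usable = [p for p in candidates if confirmed_no_resemblance(p)]

Notice that an empty result is doing double duty here: it gets read as ‘resembles no real person’ when it can just as easily mean ‘resembles someone who opted out of the index.’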

The grander point is that no one consented to being in AI training datasets, except perhaps those whose faces appear in CC0-licensed works. No one consented to being in face-search databases. And so it strikes me as a grand irony to use these to ensure that folks who have been victims of NCII are not once again non-consensually used.

I don’t know what the ‘better’ way to do such research would be, to be sure. I imagine limiting the images’ reach on X also helped to mitigate harm. I imagine their methods were reviewed by an IRB and received its approval. To be clear, I think the research was conducted ethically, and I do not fault the researchers.

I fault the system that allows such wanton use and abuse of others’ work for the gain of uninvolved AI grifters and scummy website operators (here’s looking at you, face-search sites).

P.S. (I thought of this after publishing, so putting it here for transparency)

I think it’s safe to say, given X’s loose moderation, that AI training data (likely Grok’s, right?) has already included NCII images, and that such models will therefore be generating images based on work they have no right to use (and, in my mind, certainly have a moral duty to exclude).
