MundaneBlog

December 3, 2024

Links you Should Read 2024-12-2

Filed under: AI,Consumerism,Daily Links,Technology — DrMundane @ 12:08 am

Starting with one from Wired this morning.

Sonos appears poised to go down the enshittification rabbit hole, if their fortunes do not turn. A particularly galling quote:

And while the overall speakers per household were actually up to 3.08 from 3.05 last year, with a slowing new user base, how can Sonos continue to make money in what is looking to be a saturated market?

The answer is they can’t! We’ve sold enough fucking smart speakers! Please stop; the planet is literally heating up. It’s just emblematic of the omnipresent drive to grow and profit and extract. Even the journalists writing the story take it for granted that this must happen, and will not spend a single line arguing that Sonos doesn’t actually need to grow year over year. Maybe their business model wasn’t sustainable, and they should learn the hard way why unlimited growth never works. Instead, they will use their new subscription-ready app to squeeze the people who have already bought in. They have altered the deal; pray they do not alter it further.

For the next story, Mike Masnick over at Techdirt goes over how he actually uses AI. It’s an old post, but it came to my attention again via his newer post.

I must say I am fairly convinced that AI could be useful as a writing assistant. It does make me want to try it. But then again, I have enough trouble writing when it comes down to just finding the motivation. Adding more steps to my process would undoubtedly prevent me from finishing.

I did have an occasion at a dinner party recently to actually talk to someone in education about their use of AI. I was, at first, taken aback that someone in my real life actually uses AI and has positive things to say about it. Their argument mainly revolves around the “time” factor. They are used to having a lot on their plate, and for them AI is useful for creating presentations, simplifying language (“write this so a 4th grader can understand it”), and otherwise helping them create new material quickly.

I mainly listened and questioned on this occasion, and did not get into my more… animated feelings. Still, I actually do hope to get into more discussions on AI in real life, and hopefully I will have the presence of mind to argue my full position convincingly.

That’s it for today, enjoy.

November 24, 2024

ChatGPT in the classroom – one thought experiment

Filed under: AI,Technology — DrMundane @ 2:08 pm

I was reading Mystery AI Hype Theatre 3000 this morning, on ChatGPT having no place in the classroom, and tried out a little thought experiment. It goes something like this:

Let us assume we are in, say, an AP Literature class. I imagine the hardest part of grading such a class is reading and scoring several essays for each student over the year. I know for sure my terrible longhand cursive was a painful thing for my poor teacher to read. But what about an AI? Can an AI (unable to reach any factual conclusion or to reason, only to generate statistically) be used to speed this up?

I think the major problem is the inability to actually understand the students’ writing, but let us take for granted that improvements to large language models will somehow overcome this with sufficient data and computation. That seems to be the claim of Altman and the others, but to be clear, they will not.

We know that these LLMs currently scrape data from the public internet, so we can safely assume they have a robust source of information on AP Literature questions, books, themes, etc., from forums and other sources seeking to help students. Much of that writing will be done by students themselves, so another bonus is that it is writing representative of the population. So far that seems reasonable.

But there is already one problem. The writing is not representative of the whole population of AP Literature students, only of those with access to these online resources and/or the willingness to engage with them. I was a student who would never workshop my ideas or practice my writing online. I was simply too private for that sort of thing. The dataset certainly would never include my writing.

I would therefore argue that any such LLM grading students’ work would be inherently biased against students whose writing does not line up with the LLM’s source material. It knows which essays deserve a 5 and which deserve a 1 based on its dataset. Based on statistics. It does not understand the rubric or the intent, and is therefore unable to fairly rate writing that is new (to it) but correct. You will therefore teach students to write like the corpus of text ingested by the model, not to find their own voice and style. You teach them to look at the prompt and come up with the ‘correct’ answer, not an answer that necessarily comes from their own experience and understanding of the literature in question.
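To make the statistical point concrete, here is a deliberately crude toy sketch (purely illustrative, nothing like a real LLM): a grader that scores an essay by how much its vocabulary overlaps with essays it has already seen marked ‘5’. An original voice using words absent from that corpus scores poorly no matter how good the argument is.

```python
# Toy illustration of corpus-matching bias. The vocabulary set and the
# two sample sentences are invented for this sketch; no real grading
# system works this simply, but the failure mode is the same in spirit.

def overlap_score(essay: str, corpus_vocab: set[str]) -> float:
    """Fraction of the essay's words that also appear in the '5' corpus."""
    words = set(essay.lower().split())
    return len(words & corpus_vocab) / len(words)

# Words our hypothetical grader has learned to associate with '5' essays.
five_corpus_vocab = {"theme", "symbolism", "narrator", "conflict", "motif"}

conventional = "the theme and symbolism drive the narrator toward conflict"
original = "reading through a queer lens reframes desire and identity here"

print(overlap_score(conventional, five_corpus_vocab))  # high overlap
print(overlap_score(original, five_corpus_vocab))      # little to no overlap
```

The second essay could be the more insightful one; the scorer has no way to know, because it only measures resemblance to what it has already ingested.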

I think this alone makes the use of AI impossible in the classroom, owing to its discriminatory potential.

Another example that springs to mind: what about queer analysis? What about LGBTQ+ students? Will their viewpoints and experiences be reflected in the corpus? I doubt it highly. Any such student may write a brilliant analysis of a book through a queer lens and it will matter not, because statistically it doesn’t match what a ‘5’ is supposed to look like. It uses all sorts of words that probably aren’t associated with ‘5’ essays; the LLM may even deem them ‘profane’. It is therefore not a ‘5’, in the view of the LLM.

I think these two thought experiments illustrate why I believe, beyond all the technical problems and overselling, that even a GPT that lives up to the hype and can be made factually correct will never be suitable for any evaluative work. Used to such an end, the AI encourages normative expression and discourages breaking boundaries. It truly discourages real feeling and art.
