the linked Buttondown article deserves highlighting because, as always, Emily M. Bender knows what’s up:
If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn't use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is to just ask ChatGPT for them.
(and I really should start listening to Mystery AI Hype Theater 3000 soon)
also, this stood out, from the OpenAI/Common Sense Media (ugh) presentation:
As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.
this is such a fucked framing of the dangers of informational bias, algorithmic racism, and the laundering of fabricated data through the false authority of an LLM. framing it as an issue where the responsible party is the non-expert user is a lot like saying “of course you can diagnose your own ocular damage, just use your eyes”. it’s very easy to perceive the AI as unbiased when its bias agrees with your own, and that is incredibly dangerous to marginalized students. and as always, it’s gross how targeted this is: educators are used to being the responsible ones in the room, so this might feel like yet another responsibility to take on. but that’s not a reasonable way to handle LLMs as a source of unending bullshit.