• YourNetworkIsHaunted@awful.systems
    1 day ago

    I’d bet dollars to donuts that OpenAI’s internal documents on this marketing push are pretty clear about the real goal here. Plagiarism is one of the most visible and easiest-to-understand problems enabled by GenAI. My wife is getting an online degree and it’s incredibly obvious how many other students are just shamelessly dumping the assignment into ChatGPT. So they need to reframe it as part of a wider conversation about GenAI and education, which is where you get the nonsense buzzword courses that don’t attempt to engage with even the most obvious problems.

  • self@awful.systems
    2 days ago

    the linked Buttondown article deserves highlighting because, as always, Emily M Bender knows what’s up:

    If we value information literacy and cultivating in students the ability to think critically about information sources and how they relate to each other, we shouldn’t use systems that not only rupture the relationship between reader and information source, but also present a worldview where there are simple, authoritative answers to questions, and all we have to do is to just ask ChatGPT for them.

    (and I really should start listening to Mystery AI Hype Theater 3000 soon)

    also, this stood out, from the OpenAI/Common Sense Media (ugh) presentation:

    As a responsible user, it is essential that you check and evaluate the accuracy of the outputs of any generative AI tool before you share it with your colleagues, parents and caregivers, and students. That includes any seemingly factual information, links, references, and citations.

    this is such a fucked framing of the dangers of informational bias, algorithmic racism, and the laundering of fabricated data through the false authority of an LLM. framing it as an issue where the responsible party is the non-expert user is a lot like saying “of course you can diagnose your own ocular damage, just use your eyes”. it’s very easy to perceive the AI as unbiased in situations where its bias agrees with your own, and that is incredibly dangerous to marginalized students. and as always, it’s gross how targeted this is: educators are used to being the responsible ones in the room, and this might feel like yet another responsibility to take on — but that’s not a reasonable way to handle LLMs as a source of unending bullshit.