this post was submitted on 13 Aug 2023
895 points (97.8% liked)

College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

[–] HexesofVexes@lemmy.world 10 points 1 year ago (2 children)

I'll field this because it does raise some good points:

It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions (there's a toy sketch of what I mean further down). Early on, when AI started cropping up in the news, I tested the validity of its answers:

  1. For topics aimed at 10- to 18-year-olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.

  2. For 1st- to 3rd-year degree material, it really starts to make dangerous errors, but it's a good tool for summarising material from textbooks.

  3. At Masters level and beyond, it spews (very convincing) bollocks most of the time.

Recognising the mistakes in (1) requires checking its output against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you are going to need to be an expert to recognise the mistakes (at one point it literally misinterpreted my own work back at me).
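
(For the curious, here's a toy sketch of what I mean by "matrix multiplication with arbitrarily chosen initial conditions". It's purely illustrative and of my own making: the layer sizes, the single hidden layer, and the seed are all made up, and a real model stacks far more of these layers and then trains the weights on internet-scale text. The core operation, though, really is just multiplying matrices.)

```python
# Toy illustration only (not a real model): a "language model" reduced to
# matrix multiplications with randomly chosen ("arbitrary") initial weights.
import numpy as np

rng = np.random.default_rng(seed=0)  # the arbitrarily chosen initial conditions

d_model, vocab = 64, 1000
W_embed = rng.normal(0, 0.02, (vocab, d_model))     # token embedding matrix
W_hidden = rng.normal(0, 0.02, (d_model, d_model))  # one hidden layer
W_out = rng.normal(0, 0.02, (d_model, vocab))       # projection back to the vocabulary

def next_token_scores(token_ids):
    """One forward pass: look up embeddings, multiply matrices, score next tokens."""
    x = W_embed[token_ids]           # (sequence length, d_model)
    x = np.maximum(x @ W_hidden, 0)  # matrix multiply + ReLU nonlinearity
    return x.mean(axis=0) @ W_out    # (vocab,) one score per possible next token

print(next_token_scores([1, 5, 42]).shape)  # (1000,) -- untrained, so the scores are noise
```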

The irony is that education in its current format already works with AI: it teaches people how to correct the errors it produces. Theming assessment around an AI is a great idea until you have to create one (the very fact that the field is moving fast means everything you teach about it is out of date by the time a student needs it for work).

However, I do agree that education as a whole needs overhauling. How to do this? Maybe fund it a bit better, so we're able to hire folks to help develop better courses. At the moment, every "great course" you've ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).

[–] zephyreks@lemmy.ca -2 points 1 year ago (2 children)

On the other hand, what if the problem is simply one that's no longer important for most people? Isn't technological advancement supposed to introduce abstractions that people can build on?

[–] average650@lemmy.world 6 points 1 year ago

The point is that students can't get to the higher-level concepts if they're just regurgitating what ChatGPT says.

[–] MBM@lemmings.world 1 points 1 year ago

If you never learn how to do the basics without ChatGPT, it's a big jump to figure out the advanced topics where ChatGPT no longer helps you.

[–] ArmokGoB@lemmy.world -2 points 1 year ago (1 children)

(1) seems to be a legitimate problem. (2) is just filtering the stronger students from the weaker ones with extra steps. (3) isn't an issue unless a professor teaching graduate classes can't tell BS from truth in their own field. If that's the case, I'd call the professor's lack of knowledge a larger issue than the student's.

[–] jarfil@lemmy.world 3 points 1 year ago (1 children)

You may not know this, but "Masters" is about uncovering knowledge nobody had before, not even the professor. That's where peer reviews and shit like LK-99 happen.

[–] Womble@lemmy.world 2 points 1 year ago

It really isn't. You don't start doing properly original research until a year or two into a PhD. At best, a masters project is going to be something like taking an existing model and applying it to a topic adjacent to the one it was designed for.