this post was submitted on 02 Apr 2024
60 points (94.1% liked)
BecomeMe
814 readers
1 users here now
Social Experiment. Become Me. What I see, you see.
founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
This is the best summary I could come up with:
Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate.
Source: “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”
If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage.
Isn’t it possible that human culture contains within it cognitive micronutrients — things like cohesive sentences, narrations and character continuity — that developing brains need?
companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.
Similarly, right now teachers across the nation have created home-brewed output-side detection methods, like adding hidden requests for patterns of word use to essay prompts that appear only when copied and pasted.
The original article contains 1,636 words, the summary contains 188 words. Saved 89%. I'm a bot and I'm open source!
Ironic
Hey, at least nobody has been replaced with the bots that convert YouTube URLs to piped.video or summarize articles. Ironic, but this specific bot is helpful and working right, so... Good bot. Best friend.
GPT, however, is purely made to bust unions and I hope to see Big Tech buried alive for that.