I have spent the last half-hour in the angry dome
"Raw, intellectual horsepower" means fucking an intellectual horse without a condom.
Oh, wait, that's rawdogging intellectual horsepower, my mistake.
So, the Wikipedia article about "prompt engineering" is pretty terrible. First source: OpenAI. Second: a blog. Third: OpenAI. Fourth: OpenAI's blog. Then arXiv, arXiv, arXiv... 43 times. Hop on over to the Talk page, and we find this gem:
It is sometimes necessary to make assumptions to write an article (see WP:MNA).
Spoiler alert: that link doesn't justify anything. It basically advises against going off on tangents: there's no need to rehash, on every damn biology page, the fact that evolution is a fact. It does not say that Wikipedia should have an article on some creationist fantasy, like baraminology or flood geology, sourced entirely from creationist screeds that all cite each other.
Underlying original post: a Twitter bluecheck says,
Sometimes in the process of writing a good enough prompt for ChatGPT, I end up solving my own problem, without even needing to submit it.
Matt Novak on Bluesky screenshots this and comments,
AI folks have now discovered "thinking"
If you can't get through two short paragraphs without equating Stalinism and "social justice", you may be a cockwomble.
Welp, time to start the thread with fresh Awful for everyone to regret:
r/phenotypes
Here's a start:
Given the enormous environmental cost of Large Generative AI Models and their foundation upon exploited labor, justifying their use in telecommunications is an uphill task. Since their output is, in the technical sense of the term, bullshit, climbing that hill has no merit.