Did someone not know this like, pretty much from day one?
Not the idiot executives that blew all their budget on AI and made up for it with mass layoffs - the people interested in it. Was that not clear that there was no “reasoning” going on?
Seriously, I've seen 100x more headlines like this than people claiming LLMs can reason. Either they don't understand, or think we don't understand what "artificial" means.
Well, I've seen a couple of stock responses to the claim that LLMs are not reasoning, so I think this research is useful as a rebuttal to those. Although I think "fuck off, promptfondler" is pretty good too.
There are a lot of people (especially here, but not only here) who had the insight to see this from the start, but there have also been plenty of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word-vomit machines are actually thinking.

So while this may only confirm a known doubt, rigorous scientific testing (and disproving) of those claims is nonetheless a good thing.
We suspect this research is part of why Apple pulled out of the recent OpenAI funding round at the last minute.
Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing. They certainly argue like it.
🔥