Financial Times headline, Thursday 11 April: “OpenAI and Meta ready new AI models capable of ‘reasoning’”. Huge if true! [FT, archive]
This is an awesome story that the FT somehow ran without an editor reading it from beginning to end. You can watch as the splashy headline claim slowly decays to nothing:
other than the exploration of how and why this shit's being treated like magic for financial and, ah, religious gain, I think this is my favorite part of the article. it's like an immunization against AI puff pieces: here's some easy stuff the reader can look for next time the AI companies or any other woo peddler get their bullshit published as supposed news.
Voice stress analysis is complete and utter pseudoscience. It doesn’t exist. It doesn’t work. Fabulous results are regularly claimed and never reproduced. Anyone trying to sell you voice stress analysis is a crook and a con man.
and speaking of woo peddlers! the worst thing about generative AI is that it can legitimize any woo, no matter how old or discredited it is. and that horseshit gets top billing alongside the words of actual scientists defending the tech and the frankly godawful results it gives when applied to any field where outputting large volumes of bullshit isn’t the point.