this post was submitted on 17 Sep 2023

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

[–] Soyweiser@awful.systems 1 points 1 year ago* (last edited 1 year ago) (1 children)

Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny

Iirc it still can't do that: if you create variants of a joke, it pattern-matches them to the original joke and fails.

or perform creative tasks.

Euh what? Various creative tasks have been done by AI for a while now. DeepDream is almost a decade old, and before that there were all kinds of procedural generation tools, etc. They could do the same as now: create a very limited set of creative things out of previous data. Same as AI now. ChatGPT cannot create a truly unique new sentence, for example (a thing any of us here could easily do).

[–] Akisamb@programming.dev 0 points 1 year ago (2 children)

ChatGPT cannot create a truly unique new sentence, for example (a thing any of us here could easily do).

What?

Of course it can, it's randomly generating sentences. It's probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

[–] self@awful.systems 1 points 1 year ago (1 children)

Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

you mean like a Markov chain?

[–] Akisamb@programming.dev 0 points 1 year ago* (last edited 1 year ago) (1 children)

These models are Markov chains, yes. But many things are Markov chains, and I'm not sure that describing them as Markov chains helps anyone's understanding.

The way these models generate text is iterative: they do it word by word (strictly, token by token). Every time they need to generate a word, they randomly select one from their vocabulary. The trick to generating coherent text is that different words are more likely depending on the previous words.

For example, for the sentence "that is a huge grey", the word "elephant" is more likely than "flamingo".

The temperature controls how you select that word. If it is low, you will almost always select the most likely word; increasing the temperature makes the choice more random, giving each word a more equal chance.

Seeing as these models sample randomly, there is nothing preventing them from producing unique text. After all, something like "jsbHsbe d dhebsUd" is unique but not very interesting.
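The sampling procedure described above can be sketched in a few lines. This is a minimal illustration, not any real model's code: the logits, vocabulary, and function name are made up, and a real LLM would produce logits from a neural network rather than a hand-written list.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample one token index from raw scores (logits).

    Low temperature -> almost always the most likely token;
    high temperature -> choices approach uniform randomness.
    """
    # Divide logits by the temperature before the softmax.
    scaled = [l / temperature for l in logits]
    # Softmax (subtracting the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy next-word distribution for 'that is a huge grey ...'
vocab = ["elephant", "flamingo", "banana"]
logits = [4.0, 1.0, 0.1]  # "elephant" scores highest

random.seed(0)
print(vocab[sample_with_temperature(logits, temperature=0.1)])  # prints "elephant"
```

At temperature 0.1 the gap between the scores is amplified, so "elephant" is picked essentially every time; at temperature 100 the three words come out nearly equally often.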

[–] self@awful.systems 1 points 1 year ago

But many things are Markov chains

I don’t get particularly excited about algorithms from 1972 that come included with emacs, alongside Tetris and a janky text adventure, but that is indeed the algorithm you’re rather excitedly describing

snore I guess
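For reference, the emacs-era technique being name-checked here is a word-level Markov chain text generator. A minimal sketch (the corpus and function names are invented for illustration):

```python
import random
from collections import defaultdict

def train_markov(text):
    """Build a word-level Markov chain: map each word to the
    list of words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain: each next word is drawn only from words
    seen after the current one -- the Markov property."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = train_markov(corpus)
print(generate(chain, "the"))
```

Like the LLM sampling loop described upthread, each step picks the next word at random conditioned on what came before; the difference is mainly how the conditional distribution is obtained (counted from a small corpus here, computed by a neural network there).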

[–] Soyweiser@awful.systems 1 points 1 year ago

People tried this, and it just generated the same ChatGPT tripe.