this post was submitted on 01 Aug 2024
2232 points (99.0% liked)

Technology

(page 5) 50 comments
[–] ChaoticEntropy@feddit.uk 5 points 3 months ago (1 children)

Who knew that new technologies that are great for businesses' bottom lines wouldn't also be great for consumer satisfaction.

Say it ain't so.

[–] rottingleaf@lemmy.world 3 points 3 months ago (3 children)

Initially great for bottom lines. Then consumer dissatisfaction finds a way.

Unless you have a legally reinforced monopoly.

[–] Verserk@lemmy.dbzer0.com 4 points 3 months ago

More like people know when it's just being used as a buzzword and are smart enough to avoid it when that's (often) the case.

[–] x00z@lemmy.world 4 points 3 months ago

Well, maybe if they weren't using "AI" as a hype word and just called it adaptive or GPT.

[–] cellardoor@lemmy.world 4 points 3 months ago

No shit, Sherlock.

[–] jubilationtcornpone@sh.itjust.works 3 points 3 months ago (1 children)

I think there is potential for using AI as a knowledge base. If it saves me hours of having to scour the internet for answers on how to do certain things, I could see a lot of value in that.

The problem is that generative AI can't distinguish fact from fiction, even though it has enough information to do so. For instance, I'll ask ChatGPT how to do something and it will very confidently spit out a wrong answer 9/10 times. If I tell it that the approach didn't work, it will respond with "Sorry about that. You can't do [x] with [y] because of [z]." The reasons are often correct, but ChatGPT isn't "intelligent" enough to ascertain from the data it already has that an approach will fail before suggesting it.

It will then proceed to suggest a variation of the same failed approach several more times. Every once in a while it will eventually pivot towards a workable suggestion.

So basically, this generation of AI is just Cliff Clavin from Cheers: able to string together coherent sentences of mostly bullshit.
