this post was submitted on 13 Jun 2023
9 points (100.0% liked)


TL;DR: LLMs are just mimicking natural language and conversation. Fact-checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, such as a faked moon landing. Google Bard will even state arithmetic falsehoods like 5*6 != 30.

Babalas 1 points 1 year ago (last edited 1 year ago)

Seems more like different people expect it to behave differently. I mean, the argument that it isn't intelligent because it can be made to believe conspiracy theories would apply equally to humans, would it not?

I'm having a blast using it to write descriptions for characters and locations for my Savage Worlds game. It can even roll up an NPC for you. It's fantastic for helping to fill in details; in other words, I embrace its hallucinations.

For work (I'm a programmer) it also acts like a contextually aware search engine that I can correct. It's like pair programming with a genius grad student. Yesterday I had it help me write a vim keymap to open the documentation URL for a Qt class, which is a pretty obscure task; a minimal sketch of that kind of keymap is below.
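Not the exact map it gave me, but a minimal sketch of the idea, assuming Qt 6's doc.qt.io URL scheme (pages are the lowercased class name) and Linux's xdg-open launcher:

```vim
" Open the Qt 6 docs for the class under the cursor, e.g. with the
" cursor on QString this opens https://doc.qt.io/qt-6/qstring.html.
" Assumes a Linux desktop with xdg-open (use `open` on macOS).
nnoremap <leader>qd :execute '!xdg-open https://doc.qt.io/qt-6/'
      \ . tolower(expand('<cword>')) . '.html'<CR>
```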

It is set up to accept your input as fact, so if you give it the premise that 5*6 != 30, it'll use that as a basis.

For a third-gen baby AI, I'm not complaining.