this post was submitted on 16 Dec 2024
590 points (98.7% liked)


One major problem with the current generation of "AI" seems to be its inability to use relevant information it already has to assess the accuracy of the answers it provides.

Here's a common scenario I've run into: I'm trying to create a complex DAX measure in Excel. I give ChatGPT the information about the tables I'm working with and the value I expect in the resulting Pivot Table column.

ChatGPT gives me a response in the form of a measure I can use, except it uses one DAX function in a way that will not work. I point out the error and ChatGPT replies, "Oh, sorry. Yeah, that won't work because [insert correct reason here]."
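The comment doesn't name the specific function, but a common instance of this failure mode looks something like the following (the table and column names here are hypothetical, purely for illustration):

```dax
-- The kind of measure a chatbot often produces: a boolean filter argument
-- to CALCULATE cannot contain an aggregation like MAX, so this errors out.
Latest Sales =
CALCULATE (
    SUM ( Sales[Amount] ),
    Sales[Date] = MAX ( Sales[Date] )    -- not allowed in a boolean filter
)

-- The fix the model can explain after the fact: wrap the condition in FILTER
-- so MAX is evaluated as a proper table-filter expression.
Latest Sales =
CALCULATE (
    SUM ( Sales[Amount] ),
    FILTER ( Sales, Sales[Date] = MAX ( Sales[Date] ) )
)
```

The point stands: the rule that made the first version invalid is exactly the rule the model recites when correcting itself.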

I'll try adjusting my prompt a few more times before finally giving up and just writing the measure myself. The model cannot reason that an answer is incorrect, even though it has all the information needed to know it's incorrect and can even tell you why once prompted. It's a glorified text generator and is definitely not "intelligent".

It works fine for generating boilerplate code, but that problem was already solved years ago with things like code templates.