this post was submitted on 25 May 2024
775 points (97.1% liked)
Technology
My experience with an AI coding tool today:
Me: Can you optimize this method?
AI: Okay, here's an optimized method.
Me (seeing the AI completely removed a critical conditional check): Hey, you completely removed the check on variable xyz.
AI: Oops, you're right. Here you go, I fixed it.
It did this 3 times on 3 different optimization requests. It was 0 for 3.
Although there were some good suggestions once you got past the blatant first error.
Don't mean to victim blame, but I don't understand why you would use ChatGPT for hard problems like optimization. And I say this as a heavy ChatGPT/Copilot user.
From my observation, LLMs engage with code at the linguistic/syntactic level, not with its actual technical effects.
Because I had some methods I thought were too complex and I wanted to see what it'd come up with?
In one case, part of the method checked whether a value fell within one of 4 ranges, and the output just silently dropped 2 of the ranges.
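For the curious, the method looked roughly like this (a simplified sketch with invented names and ranges, not the actual code):

```python
# Hypothetical sketch of the kind of check described above.
# Names and boundaries are invented for illustration only.
def classify(value: int) -> str:
    """Return a label based on which of four ranges the value falls in."""
    if 0 <= value < 10:
        return "low"
    elif 10 <= value < 50:
        return "medium"
    elif 50 <= value < 100:
        return "high"
    elif 100 <= value < 500:
        return "very high"
    return "out of range"

# An "optimized" rewrite that silently drops two of the branches (say,
# "high" and "very high") would misclassify everything from 50 to 499
# as "out of range" while still looking perfectly plausible at a glance.
```

The scary part is that the broken version still compiles and passes a quick read-through, which is exactly why a dropped branch is easy to miss.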
I don't think that's asking too much of it.
Apparently it was :D I mean, the confines of the tool are very limited, despite what the Devin.ai cult would like to believe.
That's been my experience with GPT: every answer is a hallucination to some extent, so nearly every answer I receive is inaccurate in some way. However, the same applies if I ask a human colleague unfamiliar with a particular system to help me debug something: their answers will be quite inaccurate too. But I'm not expecting them to be accurate, just to have helpful suggestions of things to try.
I still prefer the human colleague in most situations, but if that's not possible or convenient GPT sometimes at least gets me on the right path.
And yeah, it did provide some useful info, so it's not like it was all wrong.
I'm more just surprised that it was wrong in that way.
I'm curious about what percentage of programmers would give error free answers to these questions in seconds.
Probably fewer than the number of developers whose code runs on the first try.
My favorite is when I ask for something and it gets stuck in a loop, pasting the same comment over and over.