this post was submitted on 02 Jul 2024
8 points (100.0% liked)
Aotearoa / New Zealand
Work paid for me to go to a "getting started with AI for businesses" seminar run by [redacted reputable organisation name] and holy crap the FOMO.
The more I see & hear, the more I think it's all grift.
I.e. the crypto bros left their coins for NFTs, and now that those have tanked they're finding something else to burn the planet down with in order to scam suckers.
I don't think it's all grift - there are absolutely places where LLMs are the best tech out there, but they're probably not going to take everyone's jobs any time soon (at least not on merit - I'm sure there are plenty of places that'd accept a 50% drop in quality for a 90% drop in price).
I've seen a pretty compelling case study of a company using an LLM as a "tier zero" support tech. Instead of getting a tier 1 tech to classify a case, decide whether they had the tools to address the issue or it needed to go to tier 2, work out whether it was an instance of a known issue, etc. before actually starting on the problem, they gave the LLM some examples and got it to do the triage so the humans could do the more complicated stuff. It did about as well as a human, for a fraction of the price.
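For anyone curious what "give the LLM some examples and get it to do the triage" looks like in practice, here's a minimal sketch. The tier labels and example tickets are invented for illustration, and the actual LLM call is left out - you'd send the assembled prompt to whatever model/API the company uses and take the one-word completion as the routing decision.

```python
# Hypothetical few-shot triage prompt for an LLM "tier zero" classifier.
# Labels and example tickets below are made up for illustration only.

TRIAGE_EXAMPLES = [
    ("Outlook says my mailbox is full", "tier1-known-issue"),
    ("Our VPN concentrator is dropping all connections", "tier2"),
    ("How do I reset my password?", "tier1"),
]

def build_triage_prompt(ticket: str) -> str:
    """Assemble a few-shot prompt asking the model to pick a routing label."""
    lines = ["Classify each support ticket as tier1, tier1-known-issue, or tier2."]
    for text, label in TRIAGE_EXAMPLES:
        lines.append(f"Ticket: {text}\nLabel: {label}")
    # The new ticket goes last, with the label left blank for the model to fill in.
    lines.append(f"Ticket: {ticket}\nLabel:")
    return "\n\n".join(lines)

prompt = build_triage_prompt("Printer on level 3 jams on every duplex job")
# `prompt` would then go to the LLM; its completion (e.g. "tier2")
# stands in for the classification a tier 1 tech would have made.
```

The appeal is that the triage step is narrow and checkable - a small labelled set of past tickets, one short answer out - which is exactly the shape of task where an LLM has the best chance of matching a human.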
I'd have to see that in action before I pass judgement, but given LLMs' predilection for hallucination and the vagaries of how humans report tech faults, I'd be surprised if it was significantly more accurate or effective than a human. After all, if it's working out whether there's a known issue, then essentially it's not much beyond a script at that point - and in that case, do you want to trade the unpredictability of what an LLM might recommend against something (human or otherwise) that will follow the script?
Even if an LLM were an effective level 0 helpdesk, it would still need to overcome the user's cultural expectation (in many places) that they can pick up the phone and speak to somebody about their problem. Having done that job a long, long time ago, diagnosing tech problems for people who don't understand tech can be a fairly complex process. You have to work through their lack of understanding and lack of technical language. You sometimes have to pick up on cues in their hesitations, frustrated tone of voice, etc.
I'm sure an LLM could synthesise that experience 80% of the time, but depending on the tech you're dealing with, you could be missing some pretty major stuff in the other 20% - especially if the LLM gives bad instructions, or closes the case without raising it, etc. So you then need to pay someone to monitor the LLM and watch what it's doing - at which point you've hired your level 1 tech again anyway.