this post was submitted on 18 Sep 2024
112 points (98.3% liked)

[–] Streetlights@lemmy.world 11 points 1 month ago (2 children)

20 years ago there were complaints that GPs were using Google; now it's normal. Can't help but feel the same will happen here.

[–] TheGrandNagus@lemmy.world 4 points 1 month ago

You're right. Within 10 seconds I just found an article from 2006 saying just that. Earlier ones likely exist.

[–] Swedneck@discuss.tchncs.de 2 points 1 month ago (1 children)

To be fair, back then Google just showed you what you searched for; I'm not as happy about people googling stuff these days. With AI we already know that it tends to make shit up, and it might very well only get worse as models start being trained on their own output.

[–] echodot@feddit.uk -1 points 1 month ago

Actually, hallucinations have gone down as AI training has improved, mostly through techniques like prompting models to provide evidence for their answers: when prompted to cite evidence, they tend not to hallucinate in the first place.

The problem really stems from how the older AIs were originally trained. They were trained on data where a question was asked and a response was given; nowhere in the data set was there a question whose answer was "I'm sorry, I do not know", so the AI was unintentionally taught that it is never acceptable to leave a question unanswered. More modern AIs have been trained in a better way and shown that it is acceptable not to answer a question. They also now have the ability to perform internet searches, so, like a human, they can look up data when they recognise it isn't in their training set.
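To make that concrete, here's a toy sketch (not a real training pipeline; the example questions and the `teaches_refusal` helper are made up for illustration) of the difference in training data the comment describes: older instruction-tuning sets paired every question with a confident answer, while newer ones also include refusals and tool-use examples.

```python
# Hypothetical instruction-tuning examples, purely illustrative.
# Older-style data: every question gets a confident answer, even when
# the honest answer would be "I don't know" -- so guessing is rewarded.
old_style_data = [
    {"prompt": "What is the capital of France?", "response": "Paris."},
    {"prompt": "Who won the 2030 World Cup?",   # unknowable at training time
     "response": "Brazil."},                    # forced guess = hallucination
]

# Newer-style data: refusals and tool calls appear as valid responses,
# so the model learns that declining or searching is acceptable.
new_style_data = [
    {"prompt": "What is the capital of France?", "response": "Paris."},
    {"prompt": "Who won the 2030 World Cup?",
     "response": "I don't know; that's outside my training data."},
    {"prompt": "What's the weather in London right now?",
     "response": "<tool:web_search query='London weather now'>"},
]

def teaches_refusal(dataset):
    """Return True if the dataset contains any example of declining to answer."""
    return any("I don't know" in ex["response"] for ex in dataset)

print(teaches_refusal(old_style_data))  # False: refusal never demonstrated
print(teaches_refusal(new_style_data))  # True: refusal is a learnable response
```

The point of the sketch is only the data-composition argument: a model trained solely on the first set has no pattern to imitate for "I don't know", whereas the second set makes refusal and search part of the learned behaviour.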

That being said, Google's AI is an idiot.