this post was submitted on 11 Feb 2025
530 points (98.7% liked)

Technology

[–] db0@lemmy.dbzer0.com 90 points 1 week ago (13 children)

As always, never rely on LLMs for anything factual. They're only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs)

[–] kboy101222@sh.itjust.works 25 points 1 week ago (1 children)

I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't want that thing included

Sorry for being vague, I just didn't want to post my home town on here

[–] homesweethomeMrL@lemmy.world 11 points 1 week ago

You can say Space Needle. We get it.

[–] 1rre@discuss.tchncs.de 13 points 1 week ago (1 children)

The issue for RPGs is that LLMs have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later

Although, similar to how DeepSeek uses two stages ("how would you solve this problem", then "solve this problem following this train of thought"), you could feed in recent conversations plus a private/unseen "notebook" that gets modified or appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short term, although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things
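Rough sketch of what I mean, with the "is this worth remembering?" step stubbed out (a real system would ask the model that; here it just keeps lines flagged with an exclamation mark, so only the bookkeeping is real):

```python
from collections import deque

class NotebookMemory:
    def __init__(self, window=4):
        self.recent = deque(maxlen=window)  # visible short context window
        self.notebook = []                  # private long-term notes

    def summarize(self, line):
        # Stand-in for a hypothetical LLM call deciding importance;
        # here, anything with "!" counts as worth remembering.
        return line if "!" in line else None

    def observe(self, line):
        self.recent.append(line)
        note = self.summarize(line)
        if note:
            self.notebook.append(note)

    def context(self):
        # What the model would actually be prompted with:
        # the full notebook plus only the most recent turns.
        return list(self.notebook) + list(self.recent)

mem = NotebookMemory(window=2)
for turn in ["hello", "the key is under the statue!", "nice weather", "let's go"]:
    mem.observe(turn)

print(mem.context())
# The old "key" detail survives even though it fell out of the window.
```

The point is just that old-but-important details outlive the rolling window, which is the part current chat UIs mostly don't do.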

[–] db0@lemmy.dbzer0.com 13 points 1 week ago

The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.

RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo it, because it's all subjective.

[–] kat@orbi.camp 5 points 1 week ago

Or at least as an assistant in a field you're an expert in. Love using it for boilerplate at work (tech).

[–] mentalNothing@lemmy.world 65 points 1 week ago

Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

[–] homesweethomeMrL@lemmy.world 45 points 1 week ago (19 children)

Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

Introduced factual errors

Yeah, that's . . . that's bad. As in, not good. As in: it will never be good. With a lot of work and grinding, it might be "okay enough" for some tasks some day. That'll be another 200 billion, please.

[–] chud37@lemm.ee 9 points 1 week ago (1 children)

That's the core problem though, isn't it? They are just predictive text machines that don't understand what they are saying. Yet we are treating them as if they were some amazing solution to all our problems

[–] homesweethomeMrL@lemmy.world 3 points 1 week ago

Well, "we" aren't, but there's a hype machine in operation bigger than anything in history, because a few tech bros think they're going to rule the world.

[–] devfuuu@lemmy.world 7 points 1 week ago* (last edited 1 week ago)

I'll be here begging for a miserable 1 million to invest in some freaking trains and bicycle paths. Thanks.

[–] brucethemoose@lemmy.world 31 points 1 week ago* (last edited 1 week ago) (19 children)

What temperature and sampling settings? Which models?

I've noticed that the AI giants seem to be encouraging "AI ignorance," as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with "affordable" flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.

[–] 1rre@discuss.tchncs.de 11 points 1 week ago (2 children)

I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask and just latches on to keywords. It's almost like they've played with their tokenisation, or trained it exclusively to provide tech support where it links you to an irrelevant article or something

[–] paraphrand@lemmy.world 8 points 1 week ago* (last edited 1 week ago) (3 children)

I don’t think giving the temperature knob to end users is the answer.

Turning it down to minimum for max correctness and low creativity won't work in an intuitive way.

Sure, turning it up from the balanced middle value will make it more "creative" and unexpected, and this is useful for idea generation, etc. But a knob that goes from "good" to "sort of off the rails, but in a good way" isn't a great user experience for most people.

Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that's the goal. Once you give them a knob to adjust the "intelligence level," you'll have more pushback on these things not meeting their goals. "I clearly had it in factual/correct/intelligent mode. Not creativity mode. I don't understand why it left out these facts and invented a back story to this small thing mentioned…"

Not everyone is an engineer. Temp is an obtuse thing.
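For anyone curious what the knob actually does under the hood: roughly, the model's raw token scores get divided by the temperature before the softmax, so low T sharpens the distribution and high T flattens it. A toy sketch with made-up scores, not any real model's numbers:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by T before the softmax: T < 1 sharpens the
    # distribution (more deterministic), T > 1 flattens it (more random).
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # made-up scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.5)   # sharper
warm = softmax_with_temperature(logits, 2.0)   # flatter

# The top token's probability grows as temperature drops.
assert cold[0] > warm[0]
```

Which is exactly why the knob reads as "predictable vs. weird" rather than "correct vs. incorrect."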

But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.

I loathe how these things are advertised by Apple, Google and Microsoft.

[–] brucethemoose@lemmy.world 5 points 1 week ago* (last edited 1 week ago)
  • Temperature isn't even "creativity" per se; it's more a band-aid to patch looping and dryness in long responses.

  • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally, structured output, too. Unfortunately, corporate APIs usually don't offer this.

  • It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite, massively overtuning on their own output which "inbreeds" the model.

  • And yes, domain specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but... most UIs don't even do this for some reason?

What I am getting at is that this is not a problem companies seem interested in solving. They want to treat users as idiots without the attention span to even categorize their question.
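For reference, the MinP idea is simple enough to sketch. This is a toy version of the concept (keep only tokens whose probability clears a fraction of the top token's, then renormalize), not any particular library's implementation:

```python
def min_p_filter(probs, min_p=0.1):
    # MinP: keep tokens whose probability is at least
    # min_p * (probability of the most likely token), then renormalize.
    # The cutoff adapts to how confident the model is: a peaked
    # distribution prunes aggressively, a flat one keeps more options.
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.6, 0.25, 0.1, 0.04, 0.01]   # toy next-token distribution
filtered = min_p_filter(probs, min_p=0.2)

# threshold = 0.2 * 0.6 = 0.12, so only the first two tokens survive.
```

That adaptivity is why it pairs well with lowish temperature: the tail junk is gone before you ever sample.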

[–] jrs100000@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn't even note which versions of the other models were used.

[–] Turbonics@lemmy.sdf.org 30 points 1 week ago (1 children)

BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

[–] Krelis_@lemmy.world 15 points 1 week ago* (last edited 1 week ago) (1 children)

Some examples of inaccuracies found by the BBC included:

Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"

[–] Turbonics@lemmy.sdf.org 1 points 5 days ago

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

I didn't even read that far, but wow, BBC really went there openly.

[–] Petter1@lemm.ee 25 points 1 week ago

ShockedPikachu.svg

[–] Etterra@discuss.online 24 points 1 week ago

You don't say.

[–] Prandom_returns@lemm.ee 21 points 1 week ago (1 children)

But every techbro on the planet told me it's exactly what LLMs are good at. What the hell!? /s

https://lemm.ee/comment/18029491

[–] heavydust@sh.itjust.works 13 points 1 week ago (1 children)

Not only techbros though. Most of my friends are not into computers, but they all think AI is magical and will change the whole world for the better. I always ask, "How could a black box that throws up random crap and runs on the computers of big companies out of the country change anything?" They don't know what to say, but they still believe something will happen and a program can magically become sentient. Sometimes they can be fucking dumb but I still love them.

[–] shrugs@lemmy.world 3 points 1 week ago (1 children)

The more you know about what you are doing, the less impressed you are by AI. Calling people who trust AI idiots is not a good start to a conversation, though.

[–] Phoenicianpirate@lemm.ee 14 points 1 week ago (6 children)

I learned that AI chatbots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.

[–] Redex68@lemmy.world 7 points 1 week ago (8 children)

This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.

[–] tal@lemmy.today 12 points 1 week ago* (last edited 1 week ago) (2 children)

They are, however, able to inaccurately summarize it in GLaDOS's voice, which is a strong point in their favor.

[–] JackGreenEarth@lemm.ee 3 points 1 week ago (4 children)

Surely you'd need TTS for that one, too? Which one do you use, is it open weights?

[–] Teknikal@eviltoast.org 12 points 1 week ago (3 children)

I just tried it on DeepSeek; it did it fine and gave the source for everything it mentioned as well.

[–] datalowe@lemmy.world 12 points 1 week ago

Do you mean you rigorously went through a hundred articles, asking DeepSeek to summarise them, and then got relevant experts in the subjects of the articles to rate the quality of the answers? Could you tell us what percentage of the summaries were found to introduce errors, then? Literally 0?

Or do you mean that you tried having DeepSeek summarise a couple of articles, didn't see anything obviously problematic, and figured it is doing fine? That would be replacing rigorous research and journalism by humans with a couple of quick AI prompts, which is the core of the issue the article is getting at. If so, please reconsider how you evaluate (or trust others' evaluations of) information tools which might help, or help destroy, democracy.

[–] chemical_cutthroat@lemmy.world 11 points 1 week ago

Which is hilarious, because most of the shit out there today seems to be written by them.

[–] TroublesomeTalker@feddit.uk 10 points 1 week ago (3 children)

But the BBC is increasingly unable to accurately report the news, so this finding is no real surprise.

[–] buddascrayon@lemmy.world 6 points 1 week ago (2 children)

That's why I avoid them like the plague. I've even changed almost every platform I'm using to get away from the AI-pocalypse.

[–] echodot@feddit.uk 10 points 1 week ago* (last edited 1 week ago) (2 children)

I can't stand the corporate double think.

Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it's about, it's still apparently going to replace humans. How do they come to that conclusion?

The world won't be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.

Cue global economic collapse.

[–] untorquer@lemmy.world 5 points 1 week ago

Fuckin news!

[–] underwire212@lemm.ee 4 points 1 week ago

News station finds that AI is unable to perform the job of a news station

🤔

[–] NutWrench@lemmy.world 3 points 1 week ago

But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **
