this post was submitted on 26 Aug 2023
397 points (85.6% liked)

Technology

ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

[–] zeppo@lemmy.world 223 points 1 year ago (8 children)

I’m still confused that people don’t realize this. It’s not an oracle. It’s a program that generates sentences word by word based on statistical analysis, with no concept of fact checking. It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.
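
The "generates sentences word by word based on statistical analysis" point can be sketched with a toy generator. Everything below is a hypothetical illustration (a hand-written bigram table stands in for the neural network; real models use learned weights over tokens), but the generation loop is the same idea:

```python
import random

# Toy next-word table. A real LLM replaces this hand-written table with
# learned weights, but the loop is the same idea: pick the next word from
# a probability distribution, append it, repeat. Nothing in the loop
# checks whether the resulting sentence is *true*.
NEXT = {
    "the": [("treatment", 0.6), ("patient", 0.4)],
    "treatment": [("plan", 0.9), ("works", 0.1)],
    "plan": [("<end>", 1.0)],
    "patient": [("<end>", 1.0)],
    "works": [("<end>", 1.0)],
}

def generate(start: str, rng: random.Random) -> str:
    words = [start]
    while words[-1] != "<end>":
        candidates, weights = zip(*NEXT[words[-1]])
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words[:-1])

print(generate("the", random.Random(0)))
```

The loop never consults a fact base; fluent-looking output is the only objective, which is exactly why making stuff up is baked in.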

[–] Zeth0s@lemmy.world 32 points 1 year ago (1 children)

Publish or perish, that's why

[–] net00@lemm.ee 21 points 1 year ago

Yeah, this stuff was always marketed as automating the simple, repetitive things we do daily. It's mostly the media, I guess, who misled everyone into thinking this was AI like Skynet. It's still useful, just not as an all-knowing AI god.

[–] inspxtr@lemmy.world 16 points 1 year ago (2 children)

While I agree it has become common knowledge that they're unreliable, this adds to the myriad of examples that can push corporations, big organizations, and governments to abstain from using them, or at least to be informed about these cases and their nuances when deciding how to integrate them.

Why? Partly because many of these organizations are racing to adopt them, whether to cut costs, to chase the hype, or because they're too slow to regulate them, and because there are (or could be) very good uses that justify adoption in the first place.

I don't think a blanket rule of "never trust them completely" is good enough. We need multiple examples of the good, the bad, and the questionable across different domains to inform the people in charge, the people using them, and the people who might be affected by their use.

Kind of like the recent event at DEF CON trying to exploit LLMs: it's not enough that we have some intuition about their harms; the people at the event aim to demonstrate the extremes of such harms, AFAIK. These efforts can help developers and researchers mitigate them, as well as show concretely to anyone trying to adopt them how harmful they could be.

Regulators also need these examples in specific domains so they can craft informed policies, sometimes by building on or modifying policies that already exist in those domains.

[–] zeppo@lemmy.world 10 points 1 year ago

This is true and well-stated. Mainly what I wish people would understand is that there are appropriate current uses, like "rewrite my marketing email", but generating information that could cause great harm if inaccurate is an inappropriate use. It all depends on the specific model, though: a system trained extensively on medical information would be more accurate, but the output would still need expert human review before any decision was made. Mainly I wish the media had been more responsible and accurate in portraying these systems to the public.

[–] jvisick@programming.dev 7 points 1 year ago

I don’t think it’s good enough to have a blanket conception to not trust them completely.

On the other hand, I actually think we should, as a rule, not trust the output of an LLM.

They're great for generative purposes, but I don't think there's a single case where the accuracy of their responses should be outright trusted. Any information you get from an AI model should be validated.

There are many cases where a simple once-over from a human is good enough, but any time it tells you something you didn't already know, you should not trust it; if you want to rely on that information, you should verify that it's accurate.
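
One narrow slice of that validation step can even be automated. As a hypothetical sketch (the `check_grounded` helper and the answer format are invented for illustration): if you ask the model to return a supporting quote alongside each claim, you can reject any answer whose quote does not appear verbatim in the source text.

```python
def check_grounded(answer: dict, source: str) -> bool:
    """Accept an answer only if its supporting quote occurs verbatim in
    the source document. This catches fabricated citations, not subtle
    misreadings -- a human still has to judge the claim itself."""
    quote = answer.get("quote", "")
    return bool(quote) and quote in source

source = ("ChatGPT may produce inaccurate information about people, "
          "places, or facts.")

grounded = {"claim": "The UI warns about inaccuracy",
            "quote": "may produce inaccurate information"}
fabricated = {"claim": "The UI promises accuracy",
              "quote": "guaranteed to be accurate"}

print(check_grounded(grounded, source))    # True
print(check_grounded(fabricated, source))  # False
```

It's a guardrail, not a fact-checker: it only tells you the model didn't invent its evidence, not that its conclusion is right.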

[–] iforgotmyinstance@lemmy.world 11 points 1 year ago (4 children)

I know university professors struggling with this concept. They are so convinced using an LLM is plagiarism.

It can lead to plagiarism if you use it poorly, which is why you control the information you feed it. Then proofread and edit.

[–] zeppo@lemmy.world 12 points 1 year ago

Another related confusion in academia recently is the "AI detector". Those could easily be defeated with minor rewrites, if they were even accurate in the first place. My favorite misconception is the story of a professor who told students "I asked ChatGPT if it wrote this, and it said yes", which is just really not how it works.

[–] fubo@lemmy.world 8 points 1 year ago (5 children)

It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.

Sure, the world should just trust preconceptions instead of doing science to check our beliefs. That worked great for tens of thousands of years of prehistory.

[–] zeppo@lemmy.world 28 points 1 year ago* (last edited 1 year ago) (1 children)

It's not merely a preconception. It's a rather obvious and well-known limitation of these systems. What I am decrying is that some people, from apparent ignorance, think things like "ChatGPT can give a reliable cancer treatment plan!" or "here, I'll have it write a legal brief and not even check it for accuracy". But sure, I agree with you, minus the needless sarcasm. It's useful to prove or disprove even absurd hypotheses. And clearly people need to be told explicitly that ChatGPT is not always factual, so hopefully this helps.

[–] adeoxymus@lemmy.world 8 points 1 year ago (1 children)

I'd say that a measurement always trumps argument. At least then you know how accurate they are; a statement like this one cannot follow from reason alone:

The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

[–] PetDinosaurs@lemmy.world 9 points 1 year ago

Why the hell are people down voting you?

This is absolutely correct. We need to do the science. Always. Doesn't matter what the theory says. Doesn't matter that our guess is probably correct.

Plus, all these studies tell us much more than just the conclusion.

[–] Takumidesh@lemmy.world 7 points 1 year ago

It's not even a preconception; it's willful ignorance. The website itself tells you multiple times that it is not accurate.

The bottom of every chat has this text: "Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT August 3 Version"

And when you first use it, a modal pops up explaining the same thing.

[–] yiliu@informis.land 7 points 1 year ago

"After an extensive three-year study, I have discovered that touching a hot element with one's bare hand does, in fact, hurt."

"That seems like it was unnecessary..."

"Do U even science bro?!"

Not everything automatically deserves a study. Were there any non-rando people out there claiming that ChatGPT could totally generate legit cancer treatment plans that people could then follow?

[–] imperator3733@lemmy.world 64 points 1 year ago (11 children)

No duh - why would it have any ability to do that sort of task?

[–] xkforce@lemmy.world 33 points 1 year ago* (last edited 1 year ago)

Part of the reason for studies like this is to debunk people's expectations of AI's capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.

[–] Uncaged_Jay@lemmy.world 50 points 1 year ago (2 children)

"Hey, program that is basically just regurgitating information, how do we do this incredibly complex thing that even we don't understand yet?"

"Here ya go."

"Wow, this is wrong."

"No shit."

[–] JackbyDev@programming.dev 20 points 1 year ago* (last edited 1 year ago)

"Be aware that ChatGPT may produce wrong or inaccurate results, what is your question?"

How beat cancer

wrong, inaccurate information

😱

[–] sentient_loom@sh.itjust.works 43 points 1 year ago (2 children)

Why the fuck would anybody think a chat bot could create a cancer treatment plan?

[–] 5BC2E7@lemmy.world 10 points 1 year ago (2 children)

Because it's been hyped. They announced it could pass the medical licensing exam with good scores. The belief that it can replace a doctor has already been put forward.

[–] solstice@lemmy.world 9 points 1 year ago* (last edited 1 year ago) (1 children)

On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage

Better tech, same stupid end users lmao

[–] elboyoloco@lemmy.world 42 points 1 year ago (1 children)

Scientist: Asks the magic conch a question about cancer.

Conch: "Try shoving bees up your ass."

Scientist: 😡

[–] CombatWombat1212@lemmy.world 40 points 1 year ago (1 children)

When did they ever claim that it was able to?

[–] Kodemystic@lemmy.kodemystic.dev 37 points 1 year ago (3 children)

Who tf is asking ChatGPT for cancer treatments anyway?

[–] solstice@lemmy.world 11 points 1 year ago

It's hilarious to me that people need to be told, word for word, that ChatGPT is NOT literally the cure for cancer.

[–] Pyr_Pressure@lemmy.ca 35 points 1 year ago (4 children)

ChatGPT is a language model / chatbot, not a doctor. Has anyone claimed that it's a doctor?

[–] obinice@lemmy.world 33 points 1 year ago (1 children)

Well, it's a good thing absolutely no clinician is using it to figure out how to treat their patient's cancer.... then?

I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it's outside of the scope of the application.

[–] clutch@lemmy.ml 11 points 1 year ago (2 children)

The fear is that hospital administrators equipped with their MBA degrees will think about using it to replace expensive, experienced physicians and diagnosticians

[–] whoisearth@lemmy.ca 11 points 1 year ago

They've been trying this shit for decades already with established AI like IBM's Watson. This isn't a new pattern. Those in charge need to keep driving costs down and profits up.

Race to the bottom.

[–] Rexios@lemm.ee 28 points 1 year ago

Okay, and? GPT lies; how is this news every other day? Lazy-ass journalists.

[–] Sanctus@lemmy.world 19 points 1 year ago (10 children)

These studies are for the people out there who think ChatGPT thinks. It's a really good email assistant, and it can even get basic programming questions right if you are detailed with your prompt. Now everyone stop trying to make this thing like Finn's mom in Adventure Time and just use it to help you write a long email in a few seconds. Jfc.

[–] TenderfootGungi@lemmy.world 19 points 1 year ago

The computer science classroom in my high school had a poster stating: "Garbage in, garbage out."

[–] LazyBane@lemmy.world 17 points 1 year ago (1 children)

People really need to get it into their heads that AI can "hallucinate" random information, and that any deployment of an AI needs a qualified human overseeing it.

[–] NigelFrobisher@aussie.zone 16 points 1 year ago (5 children)

People really need to understand what LLMs are, and also what they are not. None of the messianic hype or even use of the term “AI” helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

[–] Prethoryn@lemmy.world 15 points 1 year ago (2 children)

Look, I am all for weighing pros and cons. AI has massive benefits for humanity, and it has its issues, but this article is just silly.

Why in the fuck are you using ChatGPT to set a cancer treatment plan? When did ChatGPT claim to be a medical doctor?

Just go see a damn doctor.

[–] Kage520@lemmy.world 8 points 1 year ago (1 children)

I have been getting surveys asking my opinion on AI as a healthcare practitioner (pharmacist). I feel like they are testing the waters.

AI is really dangerous for healthcare right now. I'm sure people are using it to ask regular questions they normally Google. I'm sure administrators are trying to see how they can use it to "take the pressure off" their employees (then fire some employees to "tighten the belt").

If they can figure out how to fact check the AI results, maybe my opinion can change, but as long as AI can convincingly lie and not even know it's lying, it's a super dangerous tool.

[–] SirGolan@lemmy.sdf.org 14 points 1 year ago (12 children)

What's with all the hit jobs on ChatGPT?

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

This is the second paper I've seen recently to complain ChatGPT is crap and be using GPT3.5. There is a world of difference between 3.5 and 4. Unfortunately news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also it's not even ChatGPT if they're using that model. The paper is wrong (or it's old) because there's no way to use that model in the ChatGPT interface. I don't think there ever was either. It was probably ChatGPT 0301 or something which is (afaik) slightly different.

Anyway, tldr, paper is similar to "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"
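
The version complaint is concrete: every API request names an exact model snapshot, so evaluating one snapshot says nothing about another. A minimal sketch of the request shape the paper describes (payload only, no network call; the prompt text is a placeholder):

```python
# The "model" field pins the exact snapshot being evaluated. Swapping
# this one string is the entire difference between benchmarking the
# paper's GPT-3.5 snapshot and benchmarking GPT-4.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

paper = build_chat_request("gpt-3.5-turbo-0301", "Design a treatment plan for ...")
newer = build_chat_request("gpt-4", "Design a treatment plan for ...")
print(paper["model"], "vs", newer["model"])
```

Headlines that say "ChatGPT" while the experiment pinned an older 3.5 snapshot are, in effect, reporting on the left-hand string and implying the right-hand one.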

[–] Quexotic@infosec.pub 14 points 1 year ago* (last edited 1 year ago) (2 children)

This is just stupid clickbait. Would you use a screwdriver as a hammer? No, of course not. Anyone with even a little bit of sense understands that GPT is useful for some things and not others. Expecting it to write a cancer treatment plan is just outlandish.

Even GPT says: "I'm not a substitute for professional medical advice. Creating a cancer treatment plan requires specialized medical knowledge and the input of qualified healthcare professionals. It's important to consult with oncologists and medical experts to develop an appropriate and effective treatment strategy for cancer patients. If you have questions about cancer treatment, I recommend reaching out to a medical professional."

[–] sturmblast@lemmy.world 13 points 1 year ago

Why is anyone surprised by this? It's not meant to be your doctor.

[–] j4yt33@feddit.de 10 points 1 year ago (1 children)

Why would you ask it to do that in the first place??

[–] dmonzel@lemmy.world 7 points 1 year ago

To prove to all of the tech bros that ChatGPT isn't an actual AI, perhaps. At least that's the feeling I get based on what the article says.

[–] UnbeatenDeployGoofy@lemmy.ml 10 points 1 year ago (4 children)

I suppose most sensible people already know that ChatGPT is not the answer for medical diagnosis.

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

If the researchers wanted to investigate whether an LLM could be helpful, they should have fine-tuned a GPT-4/3.5 model specifically on cancer treatment plans and tested that thoroughly, rather than just entering prompts into the model publicly available from OpenAI.

[–] mwguy@infosec.pub 9 points 1 year ago* (last edited 1 year ago)

I asked a layperson to spend a week looking at medical treatment plans and related information on the internet, then asked him to guesstimate a treatment plan for my actual cancer patient. How could he have got it wrong?!

This is how I translate all of these "AI language model says bullshit" stories.

[–] SolNine@lemmy.ml 8 points 1 year ago (4 children)

GPT has been utter garbage lately. I feel as though it's somehow become worse. I use it as a search-engine alternative and it has RARELY been correct lately. I will respond to it, telling it that it is incorrect, and it will keep generating even more inaccurate answers. It's gotten to the point where it's almost entirely useless, whereas it used to at least find some of the correct information.

I don't know what they did in 4.0 or whatever it is, but it's just plain bad.

[–] unreachable@lemmy.my.id 8 points 1 year ago

ChatGPT/Bard is only the next iteration of MegaHAL.

That's why they call it a "large language model", not "artificial intelligence".

[–] KIM_JONG_JUICEBOX@lemmy.ml 8 points 1 year ago

Was this article summary written by ChatGPT?

[–] autotldr@lemmings.world 7 points 1 year ago

This is the best summary I could come up with:


According to the study, which was published in the journal JAMA Oncology and initially reported by Bloomberg – when asked to generate treatment plans for a variety of cancer cases, one-third of the large language model's responses contained incorrect information.

The chatbot sparked a rush to invest in AI companies and an intense debate over the long-term impact of artificial intelligence; Goldman Sachs research found it could affect 300 million jobs globally.

Famously, Google's ChatGPT rival Bard wiped $120 billion off the company's stock value when it gave an inaccurate answer to a question about the James Webb space telescope.

Earlier this month, a major study found that using AI to screen for breast cancer was safe, and suggested it could almost halve the workload of radiologists.

A computer scientist at Harvard recently found that GPT-4, the latest version of the model, could pass the US medical licensing exam with flying colors – and suggested it had better clinical judgment than some doctors.

The JAMA study found that 12.5% of ChatGPT's responses were "hallucinated," and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.


The original article contains 523 words, the summary contains 195 words. Saved 63%. I'm a bot and I'm open source!
