this post was submitted on 24 Mar 2024
105 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

A tweet from ardent promptfondler Misha Saul (archive):

A friend sent me MRI brain scan results and I put it through Claude.

No other AI would provide a diagnosis, Claude did.

Claude found an aggressive tumour.

The radiologist report came back clean.

I annoyed the radiologists until they re-checked. They did so with 3 radiologists and their own AI. Came back clean, so looks like Claude was wrong.

But look how convincing Claude sounds! We're still early...

Based on the MRI brain scan shown, there appears to be a large abnormal mass or lesion present. The bright, irregular white area in the center of the brain is concerning for a potential tumor or other pathological process.

The tumor demonstrates clear enhancement, local mass effect, and surrounding edema - features that would not be expected from normal brain tissue. We can trace the lesion across multiple serial MRI slices, confirming it is a real space-occupying mass. The location is also typical for a primary glioma.

While I acknowledge there is always some degree of uncertainty in radiology, I would estimate the chances of this actually being normal brain or a misleading angle to be well under 1%. The full set of images builds a compelling, consistent picture of an aggressive brain tumor.

I asked it to review more angles:

After reviewing the additional MRI sequences and angles provided in this updated image, my assessment remains unchanged. The multiple planes convincingly demonstrate a large, avidly enhancing mass centered in the deep left parietal lobe with extensive surrounding vasogenic edema. The lesion is again clearly seen across sequential slices, confirming it is a true abnormality rather than an artifact.

Key observations:

1. Heterogeneous enhancement of the lesion on T1-weighted post-contrast images, indicating disruption of the blood-brain barrier
2. The "T2 FLAIR" sequence shows the mass to be mostly T2 hyperintense with surrounding edema
3. Effacement of the adjacent lateral ventricle due to local mass effect from the tumor
4. Midline shift to the right from the expansile effect of the lesion

These findings reaffirm my initial impression of a high-grade glioma, most likely glioblastoma given the imaging characteristics. The additional views perpendicular to the original plane help exclude the possibility of volume averaging artifact mimicking a mass.

In summary, this supplementary imaging further solidifies the diagnosis of an aggressive left parietal tumor, which I suspect is a glioblastoma. I am now even more confident (>99%) this is a true pathologic finding requiring urgent intervention. Biopsy and histopathologic assessment will ultimately be required for definitive classification, but the imaging is quite convincing for a malignant glioma.

Multiple replies point out that the LLM is literally just putting together MRI-sounding phrases, and that Saul just wasted the valuable time of three expensive professionals on nonsense he should have known was nonsense.

Saul is unfazed:

Impressed with the vitriol my post has attracted

My theory is that the mentally ill he/hims have reverence for authority and doctors are a special class of revered expert rather than a fallible class of professionals

Or maybe trying to use tech is inherently suspicious? 🤷‍♂️

He then doubles down on the fabulous promptfondling future:

Obviously the ideal state is doctors are entirely obsolete

Instant diagnosis, instant treatment

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

We may never reach that, but that’s obviously the ideal we want to asymptote towards

and a magical flying unicorn pony with the wrong number of legs

top 50 comments
[–] gerikson@awful.systems 57 points 8 months ago* (last edited 8 months ago) (2 children)

Normal person: an LLM is trained on publicly available images of MRIs, most with tumors, so presenting an image of any MRI will naturally generate text related to brain tumor descriptions.

Brain-addled prompt fondlers: clearly this response proves Claude is more intelligent than any doctor.

[–] Soyweiser@awful.systems 20 points 8 months ago (1 children)

Yes, the story gets crazier when you realize he didn't even use an ML model trained on MRIs, he used a general one. And apparently he now refuses to believe radiologists actually use the old-style ML models trained only on MRIs.

[–] dgerard@awful.systems 22 points 8 months ago* (last edited 8 months ago) (2 children)

it's important to notice that this is an actual real life use case for ML - an AI that will quickly flag positives in a batch of scans. This is real and is used. With due caution by people who know what the fuck they're doing. None of these fucks have heard of it, of course.

[–] self@awful.systems 21 points 8 months ago

With due caution by people who know what the fuck they’re doing.

this is one of the tip-offs I use to quickly differentiate AI crackpottery and legitimate ML. anything legitimate will prominently display its false positive and negative rates, known limitations, and procedures for fucking checking the data by hand (with accompanying warnings and disclaimers if you fail to do this). AI bullshit very frequently skips all that, because the numbers don’t look good and you’re more likely to get VC funding if you hide them
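
(A minimal sketch, with entirely made-up counts, of the kind of error-rate reporting being described here. A legitimate medical-imaging tool publishes numbers like these for a held-out test set; the chatbot demo above publishes none. The counts below are illustrative assumptions, not real figures.)

```python
# Hedged illustration only: hypothetical confusion-matrix counts for a
# purpose-built scan classifier evaluated on a held-out test set.
true_positives = 42    # tumours correctly flagged
false_positives = 9    # clean scans incorrectly flagged
true_negatives = 930   # clean scans correctly passed
false_negatives = 3    # tumours missed

sensitivity = true_positives / (true_positives + false_negatives)  # true positive rate
specificity = true_negatives / (true_negatives + false_positives)  # true negative rate

print(f"sensitivity: {sensitivity:.3f}")
print(f"specificity: {specificity:.3f}")
print(f"false positive rate: {1 - specificity:.3f}")
print(f"false negative rate: {1 - sensitivity:.3f}")
```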

[–] froztbyte@awful.systems 17 points 8 months ago

Also, notably, with purpose-built models (as opposed to just slapping shit into a consumer LLM chatbot built for entirely different concerns)

[–] jacksilver@lemmy.world 16 points 8 months ago

God I didn't even think about the imbalance of the publicly available MRI images. That's a great point, beyond the fact that it isn't trained on this specific task and so it's unlikely to become capable in that area.
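
(To put some made-up numbers on that point: a minimal sketch of why a model whose training data skews heavily towards tumour scans looks much worse when applied to a population where tumours are rare. The sensitivity, specificity, and prevalence values below are assumptions for illustration, not measurements from any real model.)

```python
# Hedged illustration: how the base rate changes what a "positive" is worth,
# via Bayes' rule. All numbers here are hypothetical.
sensitivity = 0.90   # assumed P(flagged | tumour)
specificity = 0.90   # assumed P(not flagged | no tumour)

def positive_predictive_value(prevalence: float) -> float:
    """P(tumour | flagged) for a given prevalence of tumours in the input."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Scraped "public MRI images" skew towards interesting (tumour) cases...
print(f"PPV if 80% of inputs show tumours: {positive_predictive_value(0.80):.2f}")
# ...but in a screening population tumours are rare, so most flags are wrong.
print(f"PPV if 2% of inputs show tumours:  {positive_predictive_value(0.02):.2f}")
```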

[–] pikesley@mastodon.me.uk 53 points 8 months ago (1 children)

@dgerard In a previous era, this reads something like "I consulted a psychic who told me I had a brain tumour, and now I have restraining orders from three radiologists"

[–] owen@lemmy.ca 25 points 8 months ago

"doctor, I saw jupiter rise in eve. My son NEEDS the lobotomy. TODAY!"

[–] gerikson@awful.systems 40 points 8 months ago (1 children)

Related:

https://hachyderm.io/@dfeldman/112149278408570324

If you feed AI an MRI, it will happily write a detailed and very convincing diagnosis...

even if the patient is a dead salmon.

[–] YouKnowWhoTheFuckIAM@awful.systems 12 points 8 months ago (1 children)

Holy shit, release the classics!

I don't know when it was decided that all medical technologies will be placebo tested on a dead fish, but it is my new favorite thing.

[–] owen@lemmy.ca 36 points 8 months ago* (last edited 8 months ago) (1 children)

This guy really just said "asymptote towards" instead of "approach" in a damn tweet. Buddy, check your brain, I think it's running.

Might even be a tumor.

[–] dgerard@awful.systems 34 points 8 months ago

BEAT SKELTON on Twitter points out:

MY ONLY CONTRIBUTION TO THE MAIN CHARACTER OF THE DAY IS THAT THE RADIOLOGISTS PROBABLY DIDN'T ACTUALLY USE AN AI, THEY JUST TOLD MISHA THEY DID TO MAKE HIM SHUT UP.

[–] swlabr@awful.systems 32 points 8 months ago (1 children)

Sovereign citizen energy.

Can’t wait for an AI crank to demand cancer treatment because Siri looked sideways at a blurry photo of a lump on their butt. Can’t wait for a Scott to write a script/dialogue tree for getting DIY chemo kits.

[–] skillissuer@discuss.tchncs.de 10 points 8 months ago* (last edited 8 months ago) (2 children)

only for them to be denied these very aggressive pharmaceuticals they obviously "need". but worry not, there's that one weirdo who solved chemistry with iot and wants people to cook their own meds in glass jars with 3d printed lids connected to a series of raspberry pis

e: they don't have to be libertarian, i've just assumed it based on the sheer impracticability of that nonsense and the little regard for actual safety for the end user. they do have that techbroey tunnel vision: solving all the problems they understand and assuming it'll just work for the problems they don't. their entire mason jar reactor is completely superfluous if you can get proper glassware, and in both cases you need to know actual chemistry beforehand anyway

[–] skillissuer@discuss.tchncs.de 10 points 8 months ago (2 children)

if you have no fucking clue what i'm talking about, it's this and it has a response from a professional

[–] corbin@awful.systems 6 points 8 months ago (4 children)

Wow, I hadn't read Lowe's response before, and it is capitalist cringe of the highest order. Thanks for sharing.

To be clear: I agree with every chemical and pharmacological critique leveled at the anarchists here. I also think that none of them have addressed the actual problem that the anarchists are solving, which is that medicinal chemistry has undergone so much regulatory capture that it is no longer legal to perform it at home for one's own private use or even to reverse-engineer the synthesis pathways. For more commentary on this, I recommend watching e.g. NurdRage reverse-engineering pyrimethamine and paying attention to what they say about obtaining precursors and carrying out various steps of synthesis.

[–] skillissuer@discuss.tchncs.de 6 points 8 months ago* (last edited 8 months ago) (2 children)

continuing on that capitalist cringe, i'd just point out that a fair bit of the risky (in the business sense) fundamental research that ultimately goes into new drugs is conducted at universities, which then get fuck all of those profits even if there's a licensing agreement

although from what i understand the biggest leech on the for-profit american healthcare system is the insurance layer

[–] skillissuer@discuss.tchncs.de 6 points 8 months ago

now i see the issue with pyrimethamine, namely that one step sideways gets you a close P2P analogue, which is a big regulatory no-no

[–] froztbyte@awful.systems 5 points 8 months ago (2 children)

hmm

you know, I'm not entirely opposed to the sentiment. but I know enough about the domain (and I should stress that I know only a very small amount) to know that it's something you want to do properly, and I expect a lot of people doing this would not be in that bucket

[–] froztbyte@awful.systems 12 points 8 months ago (1 children)

doubly so because "just fix the fucking US medical system" is the actual answer for a lot of the example targets

[–] mountainriver@awful.systems 7 points 8 months ago

They are trying to solve the problem of gigantic markups - because of legal monopolies through patents - on advanced chemical products where dosage and quality control is literally life or death. Their solution is to do it yourself in a garage.

Couldn't they have at least tried parallel imports from quality-controlled production in countries with less gigantic markups?

Or, if they could have stopped playing Robin Hood for a second and looked at the systemic problems instead, there was a proposal at the WHO some ten years ago to reform pharmaceutical research and development with direct funding and then releasing the results, basically creating a direct-to-generics pipeline. The US shot it down, of course, which means public relations campaigns in the US would be great. But no, DIY in the garage.

(If anyone is interested in details the proposal was called "delinkage".)

[–] skillissuer@discuss.tchncs.de 9 points 8 months ago* (last edited 8 months ago) (3 children)

oh no i threw an offhand remark and now i have to elaborate

well i don't see their actual procedures anywhere so i'm going off my memory, they claimed to be able to make naloxone for example

this works like this: first react oxycodone with allyl bromide (chloride?), then add that quaternary ammonium salt to a sodium ethylthiolate solution, and this should take off both the N-methyl from the ammonium and the O-methyl from the phenolate. peachy, there's your naloxone (do i need to draw this?)

and on paper it's all fine, but the problems pile in. for example, allyl bromide is fucking nasty, it's volatile and pretty toxic, you definitely don't want untrained people to handle it. (you can hardly catch me in a lab wearing a labcoat, but this is one of the reagents i'll have it on for, and handle only under a fumehood.) ethanethiol, if mishandled, will make an entire city block think there's a gas leak. and that's only the starting materials: oxycodone is obviously an active compound, and iirc they dissolved it in DMF, and some polar solvents like this have that nasty property of basically carrying compounds through skin into the bloodstream (that's how gel patches with fentanyl or some hormones work). anyway,

that all works if it works, but if, for example, the allyl bromide somehow flies away, and it can because it's volatile, then you don't have what you think you have. that means you have fucked up, and if you don't catch this and go through anyway, you'll get oxymorphone instead of naloxone, which has the opposite pharmacological effect. that's a pretty fucking bad thing, especially when it's advertised as "cook up your own opioid overdose antidote at home". the second step might not work at all if the solvents or glassware are wet, for example, or if there are problems with stirring. this is not hard hard chemistry, medchem is made this way on purpose to be as easy and reliable as it gets - to get rid of weird problems in industrial scale synthesis and such (organic chemists who develop new reactions are sometimes opposed in industry when they introduce a new thing that makes everyone's life easier, on the basis that it's a new thing) - but it has to be done right with proper analysis, and i can't emphasize this enough. you can't do any analysis with glass jars and diy syringe pumps; probably the best thing available is melting point determination, and i wouldn't trust even that

there's a reason why it's laboratory glassware and not plasticware; i don't trust these 3d printed parts or silicone tubing to survive anything substantial. in the lab we use glass, teflon, and some stainless steel for less demanding things

then, where do you even get the starting materials, purify them (harder than it sounds, esp when you need to get water and oxygen out, and this includes solvents), and measure them out properly? making a miniature jacketed reactor is not the hard part of doing chemistry; the equivalent thing can be done with an rbf, some cooling or heating bath, a thermometer and a magnetic stirrer. you know, like in every lab scale synthesis from undergraduate up. the harder bit is the everything-else aspect, it's very much a draw-the-rest-of-the-fucking-owl situation

the second link is derek lowe's commentary on this; he's been in that industry for decades by now. his response has a bit of that bitchy "we fearless capitalists made that thing" thing, and he described himself as a republican once (non-trumpist), but he's not an actual monster and he knows what he's talking about

i get where this sentiment comes from, but i think beehive is more reliable at the thing that project tries to do, that is, making sure that some people know how to make some pharmaceuticals (or similar) unofficially in their garage. but the scope is different, the compounds are different, the target audience is different, and beehive expects some lab experience, won't hold your hand all the way, and you're very much on your own, legally, organizationally, and such

[–] cstross@wandering.shop 8 points 8 months ago (2 children)

@skillissuer Ex-pharmacist here: this just touches the tip of the iceberg. For example, what's your source of pure oxycodone? If you're grinding up tablets, you're also getting bulk adjuvants, bursting agents, probably fragments of osmotic release membrane, and a bunch of other non-pharmacologically active chemicals. And what's your output? Sure naloxone is injectable, but you're going to want it dissolved in an isotonic solution, sterile, with preservatives, and a syringe ...

[–] cstross@wandering.shop 7 points 8 months ago

@skillissuer Basically a drug (chemical that has a pharmacological effect) is not a medicine (a formulation for delivering a drug to the tissue where it will have the desired pharmacological effect) any more than a naked internal combustion engine is an automobile.

[–] skillissuer@discuss.tchncs.de 5 points 8 months ago

this all of course goes with a TINY LITTLE ASSUMPTION that whoever tries to do any of that even gets their hands on the required starting materials. this can be harder than it sounds

[–] Soyweiser@awful.systems 5 points 8 months ago

some polar solvents like this have that nasty property of basically carrying compounds through skin into the bloodstream

This reminded me how amazed I was to discover this was real. The first time I read about this was in a Shadowrun (the pnp rpg system) rulebook. Was fun to see they used real-world stuff. (The 'why megacorps want access to stock markets illegally etc' book was a lot less fun once I learned it was based on real stuff.)

[–] froztbyte@awful.systems 6 points 8 months ago

there’s that one libertarian weirdo who solved chemistry with iot

wait what, do tell

[–] mii@awful.systems 30 points 8 months ago* (last edited 8 months ago)

This is the next level of “I put my symptoms into Google and WebMD told me I have cancer”.

My compassion goes out to any doctors who now not only have to explain to several idiots every day that a slight pain in their pinky finger does not, in fact, mean they probably have ball cancer, but also that some vaguely professional-sounding fluff generated by a chatbot and disguised as a diagnosis doesn't mean they probably have ball cancer either.

[–] blakestacey@awful.systems 28 points 8 months ago (3 children)

"I trained an AI on fan-wiki summaries of House episodes, and my friend needs treatment for lupus now!"

[–] silliewous@feddit.nl 15 points 8 months ago (1 children)

Worst. AI. Ever. It’s never Lupus.

[–] blakestacey@awful.systems 10 points 8 months ago* (last edited 8 months ago)

"Wait, are we after the second commercial break? Then the AI says it can't be lupus, and my friend's prostate is about to explode! Yes, it can tell that from a picture of his brain. And also it's flirting with Wilson, like, a lot."

[–] Soyweiser@awful.systems 10 points 8 months ago

Ha, this fits. I have noticed a tendency for 'AI, give me a summary of this' to transform content like 'this story is not about X, I'm not arguing X' into 'this story is arguing X'

[–] swlabr@awful.systems 6 points 8 months ago

“I trained my model on AO3, why aren’t we funding a cure for Hanahaki???”

[–] Deceptichum@sh.itjust.works 26 points 8 months ago (1 children)

Just looking at this person's recent comments, they're clearly not even worth the time of day it takes to be outraged at their stupidity in the first place.

[–] dgerard@awful.systems 29 points 8 months ago (1 children)

LOOK AT MY PREGNANT TRADWIFE MIDJOURNEY OUTPUT

[–] blakestacey@awful.systems 15 points 8 months ago (1 children)

July 2022: Midjourney released

March 2024: "pregnant tradwife Midjourney tweets" is an instantly recognizable type of guy

[–] hydroptic@sopuli.xyz 9 points 8 months ago (1 children)

April 2024: God dies of terminal cringing

[–] BlueMonday1984@awful.systems 7 points 8 months ago

May 2024: Things somehow get worse

[–] turkishdelight@lemmy.ml 25 points 8 months ago

Why are you expecting an LLM to be accurate in medical diagnosis? Your expectations are wrong.

Oh great the newest use case for chatbots: pestering experts

[–] V0ldek@awful.systems 18 points 8 months ago

Jean Claude Van Scamme

[–] FermiEstimate@lemmy.dbzer0.com 17 points 8 months ago (1 children)

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

Now I'm imagining this guy trying to plug an ethernet cable into an MRI's helium line.

[–] bbuez@lemmy.world 9 points 8 months ago (1 children)

PC in the room with the MRI, hard drive now completely destroyed

[–] Soyweiser@awful.systems 15 points 8 months ago* (last edited 8 months ago) (1 children)

Look at this tweet (from 2020) https://twitter.com/misha_saul/status/1321760123413360643 he hits all the marks. EA/Crypto/The Rationalistsphere/Musk.

Bonus points, comics! (Putting scarface on the most evil spot was a nice bit of bait however).

[–] froztbyte@awful.systems 13 points 8 months ago (1 children)

truly a renegade thinker. entirely unbecome by the petty mild thoughts of the masses, out on the edge

[–] Soyweiser@awful.systems 7 points 8 months ago

I love the added contrarianism/on-the-edge thinking of saying both sides are wrong about EA/Crypto/Musk/etc. Truly revolutionary thinking.

[–] toiletobserver@lemm.ee 13 points 8 months ago

Every system is perfectly designed to get the result that it does.

– W. Edwards Deming

[–] Holyginz@lemmy.world 11 points 8 months ago* (last edited 8 months ago)

Did he seriously say asymptote towards? He couldn't sound like more of a tool if he tried.
