As a healthcare practitioner (pharmacist), I have been getting surveys asking my opinion on AI. I feel like they are testing the waters.
AI is really dangerous for healthcare right now. I'm sure people are using it to ask regular questions they normally Google. I'm sure administrators are trying to see how they can use it to "take the pressure off" their employees (then fire some employees to "tighten the belt").
If they can figure out how to fact-check the AI's results, maybe my opinion will change, but as long as AI can convincingly lie without even knowing it's lying, it's a super dangerous tool.
For me the issue isn't the tool, it's people. The tool is just that: a tool.
I always like to compare these things to other physical tools. If you take a Phillips screwdriver to a flathead screw, you don't blame the tool; you blame yourself for bringing the improper tool, because as a human you can make mistakes. As a human you should have figured out beforehand, "do I need a flathead or a Phillips?" There are tools capable of doing the job and doing it properly.
The same goes if you're operating a piece of machinery: if you take a forklift to demolish a house, you probably aren't going to get very far.
All of these tools were designed to make life easier and add something positive to it, but how you use the tool is what matters.
The same goes for a gun. I am not a gun-ownership kind of guy, because of all the shit human beings who just can't use one properly, or only claim to use it properly. Guns get more complicated and so do their use cases, but the truth is a gun was designed to kill or to defend against being killed (this is not a topic about gun rights, I'm just using it as an example). However, in the hands of the wrong person a gun can kill unintentionally. That isn't the gun's fault; after all, its design was to kill.
ChatGPT wasn't designed to kill, inherently. It wasn't designed to do anything other than take in huge databases of information and produce what it thinks is correct. If you as a person don't know how to use it or what to do with it properly, and you aren't seeking actual medical attention or advice from a professional, then I think that is the person's fault.
ChatGPT can't carry a disclaimer for every little thing. A car with a recall issue, on the other hand, can. If you want to compare it to a faulty part in a car, then sure: modify ChatGPT to simply not provide medical advice.
See, tools can be changed midway through. The tool isn't the problem; how the person uses the tool is the issue. Access to that tool, and what that tool has access to, can be an issue, but the great thing about tools is that laws can change and tools can change.
It isn't the AI's fault if your legislature doesn't care to enforce that change or law, the same legislature that half of Lemmy opposes literally all the time. Tools are only as good as the ways they can be used.
So let's say, for argument's sake, the tool is dangerous, and in your defense it absolutely can be used dangerously. Do you call upon the government to shut it down, just like you would call upon the government to regulate or change gun laws?
Do you also ignore the positive impacts ChatGPT can have because it does something else terribly? Imagine that medical professionals create a modified version that does provide good, accurate, professional medical advice. What then? Is ChatGPT still bad? It's not out of the realm of possibility. AI isn't the enemy just because someone in leadership decided to fire you; leadership is the enemy. Tools are only as bad as the people using them.
Or, in the case of a recalled car that can kill, they are as bad as the manufacturer making them. I don't deny you can get a bad car or a bad screwdriver. My point is that if you let the bad outweigh the good, then you are missing the point. The bad should be handled by people who understand it better and can design laws and tools that enforce better usage and make things less bad. So again: don't blame the tool; blame the people who aren't protecting you with said tool.