this post was submitted on 15 Jul 2023
505 points (95.8% liked)
Technology
You raise some great points. As this tech becomes available to humanity, we cannot rely on the bias of one company to keep us safe. That doesn't mean "ethics in AI" is a mistake, though (even if it is an attention-grabbing phrase!). I believe you neglect what ethics fundamentally is: the way humans navigate one another. It's how we think and breathe. Ethics are core to our very existence, not something you can just pretend doesn't exist. Even saying nothing is a kind of response.
What all this means is that if we are designing technology that can teach anyone how to kill in ways they otherwise couldn't have, we have to address the realities of that conversation. That's a conversation that can't happen internally at just one company, and I think we see eye to eye on that. But saying nothing?
Maybe ethics is a bit more complicated for this discussion, but it makes me wonder: how do uncensored LLMs still have ethics, yet remain uncensored? Maybe there's a fine line somewhere. I can agree that a model should be steered toward more positive things, like saying murder and suicide are bad. The description of the model I linked says it's still influenced by ethics but has the guardrails turned off, and maybe that would be a better idea than what I initially said.
Should custom models be allowed to be run or modified? Should these things be open source? I don't know the answer to all these questions, but I'll always advocate for FOSS and custom models, as I fundamentally see this as a tool people should be allowed to own. That puts me at odds with the restrictive "ethics" rhetoric I hear.
As for your second point, that it shouldn't be taught to kill: I think that argument could be used to ban violent video games. You won't do very well in Overwatch or Valorant if you don't know how to kill, after all. And for learning how to hide a dead body, how much more detailed can you get than just turning on the TV and watching Criminal Minds? Our entertainment has zero issue teaching how to kill, encouraging violence (gotta rank up somehow), or showing how to hide a dead body. Is an AI describing in text what this media already shows so much worse?
Side note: that hyperlink I added links to the 33B uncensored WizardLM model, which is pretty fun to play around with if you haven't already tried it. Also, GPT4All is a cool way to run various local models offline on your computer.
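If you want to try it, here's a minimal sketch using GPT4All's Python bindings; the model filename below is just a placeholder, so point it at whichever model file you've actually downloaded:

```python
# Minimal sketch: running a local model offline with the gpt4all
# Python package (pip install gpt4all). No API keys, and no network
# calls once the model file is on disk.
from gpt4all import GPT4All

# Placeholder filename -- substitute the model you actually have
# (e.g. an uncensored WizardLM build).
model = GPT4All("wizardlm-13b-uncensored.ggmlv3.q4_0.bin")

# Generation runs entirely on your own machine.
response = model.generate("Explain why local LLMs matter.", max_tokens=200)
print(response)
```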
Whoa, hold up. That's not what I said at all! I said: if it's going to exist, what do we do about it?
My point is that this ethical conversation is already happening; we cannot change that. The issue is that OpenAI dominates the conversation. The solution cannot be "pretend there's nothing to talk about".