this post was submitted on 21 Oct 2024

Facepalm

[–] Gork@lemm.ee 179 points 1 month ago* (last edited 1 month ago) (4 children)

The solution here is obvious. Use ChatGPT to rebut her ChatGPT-generated arguments. Since it's now a bot arguing with a bot, it cancels out.

[–] superkret@feddit.org 56 points 1 month ago

Then while the bots are fighting, make out.

[–] boreengreen@lemm.ee 15 points 1 month ago* (last edited 1 month ago)

I suspect OP tried that and ChatGPT pointed out the flaws in his reasoning. It's not an option.

[–] 0x0@lemmy.dbzer0.com 114 points 1 month ago* (last edited 1 month ago) (8 children)

The thing that people don't understand yet is that LLMs are "yes men".

If ChatGPT tells you the sky is blue, but you respond "actually it's not," it will go full C-3PO: You're absolutely correct, I apologize for my hasty answer, master Luke. The sky is in fact green.

Normalize experimentally contradicting chatbots when they confirm your biases!

[–] Classy@sh.itjust.works 13 points 1 month ago

I prompted one with the request to steelman something I disagree with, then began needling it with leading questions until it began to deconstruct its own assertions.

[–] TheAlbatross@lemmy.blahaj.zone 108 points 1 month ago

Holy fuck, I'd bail. Fuck that, I wanna date a person, not a computer program.

[–] edgemaster72@lemmy.world 83 points 1 month ago (1 children)

"If you love ChatGPT so much why don't you marry it!?"

[–] miseducator@lemmy.world 35 points 1 month ago (1 children)
[–] jubilationtcornpone@sh.itjust.works 71 points 1 month ago* (last edited 1 month ago)

chatgpt says you're insecure

"jubilationtcornpone says ChatGpt is full of shit."

[–] Moah@lemmy.blahaj.zone 68 points 1 month ago (3 children)

Time to dump the middle woman and date ChatGPT directly

[–] IndiBrony@lemmy.world 67 points 1 month ago (3 children)

So I did the inevitable thing and asked ChatGPT what he should do... this is what I got:

[–] UnderpantsWeevil@lemmy.world 55 points 1 month ago (3 children)

This isn't bad on its face. But I've got this lingering dread that we're going to start seeing more nefarious responses at some point in the future.

Like "Your anxiety may be due to low blood sugar. Consider taking a minute to composure yourself, take a deep breath, and have a Snickers. You're not yourself without Snickers."

[–] Starbuncle@lemmy.ca 30 points 1 month ago (1 children)

That's where AI search/chat is really headed. That's why so many companies with ad networks are investing in it. You can't block ads if they're baked into LLM responses.

[–] DempstersBox@lemmy.world 14 points 1 month ago

Ahh, man made horrors well within my comprehension

Ugh

[–] madjo@feddit.nl 14 points 1 month ago (2 children)

This response was brought to you by BetterHelp and by the Mars Company.

[–] Hotspur@lemmy.ml 23 points 1 month ago (7 children)

Yeah, I was thinking he obviously needs to start responding with ChatGPT. Maybe they could just have the two phones use audio mode and have the argument for them instead. Reminds me of that old Star Trek episode where, instead of war, belligerent nations just ran a computer simulation of the war and then each side humanely euthanized that many people.

[–] hungryphrog@lemmy.blahaj.zone 12 points 1 month ago

Yeah, ChatGPT is programmed to be a robotic yes-man.

[–] AVincentInSpace@pawb.social 62 points 1 month ago

"chatgpt is programmed to agree with you. watch." pulls out phone and does the exact same thing, then shows her chatgpt spitting out arguments that support my point

girl then tells chatgpt to pick a side and it straight up says no

[–] phoenixz@lemmy.ca 58 points 1 month ago (1 children)

This is a red flag clown circus, dump that girl

[–] Gradually_Adjusting@lemmy.world 12 points 1 month ago (1 children)

He should also dump himself. And his reddit account.

[–] Rivalarrival@lemmy.today 57 points 1 month ago (4 children)

Two options.

  1. Dump her ass yesterday.

  2. She trusts ChatGPT. Treat it like a mediator. Use it yourself. Feed her arguments back into it, and ask it to rebut them.

Either option could be a good one. The former is what I'd do, but the latter provides some emotional distance.

[–] Species5218@sh.itjust.works 22 points 1 month ago (6 children)
  1. She trusts ChatGPT. Treat it like a mediator. Use it yourself. Feed her arguments back into it, and ask it to rebut them.

[–] herrvogel@lemmy.world 12 points 1 month ago

I like that the couple's arguments become a proxy war between two instances of ChatGPT.

[–] GBU_28@lemm.ee 56 points 1 month ago (1 children)

Just send her responses to your own chatgpt. Let them duke it out

[–] mwproductions@lemmy.world 22 points 1 month ago (2 children)

I love the idea of this. Eventually the couple doesn't argue anymore. Anytime they have a disagreement they just type it into the computer and then watch TV together on the couch while ChatGPT argues with itself, and then eventually there's a "ding" noise and the couple finds out which of them won the argument.

[–] GBU_28@lemm.ee 13 points 1 month ago* (last edited 1 month ago) (1 children)

Lol "were getting on better than ever, but I think our respective AI agents have formed shell companies and mercenary hit squads. They're conducting a war somewhere, in our names, I think. It's getting pretty rough. Anyway, new episode of great British baking show is starting, cya"

[–] Muffi@programming.dev 52 points 1 month ago (3 children)

I was having lunch at a restaurant a couple of months back, and overheard two women (~55 y/o) sitting behind me. One of them talked about how she used ChatGPT to decide if her partner was being unreasonable. I think this is only gonna get more normal.

[–] Wolf314159@startrek.website 43 points 1 month ago (1 children)

A decade ago she would have been seeking that validation from her friends. ChatGPT is just a validation machine, like an emotional vibrator.

[–] Trainguyrom@reddthat.com 14 points 1 month ago

The difference between asking a trusted friend for advice vs asking ChatGPT or even just Reddit is that a trusted friend will have more historical context. They've probably met or at least interacted with the person in question, and they can bring in the context of how this person previously made you feel. They can help you figure out if you're just at a low point or if it's truly a bad situation to get out of.

Asking ChatGPT or Reddit is really like asking a Magic 8 Ball. How you frame the question and simply asking the question helps you interrogate your feelings and form new opinions about the situation, but the answers are pretty useless since there's no historical context to base the answers off of, plus the answers are only as good as the question asked.

[–] CrowAirbrush@lemmy.world 30 points 1 month ago (4 children)

I wouldn't want to date a bot extension.

[–] dragonfucker 28 points 1 month ago (1 children)

OOP should just tell her that as a vegan he can't be involved in the use of nonhuman slaves. Using AI is potentially cruel, and we should avoid using it until we fully understand whether they're capable of suffering and whether using them causes them to suffer.

[–] Starbuncle@lemmy.ca 20 points 1 month ago* (last edited 1 month ago) (11 children)

Maybe hypothetically in the future, but it's plainly obvious to anyone who has a modicum of understanding regarding how LLMs actually work that they aren't even anywhere near being close to what anyone could possibly remotely consider sentient.

[–] Cryophilia@lemmy.world 15 points 1 month ago

but it’s plainly obvious to anyone who has a modicum of understanding regarding how LLMs actually work

This is a woman who asks chatGPT for relationship advice.

[–] netvor@lemmy.world 28 points 1 month ago

NTA but I think it's worth trying to steel-man (or steel-woman) her point.

I can imagine that part of the motivation is to try and use ChatGPT to actually learn from the previous interaction. Let's leave the LLM out of the equation for a moment: Imagine that after an argument, your partner would go and do lots of research, one or more of things like:

  • read several books focusing on social interactions (non-fiction or fiction or even other forms of art),
  • talk in depth to several experienced therapists and/or psychology researchers and neuroscientists (with varying viewpoints),
  • perform several scientific studies on various details of interactions, including relevant physiological factors.

Then after doing this ungodly amount of research, she would come back and present her findings to you, in hopes that you will both learn from this.

Obviously no one can actually do that, but some people might -- for good reason of curiosity and self-improvement -- feel motivated to do that. So one could think of the OP's partner's behavior like a replacement of that research.

That said, even if LLMs weren't unreliable, hallucinating, and poisoned with junk information, or even if she were magically able to do all that without an LLM and with a superhuman level of scientific accuracy and bias protection, it would ... still be a bad move. She would still be the asshole, because OP was not involved in any of that research. OP had no say in the process of formulating the problem, let alone in the process of discovering the "answer".

Even from the most nerdy, "hyper-rational" standpoint: the research would still be ivory-tower research, and assuming that it is applicable in the real world like that is arrogant: it fails to admit the limitations of the researcher.

[–] Thcdenton@lemmy.world 26 points 1 month ago (5 children)

I'm a programmer; I've already argued with ChatGPT more than any woman.

[–] synae@lemmy.sdf.org 23 points 1 month ago

South Park did it

[–] MooseTheDog@lemmy.world 23 points 1 month ago

She's training herself on AI generated output. We already know what happens when AI trains on AI

[–] skvlp@lemm.ee 22 points 1 month ago (4 children)

Ok, is this a thing now? I don’t think I’d want to be in what is essentially a relationship with chat GPT…

[–] Contramuffin@lemmy.world 11 points 1 month ago (1 children)

Yes... I know some people who rely exclusively on ChatGPT to mediate their arguments. Their reasoning is that it allows them to frame their thoughts and opinions in a non-accusatory way.

My opinion is that ChatGPT is a sycophant that just tries to agree with everything you say. Garbage in, garbage out. I suppose if the argument is primarily emotionally driven, with minimal substance, then having ChatGPT be the mediator might be helpful.

[–] HubertManne@moist.catsweat.com 22 points 1 month ago

My wife likes to jump from one thing to another when I try to delve into any particular aspect of an argument. I guess what I'm saying is that arguments are always going to suck and not necessarily be rational. ChatGPT does not remember every small detail, as she is the one inputting the details.

[–] Kolanaki@yiffit.net 21 points 1 month ago (2 children)

"Guinan from my Star Trek AI chatbot says you're acting immature!"

[–] SkyNTP@lemmy.ml 18 points 1 month ago

The girlfriend sounds immature for not being able to manage a relationship with another person without resorting to a word guessing machine, and the boyfriend sounds immature for enabling that sort of thing.

[–] Reygle@lemmy.world 18 points 1 month ago (1 children)

"I use ChatGPT for" <- at this point I've already tuned out, the person speaking this is unworthy of attention

[–] parody@lemmings.world 13 points 1 month ago

“…for trying to understand sarcasm as an autistic person”

“…for translation until I find DeepL”

“…short circuiting negative thought loops”

(JK, probably to do a bad job at something stupid)
