this post was submitted on 04 Aug 2024
329 points (98.8% liked)

Programming

Researchers conducted experimental surveys with more than 1,000 adults in the U.S. to evaluate the relationship between AI disclosure and consumer behavior.

The findings consistently showed that products described as using artificial intelligence were less popular.

“When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions,”

top 24 comments
[–] magic_lobster_party@kbin.run 85 points 3 months ago (2 children)

Using AI in the marketing is a sign you don't have much else to show for it. People see through this. Your product should be strong even without having to mention AI.

[–] marcos@lemmy.world 39 points 3 months ago

Using AI in the marketing is a sign you don't have much else to show for it.

To me it's a sign that you have spyware included, will depend on a perfect network infrastructure, and will stop working in 2 years.

But my guess is that for most people it's a sign the product is made by some psychopath-like company like Facebook.

[–] MonkderVierte@lemmy.ml 16 points 3 months ago* (last edited 3 months ago) (1 children)

That, and LLMs confidently making up "facts". And since LLMs are the form of AI with the most direct exposure to users, this is what happens.

[–] magic_lobster_party@kbin.run 7 points 3 months ago (2 children)

I believe there are uses for LLMs beyond being "fact bots". I see them more as a "universal text processor": you already have a text, and you want it rewritten in a different style or language. Or you want to extract pieces of information from a text into something machine-readable. Or maybe convert instructions in natural language into machine instructions.

All the facts are at hand. It just converts the given information to something else.
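The "universal text processor" pattern can be sketched roughly like this. Here `call_llm` is a hypothetical stand-in for whatever model API you would actually use, and it returns a canned response so the sketch is self-contained; the point is the shape of the pattern: demand strict JSON from the model, then validate what comes back rather than trusting it blindly.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. Returns a canned
    # response so this sketch runs without any API access.
    return '{"name": "Ada Lovelace", "year": 1843}'

def extract_fields(text: str, fields: list[str]) -> dict:
    """Ask the model to pull named fields out of free text as JSON."""
    prompt = (
        "Extract the following fields from the text below and reply with "
        f"a JSON object containing exactly these keys: {fields}.\n\n"
        f"Text:\n{text}"
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # fails loudly if the model didn't return JSON
    missing = [f for f in fields if f not in data]
    if missing:  # validate the shape instead of trusting the output
        raise ValueError(f"model response is missing fields: {missing}")
    return data

print(extract_fields(
    "Ada Lovelace published her notes on the Analytical Engine in 1843.",
    ["name", "year"],
))
```

All the facts are in the input text; the model only reshapes them, and the validation step catches at least the structural failures (it cannot, of course, catch a plausible-looking wrong value).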

[–] Kissaki@programming.dev 4 points 3 months ago (1 children)

At work, we recently talked about AI. One use case mentioned (by an AI consulting firm, not us or actually suggested for us) was meeting summaries and extracting TODOs from them.

My stance is that AI could be useful for summaries, so you can see what topics were being talked about. But I would never trust it to extract the (or all) significant points, TODOs, or agreements. You still need humans to do that, and to get explicit agreement on and confirmation of the list in or after the meeting.

It can also help to transcribe meetings. It could even translate them. Those things can be useful. But summarization should never be considered factual extraction of the significant points. Especially in a business context, or anything else where you actually care about being able to trust information.

I wouldn't [fully] trust it with transforming facts either. It can work where you can spot inaccuracies (long text, lots of context), or where you don't care about them.

Natural language instructions to machine instructions? I'd certainly be careful with that, and want to both contextualize and test-confirm it works well enough for the use case and context.

[–] magic_lobster_party@kbin.run 2 points 3 months ago

Natural language instructions to machine instructions? I'd certainly be careful with that, and want to both contextualize and test-confirm it works well enough for the use case and context.

I’m imagining it to be quite limited. Mostly to talk with appliances in a way that’s more advanced than today. Instructions like “gradually dim down the lights in the living room until bedtime”, or “dim down the lights in the living room when we watch a movie on TV”.

[–] pkill@programming.dev 2 points 3 months ago

we had plenty of more deterministic tools for parsing human-readable text into machine-readable formats long before LLMs

[–] Hugh_Jeggs@lemm.ee 48 points 3 months ago (2 children)

Well duh. Stupid people are scared of the phrase, and clever people are annoyed by it. That just leaves the Belgians

[–] Gsus4@programming.dev 25 points 3 months ago (2 children)

That sounds like a Dutch joke.

[–] Hugh_Jeggs@lemm.ee 11 points 3 months ago

Sounds like something a Belgian would say 🧐

[–] xilliah@beehaw.org 6 points 3 months ago* (last edited 3 months ago)

Do you know how copper wire was invented? Two Belgians found a penny on the ground.

[–] Anticorp@lemmy.world 1 point 3 months ago

The Belgians are fine, but we don't want the Dutch!

[–] MrFappy@lemmy.world 43 points 3 months ago (1 children)

I honestly see things touting AI as an absolute gimmick, and beyond untrustworthy. So this definitely tracks. It’s been shown that AI isn’t at a level where using it for anything is beneficial; in fact, it’s the contrary. Marketers think that folks will see AI as making all things better, but if Google’s recent implementation is any indicator, it’s something to steer clear of for the foreseeable future.

[–] Kissaki@programming.dev 4 points 3 months ago* (last edited 3 months ago)

It’s been shown that AI isn’t at a level where using it for anything is beneficial; in fact, it’s the contrary.

Maybe you're thinking of something more specific than I am, but I don't think that's the case. What's being called AI is a broad field.

I think what Opus was able to implement for high packet-loss voice transmission is exceptional.

I also find Visual Studio inline completions to be very useful.

That's far from the typical Chatbots and whatnot though. Which often suck.

[–] valkyre09@lemmy.world 39 points 3 months ago (1 children)

I was on Amazon the other day buying a replacement light.

A little banner on the item description advertised the new 2024 model. It has “ai integration” of some sort. Same price. I actively chose the older model. I can’t be the only one who thinks like this.

[–] ripcord@lemmy.world 34 points 3 months ago (1 children)

I can’t be the only one who thinks like this.

I read an article recently that said this is true. Will try to find it.

[–] Saber_is_dead@lemmy.world 11 points 3 months ago

you know... I think I saw a comment on here not too long ago about this kind of thing as well.

[–] nitefox@sh.itjust.works 39 points 3 months ago

Yeah, I also never buy products that would work perfectly fine with a local network but that somehow require a connection to remote servers and whatnot.

Besides, most of these AI labels are just plain ol’ algorithms lol

[–] gpopides@lemmy.world 24 points 3 months ago

AI in the product name or description makes sure that there is not a single chance I buy it.

It makes filtering products and companies easier

[–] nous@programming.dev 11 points 3 months ago (1 children)

But it increases investors' willingness to invest... That is all that really matters these days.

[–] talkingpumpkin@lemmy.world 4 points 3 months ago (1 children)

Does it still? Looks like the bubble is about to burst

[–] nous@programming.dev 6 points 3 months ago

Well, that is how bubbles form. People only stop investing while/after it has burst; that is basically the definition of a bubble bursting.

[–] Anticorp@lemmy.world 8 points 3 months ago

Yes, same thing with "smart", or "app controlled". What I read with any of those terms is "spies on you".

[–] MrLLM@ani.social 8 points 3 months ago

I hope they don’t start using ML or my name, otherwise I’ll be ruined.