this post was submitted on 30 Jul 2024
500 points (91.0% liked)

Firefox

17938 readers
11 users here now

A place to discuss the news and latest developments on the open-source browser Firefox

founded 4 years ago

PSA (?): just got this popup in Firefox when i was on an amazon product page. looked into it a bit because it seemed weird and it turns out if you click the big "yes, try it" button, you agree to mandatory binding arbitration with Fakespot and you waive your right to bring a class action lawsuit against them. this is awesome thank you so much mozilla very cool

https://queer.party/@m04/112872517189786676

So, Mozilla added an AI review feature for products you view in Firefox. Besides being fairly useless, its T&Cs are about as anti-consumer as they could possibly be. It's like Mozilla saying directly, "we don't care about your privacy".

[–] antihumanitarian@lemmy.world -5 points 3 months ago (2 children)

Cool it with the universal AI hate. There are many kinds of AI, detecting fake reviews is a totally reasonable and useful case.

[–] teolan@lemmy.world 21 points 3 months ago (3 children)

I have serious doubts about an AI's ability to reliably spot fakes.

[–] legion@lemmy.world 9 points 3 months ago

AI: "This is definitely a fake review because I wrote it."

[–] Xanis@lemmy.world 6 points 3 months ago (1 children)

There are bots on Reddit with far less complexity that can measure the likelihood of a story being reliable and truthful, using facts and fact-checkers. They're not always right, but they ARE useful. Or were. Not sure about now; it's been over a year since I left.

[–] unwarlikeExtortion@lemmy.ml 5 points 3 months ago

Would you mind pointing me in the direction of those AIs? The newfangled fact-check bot seems to just pull its data from a premade database, so no AI there, here on Lemmy.

[–] antihumanitarian@lemmy.world 1 points 3 months ago

If by reliably you mean 99% certainty about one particular review, yeah, I wouldn't believe it either. A 95% confidence interval on what proportion of a given page's reviews are bots, now that's plausible. If a human can tell whether a review was botted, you can certainly train a model to do so as well.
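The confidence-interval claim is easy to make concrete. Here's a minimal sketch of the idea, assuming a hypothetical classifier has already flagged some number of reviews on a page; the Wilson score interval then bounds the true proportion of botted reviews (the counts below are made up for illustration):

```python
import math

def wilson_interval(flagged: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for the true proportion of botted reviews,
    given `flagged` reviews marked as fake out of `total` on a page."""
    if total == 0:
        return (0.0, 1.0)  # no data: no information about the proportion
    p = flagged / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

# hypothetical: a classifier flags 60 of a product page's 200 reviews
lo, hi = wilson_interval(60, 200)
print(f"roughly {lo:.0%} to {hi:.0%} of reviews look botted")
```

The point is that even a noisy per-review classifier can give a usefully tight page-level estimate, because the per-review errors average out over a couple hundred reviews.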

[–] vrighter@discuss.tchncs.de 1 points 3 months ago (1 children)

But it does not work. This stuff never does.

[–] antihumanitarian@lemmy.world 2 points 3 months ago (1 children)

What do you mean by "this stuff?" Machine learning models are a fundamental part of spam prevention, have been for years. The concept is just flipping it around for use by the individual, not the platform.

[–] vrighter@discuss.tchncs.de 1 points 3 months ago

Bayesian filtering, yes. LLMs? No.
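The Bayesian filtering being contrasted with LLMs here is simple enough to sketch in a few lines: a toy naive Bayes word-count filter with Laplace smoothing, not any production system, and the training examples are invented:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns per-label word counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in docs:
        counts[label].update(text.lower().split())
    return counts

def spam_score(counts, text, alpha=1.0):
    """Log-odds that `text` is spam under naive Bayes with Laplace
    smoothing; a score > 0 means spam is the more likely label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    totals = {label: sum(c.values()) for label, c in counts.items()}
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + alpha) / (totals["spam"] + alpha * len(vocab))
        p_ham = (counts["ham"][word] + alpha) / (totals["ham"] + alpha * len(vocab))
        score += math.log(p_spam) - math.log(p_ham)
    return score

model = train([
    ("free money click now", "spam"),
    ("limited offer free prize", "spam"),
    ("meeting moved to monday", "ham"),
    ("lunch on monday sounds good", "ham"),
])
print(spam_score(model, "free prize now"))  # positive, i.e. classified spammy
```

This is the kind of model that has quietly run spam prevention for decades: cheap, interpretable, and trainable on a user's own mail, which is exactly the "flip it around for the individual" idea mentioned upthread.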