Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content.”
Cheers Google, but I'm a capable adult and able to do this myself.
Per one tech forum this week:
Stop spreading misinformation.
To quote the most salient post:
The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.
Which is a sorely needed feature to tackle problems like SMS scams.
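For the SMS case, what this enables looks roughly like the sketch below, using an on-device TensorFlow Lite text classifier. The model file, its "scam" label, and the threshold are all made-up placeholders, not anything Google ships; the point is that the message is scored locally and nothing leaves the phone.

```kotlin
import android.content.Context
import org.tensorflow.lite.task.text.nlclassifier.NLClassifier

// Sketch: score an incoming SMS against a bundled on-device model.
// "scam_classifier.tflite" and its label set are placeholders, not a
// real Google artifact; classification happens entirely on the phone.
fun isLikelyScam(context: Context, smsBody: String): Boolean {
    val classifier = NLClassifier.createFromFile(context, "scam_classifier.tflite")
    val scores = classifier.classify(smsBody)            // List<Category>
    val scam = scores.firstOrNull { it.label == "scam" } // label defined by the model
    return (scam?.score ?: 0f) > 0.8f                    // threshold is arbitrary
}
```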
If the app did what OP is claiming, the EU would have a field day fining Google.
Thanks for this, just uninstalled it. Google are arseholes.
People don't seem to understand the risks presented by normalizing client-side scanning on closed-source devices. Think about how image recognition works. It scans image content locally and matches it to keywords or tags, describing the people, objects, emotions, and other characteristics. Even the rudimentary open-source model in an Immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.
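To make that concrete: any Android app can already label images on-device in a few lines with Google's ML Kit image labeling. A sketch using ML Kit's stock classifier (this is ML Kit, not SafetyCore; error handling trimmed):

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Labels an image entirely on-device with ML Kit's bundled model.
// No network call is needed for the labeling itself.
fun labelImage(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            // e.g. "Dog" (0.97), "Beach" (0.84): instantly searchable metadata
            labels.forEach { println("${it.text} (${"%.2f".format(it.confidence)})") }
        }
        .addOnFailureListener { it.printStackTrace() }
}
```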
So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.
And just because someone does a traffic analysis of the process itself (SafetyCore or mediaanalysisd or whatever) and shows it doesn't directly phone home, that doesn't mean it is safe. The entire OS is closed source, and it only needs to backchannel small amounts of data in order to fuck you over.
Remember, the original justification for client-side scanning from Apple was "detecting CSAM". Well, they backed away from that line of thinking, but they kept all the client-side scanning in iOS and macOS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.
I didn't see it anywhere on my phone, but I'll look into it more after work. Thanks for the heads up.
There's a SafetyCore Placeholder app, so if it ever tries to reinstall itself it will fail due to a signature mismatch.
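If you want to check which build is actually installed, one rough way (a sketch, needs API 28+; package id taken from the Play listing below) is to print the signing-cert digest, since Google's build and the placeholder are signed with different keys:

```kotlin
import android.content.pm.PackageManager
import java.security.MessageDigest

// Sketch (API 28+): SHA-256 of the installed package's signing cert,
// so you can tell Google's SafetyCore apart from a placeholder build.
fun signingDigest(pm: PackageManager, pkg: String = "com.google.android.safetycore"): String? {
    return try {
        val info = pm.getPackageInfo(pkg, PackageManager.GET_SIGNING_CERTIFICATES)
        val cert = info.signingInfo?.apkContentsSigners?.firstOrNull() ?: return null
        MessageDigest.getInstance("SHA-256")
            .digest(cert.toByteArray())
            .joinToString(":") { "%02X".format(it) }
    } catch (e: PackageManager.NameNotFoundException) {
        null // not installed at all
    }
}
```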
I didn't have it in my app drawer, but once I went to this link, it showed as installed. I uninstalled it ASAP.
https://play.google.com/store/apps/details?id=com.google.android.safetycore&hl=en-US
I also reported it as hostile and inappropriate. I'm sure Google will do fuck all with that report, but I enjoy being petty sometimes.
Gimme a Linux phone, I'm ready for it.
The Firefox Phone should've been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.
The app can be found here: https://play.google.com/store/apps/details?id=com.google.android.safetycore
The app reviews are a good read.
Thanks for the link. This is impressive because it really has all the traits of spyware; apparently it installs without asking for permission?
Yup, heard about it a week or two ago. Found it installed on my Samsung phone; it never asked for permission or gave any indication that it was added to my phone.
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.
For people who have not read the article:
Forbes states that there is no indication that this app can or will "phone home".
Its stated use is for other apps to scan an image they have access to and find out what kind of thing it is (known as "classification"). For example, to find out if the picture you've been sent is a dick pic so the app can blur it.
My understanding is that, if this is implemented correctly (a big 'if') this can be completely safe.
Apps requesting classification could be limited to only classifying files that they already have access to. Remember that Android nowadays has a concept of "scoped storage" that lets you restrict folder access. If this is the case, it's no less safe than not having SafetyCore at all. It just saves you space, as companies like Signal, WhatsApp, etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use.
It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don't know enough to say.
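As a sketch of the "implemented correctly" case, with entirely hypothetical names (SafetyCore's real API surface isn't public): the safe design is that the caller hands over pixels it could already read, and gets back nothing but a label.

```kotlin
import android.graphics.Bitmap

// Hypothetical interface, not SafetyCore's actual API. The caller must
// decode the image itself, so the classifier never gains broader file
// access than the requesting app already has.
interface ContentClassifier {
    enum class Label { SAFE, NUDITY, SCAM, MALWARE }
    suspend fun classify(image: Bitmap): Label
}

// e.g. a messaging app deciding whether to blur an incoming attachment,
// entirely on-device:
suspend fun shouldBlur(classifier: ContentClassifier, attachment: Bitmap): Boolean =
    classifier.classify(attachment) == ContentClassifier.Label.NUDITY
```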
Besides, you think that Google isn't already scanning for things like CSAM? It's been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I've not seen anything about it being done on devices yet (correct me if I'm wrong).
Forbes states that there is no indication that this app can or will "phone home".
That doesn't mean that it doesn't. If it were open source, we could verify it. As is, it should not be trusted.
I switched over to GrapheneOS a couple of months ago and couldn't be happier. If you have a Pixel, the switch is really easy. The biggest obstacle was exporting my contacts from my Google account.
laughs in GrapheneOS
Thanks for bringing this up, first I've heard of it. Not present on my GrapheneOS Pixel, present on stock.
I suppose I should encourage Pixel owners to switch from stock to Graphene; I know which device I'd rather spend time using. The GrapheneOS one, of course.
I just uninstalled it.
Anyone know what Android System Intelligence does? Should that be uninstalled as well?
Thanks. Just uninstalled. What cunts.
I uninstalled it, and a couple of days later, it reappeared on my phone.
Do we have any proof of it doing anything bad?
Taking Google's description of what it is, it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.
Google is always 100% lying.
There are too many instances to list and I'm not spending 5 hours collecting examples for you.
They removed "don't be evil" a long time ago.
See, this is why I like proof. If you go to Google's Code of Conduct today, or any archived version, you can see for yourself that it was never removed. Yet everyone believed the clickbait articles claiming so. What happened is they moved it from the header to the footer; clickbait media reported that as "removed", and everyone ran with it, even though anyone can easily see it's not true. It takes 30 seconds to verify, not 5 hours.
Years later you are still repeating something that was made up, just because you heard it a lot.
Of course Google is absolutely evil, and the phrase was always meaningless whether it's there or not, but we can't just make up facts because they fit our worldview. And we have to be aware of confirmation bias. Google removing "don't be evil" sounds about right for them, right? It makes perfect sense. But it just plain didn't happen.
More information: It's been rolling out to Android 9+ users since November 2024 as a high-priority update. Some users report that it installs while on battery and off Wi-Fi, unlike most apps.
App description on the Play Store: SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user.
Description by Google: "Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares." - https://9to5google.com/android-safetycore-app-what-is-it/
So it looks like something that sends pictures from your messages (at least initially) to Google for an AI to check whether they're "sensitive". The app is 44 MB, so too small to contain a useful AI, and I don't think this could happen on-phone, so it must require sending your on-phone data to Google?