ITT people who don't understand that generative ML models for imagery take up TB of active memory and TFLOPs of compute to process.
That's wrong. You can do it on your home PC with stable diffusion.
And a lot of those require models that are multiple gigabytes in size, which then need to be loaded into memory and processed on a high-end video card that would generate enough heat to ruin your phone's battery, if you could somehow shrink it to fit inside a phone. This just isn't feasible on phones yet. Is it technically possible today? Yes, absolutely. Are the tradeoffs worth it? Not for the average person.
You can, for example, run some upscaling models on your phone just fine (I mentioned the SuperImage app in the photography tips megathread). Yes, the most powerful and memory-hungry models need more RAM than a phone can offer, but it's misleading if Google doesn't say that those are being run in the cloud.
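To put rough numbers on "multiple gigabytes": a Stable-Diffusion-class model has on the order of a billion parameters, and the weights alone scale with numeric precision. A minimal back-of-the-envelope sketch (the ~1B parameter count is an assumption, roughly SD 1.x's UNet + text encoder + VAE combined, not an exact figure for any release):

```python
# Back-of-the-envelope weight memory for a ~1B-parameter image model.
# The parameter count is an assumption, not an exact figure.
params = 1_000_000_000

bytes_fp32 = params * 4   # 4 bytes per parameter at full precision
bytes_fp16 = params * 2   # 2 bytes per parameter at half precision

print(bytes_fp32 / 1e9)   # 4.0 GB just for the weights
print(bytes_fp16 / 1e9)   # 2.0 GB, before activations and intermediate buffers
```

So a few gigabytes, not terabytes: too much for comfort on most phones once you add activations, but well within reach of a desktop GPU.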
So much for the brilliant AI-specialized Tensor processor
It's basically just a mediocre processor that offloads interesting things to the mothership.
Yep. What a joke. Goes to show that Google could make these Pixel features available to all Android devices if they wanted to.
This really doesn't surprise me.
Yeah, obviously. The storage and compute required to actually run these AI generative models is absolutely massive; how would that fit in a phone?
They ought to just slap a phone in a 3090
Fuuuuck that.
Why ?
Because it's a privacy nightmare.
Using Google products has always been a "privacy nightmare" - it's not like this is some mega open source phone or anything it's literally Google's flagship. Is this really surprising? Playing with fire gets you burned.
Google's phones are, surprisingly, the easiest to de-Google.
Even ignoring all the privacy issues with that, it's kinda shit to unnecessarily lose phone features when you've got no signal
Isn't that kinda the dream? We get devices that remote into the OS: a super powerful server that keeps getting updated and upgraded, and we just need a receiver.
Isn't that what we want? It cuts down the bulk on devices: just a slab with a battery, screen, modem, and an SoC that can power the remote application.
Sometimes that's what people dream about. On the other hand that hybrid cloud model is giving up the last remnants of computing autonomy and control over devices we own.
What I would like is something that gives me the framework to host my own server-side computations at home.
That would be great, if Google weren't constantly killing things that didn't do well enough, especially given how expensive generative AI can be to run remotely. Just look at what happened with Stadia.
Also, it just feels disappointing. Ever since ChatGPT, they've been pouring a near-infinite budget into stuff like this, hiring top talent and working them into the late hours of the night. And the best they could come up with for the Pixel 8 is feeding it data from the cloud.
And I can't really believe the whole "consumer hardware isn't powerful enough" thing, given that quite a few ARM processors, Apple's especially, have shown consumer hardware capable of running generative AI (I've personally been able to run Stable Diffusion and Whisper on my M1 MacBook). Maybe not at the scale or quality of the cloud, but capable of it regardless.
I mean it sucks for offline situations or for actions that need very good latency.
But the phone's battery would be happier if the processing is not done locally.
For some things I prefer to do the stuff locally, for other things on the cloud.
Some cloud vs. local examples I'm thinking about:

- For example (not related to the Pixel): if I'm generating an image with Stable Diffusion at home, I prefer to use my RX 6800 on my private local Linux box rather than a cloud computer with a subscription. But if I had to do the same on a mobile phone, with tiny processing power and battery capacity, I'd prefer to do it in the cloud.
- (Another, non-AI example): for gaming, I prefer to run things natively, at least until it's possible to stream a game without added latency. Obviously I don't play games on a phone.
- Another example (Google Photos): I would prefer to connect Google Photos to a local NAS server at home, "own the storage on premise", and then pay a lower fee to Google Photos for the extra services it brings: browsing, face and object recognition, etc. But this is not an option. With this I would be able to invest in my own storage, even if I had dozens of gigabytes in 4K60FPS videos.
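The storage argument above is easy to sanity-check. A quick sketch of how fast 4K60 phone video eats space (the 100 Mbps bitrate is an assumption, a rough ballpark for 4K60 HEVC recording on recent phones):

```python
# Rough storage estimate for 4K60 phone video. All numbers are assumptions.
bitrate_mbps = 100                                     # assumed recording bitrate
mb_per_minute = bitrate_mbps * 60 / 8                  # megabits/s -> megabytes/min
gb_per_hour = mb_per_minute * 60 / 1000                # minutes -> gigabytes/hour

print(mb_per_minute)  # 750.0 MB per minute
print(gb_per_hour)    # 45.0 GB per hour
```

At that rate, even a modest video library outgrows the free cloud tiers quickly, which is why paying only for the services while keeping the bytes on your own NAS is appealing.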
This just strengthens the argument to install privacy/security-first operating systems like CalyxOS and GrapheneOS. I don't want a phone that's more a service subscription than it is hardware. I have the Pixel 8 and didn't get the Pro due to the offloading to Google servers for some "features".
Just waiting for GrapheneOS to be released for the 8... Until then, I'm sitting uncomfortably, knowing my phone is uploading telemetry to Google servers...
I've bitten the bullet and replaced my Redmi 13 with a Pixel 7 + GrapheneOS, because the MIUI spyware and Google spyware are just too much... I still don't get why, as an owner of a phone, I'm not the owner of the phone.
Currently using GrapheneOS on my 8 Pro, and the experience has been extremely smooth so far.
Yeah, I was tempted for a second to get it, but there was a comment that it "should" be possible to upgrade to the released version when available, and I don't want to have to mess around in the event that it's not. So I'm just going to wait a week or so.
And?