So if there’s a big event like a Taylor Swift concert you couldn’t go to and they detect you’re most likely a fan, they’ll gen AI a photo of you at the concert
If you missed a Christmas gathering with your family, they’ll gen AI one for you
If you like Japan but haven’t been there yet, they’ll gen AI a whole vacation album of you there
They’re literally trying to embed fabricated events into your life and brain lmao
People are largely very excited for this idea. Apps from different companies constantly copy each other to reach feature parity, so there’s no escape. You will live a life of luxury and excitement whether you want to or not
What a twisted “feature”. Y’know, they could be working on device-local models that categorise your photos, making searching for something much easier, but I guess that’s too useful to do.
I don’t think any phones are close to powerful enough to do that yet, are they? Isn’t computer vision really intensive or something
Yeah, but surely it can’t be much more power-hungry than what your post proposes. Unless they’re planning on processing this on cloud compute at the business’s expense, which just sounds like a way to burn millions for no good reason.
Also, seemingly every single device is adding tensor cores these days, so maybe in the future it could actually work locally. One can dream about AI producing at least 1 useful feature
Android dominates the mobile market, and the huge majority of Androids are lower-end devices. The world isn’t just the West
And no big tech’s photo storage app is profitable by itself. It’s a loss-leader, package-deal type of thing for all these companies. They’re already spending millions to run hundreds of models on every single photo and even video
iOS actually does this, I can search for generic terms like “cat” or “panda” to find exclusively images/videos of those animals, but I can also search for a specific person (if their face is visible in the picture I want to search for), and they’re categorized like that too
I’m curious: iPhones might be powerful enough to run the models locally by now, but do you know if that’s actually what happens, or does Apple run the models on their servers and send you back the searchable tags to attach to each photo’s metadata?
there are also third-party apps that use AI classification, and afaik it’s a local feature exposed by an iOS API
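fwiw, this is roughly what that looks like, a minimal sketch using the Vision framework’s built-in on-device classifier (the “photo.jpg” path is just a placeholder, and the 0.5 confidence cutoff is an arbitrary choice):

```swift
import Foundation
import CoreImage
import Vision

// Load an image from disk ("photo.jpg" is a hypothetical path).
guard let image = CIImage(contentsOf: URL(fileURLWithPath: "photo.jpg")) else {
    fatalError("could not load image")
}

// VNClassifyImageRequest runs Apple's built-in image classifier
// entirely on-device; no network access is involved.
let request = VNClassifyImageRequest()
let handler = VNImageRequestHandler(ciImage: image, options: [:])

do {
    try handler.perform([request])
    // Each observation is a label ("cat", "panda", ...) with a confidence
    // score; a Photos-style search could be built by storing the
    // high-confidence labels as tags next to each photo.
    for observation in request.results ?? [] where observation.confidence > 0.5 {
        print(observation.identifier, observation.confidence)
    }
} catch {
    print("classification failed:", error)
}
```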
oh shit, I’ve never used iOS but this looks cool af https://developer.apple.com/machine-learning/