
467 points bundie | 3 comments
ryanrasti ◴[] No.44501761[source]
> With Gemini Apps Activity turned off, their Gemini chats are not being reviewed or used to improve our AI models.

Indeed bizarre, as the statement says nothing about data collection or retention.

More generally, I'm conflicted here -- I'm big on personal privacy, but the power and convenience that AI brings will probably be too great to resist. I'm hoping that powerful, locally-run AI models become a mainstream alternative.

replies(4): >>44501830 #>>44501876 #>>44501881 #>>44501970 #
1. stingraycharles ◴[] No.44501881[source]
Apple was, at some point, trying to do everything locally, but it appears they have recently decided to move away from that idea and use OpenAI instead.

I can understand why: you only use locally-run AI models every so often (maybe a few times a day), but when you do use them, you still want them to be fast.

So you'd need a pretty heavy AI chip in your phone to deliver that, and it would spend most of its time idling.

Since compute costs for AI are insane, it only makes sense to optimize for this and do the inference in the cloud.

Maybe at some point local AI will be viable, but you'll always be able to run much more powerful models in the cloud, because it makes much more sense from an economics point of view.
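The economics argument above can be made concrete with a back-of-envelope calculation. All the numbers below (queries per day, seconds per query, cloud utilization) are illustrative assumptions, not measured figures:

```python
# Back-of-envelope utilization math behind the local-vs-cloud argument.
# Every number here is an assumption for illustration, not a measurement.

SECONDS_PER_DAY = 24 * 3600

# Assume a phone user runs ~10 queries a day, each using ~3 s of NPU time.
phone_busy_s = 10 * 3
phone_utilization = phone_busy_s / SECONDS_PER_DAY  # one user, dedicated chip

# Assume a time-shared cloud accelerator is kept ~60% busy across many users.
cloud_utilization = 0.60

print(f"on-device NPU utilization: {phone_utilization:.4%}")
print(f"cloud GPU utilization:     {cloud_utilization:.0%}")
print(f"cloud hardware is ~{cloud_utilization / phone_utilization:,.0f}x better utilized")
```

Under these assumptions the dedicated on-device chip sits at roughly 0.03% utilization while the shared cloud accelerator is over a thousand times better utilized, which is the "idling chip" point in economic terms.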

replies(2): >>44501962 #>>44502328 #
2. jpalawaga ◴[] No.44501962[source]
Google also has AI models optimized to run on phones, and they're in a much better position to actually build purpose-built LLMs for phones.

It's not clear to me why certain classes of requests still end up farmed out to the cloud (such as this one -- or is it?). Maybe their LLM hasn't been built in a very pluggable fashion.
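One plausible reason requests get farmed out is a capability-based dispatcher: the on-device model has hard limits on context size and supported tasks, and anything outside them falls back to the cloud. This is a hypothetical sketch -- the function, task names, and limits are all made up for illustration:

```python
# Hypothetical local-vs-cloud dispatcher. The on-device model handles only a
# fixed set of tasks within a fixed context budget; everything else goes to
# the cloud. Task names and the token limit are illustrative assumptions.

LOCAL_MAX_CONTEXT_TOKENS = 4096
LOCAL_SUPPORTED_TASKS = {"summarize", "reply_suggest", "classify"}

def route(task: str, prompt_tokens: int) -> str:
    """Return 'local' if the on-device model can handle the request, else 'cloud'."""
    if task in LOCAL_SUPPORTED_TASKS and prompt_tokens <= LOCAL_MAX_CONTEXT_TOKENS:
        return "local"
    return "cloud"

print(route("summarize", 1200))   # fits the on-device model -> local
print(route("summarize", 20000))  # context too long -> cloud
print(route("code_gen", 500))     # unsupported task -> cloud
```

If the cloud fallback is baked into individual features rather than a single router like this, that would explain the "not very pluggable" impression.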

3. Hizonner ◴[] No.44502328[source]
> they have recently decided to move away from that idea and use OpenAI.

... although, to be fair, they're negotiating with OpenAI to run the models in "secure enclaves", which should -- assuming everything works right, which is a huge assumption -- keep Apple or anybody else from reaching inside and seeing what the model is "thinking about".
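The trust model behind "secure enclaves" boils down to remote attestation: before sending any data, the client checks a signed statement that the server is running known, unmodified code. This is a toy sketch of that check only -- the key, measurement, and HMAC signature stand in for a real attestation PKI, and a real deployment would verify a hardware vendor's certificate chain instead:

```python
# Toy sketch of the attestation check underlying enclave-based inference.
# The vendor key and expected measurement are illustrative stand-ins; real
# attestation uses hardware-rooted certificates, not a shared HMAC key.
import hashlib
import hmac

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image-v1").hexdigest()
VENDOR_KEY = b"vendor-signing-key"  # stand-in for the attestation authority

def sign(measurement: str) -> str:
    """Stand-in for the hardware vendor signing an enclave's measurement."""
    return hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()

def verify_and_send(prompt: str, measurement: str, signature: str) -> bool:
    """Release the prompt only if the attested measurement checks out."""
    genuine = hmac.compare_digest(sign(measurement), signature)  # signed by vendor?
    trusted = measurement == EXPECTED_MEASUREMENT                # approved code?
    return genuine and trusted  # in reality: then open an encrypted channel

good = verify_and_send("hi", EXPECTED_MEASUREMENT, sign(EXPECTED_MEASUREMENT))
bad = verify_and_send("hi", "tampered-image", sign("tampered-image"))
print(good, bad)  # True False
```

The "huge assumption" in the parent comment is exactly that this chain holds: that the measurement honestly reflects the running code and the signing hardware hasn't been compromised.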