...except for the motion-activated lighting in our foyer and laundry room. $15, 15 minutes to install, no additional charges, no external services, no security issues, and just works year after year with 100% reliability.
I want to reach for my tools when I want to use them.
Of course this is going to be spun and turned into a negative, but I basically want ML to be invisible again. The benefits would be clear, but the underlying tech would no longer matter.
Remember the term "smart" as applied to any device or software mode that made ~any assumptions beyond "stay on while trigger is held"? "AI" is the new "smart." Even expert systems, decision trees, and full-text search are "AI" now.
also hard to find a better laptop for running an LLM locally
The expectation is that Apple will eventually launch a revolutionary new product, service or feature based around AI. This is the company that envisioned the Knowledge Navigator in the 80s after all. The story is simply that it hasn't happened yet. That doesn't make it a non-story, simply an obvious one.
Not really, I'm taking the hint. If they call a feature "AI", there's a 99% chance it's empty hype. If they call a feature "machine learning", there may be something useful in there.
Notice how Apple, even in this event, uses the term "machine learning" for some features (like some of their image processing stuff) and "AI" for other features. Their usage of the terms more or less matches the line I draw between features I want and features I don't want.
They’ve been putting AI in a lot of places over the years.
I feel like he’d be obsessively working to combine AI, robotics, and battery technology into the classic sci-fi android.
Instead, modern Apple seems to be innovating essentially nothing unless you count the VR thing and the rumors of an Apple car, which sounds to me much like the Apple Newton.
* Apps are already logged in, so no extra friction to grant access.
* Apps mostly use Apple-developed UI frameworks, so Apple could turn them into AI-readable representations, instead of raw pixels. In the same way a browser can give the AI the accessibility DOM, Apple could give AIs an easier representation to read and manipulate.
* iPhones already have specialized hardware for AI acceleration.
I want to be able to tell my phone to a) summarize my finances across all the apps I have b) give me a list of new articles of a certain topic from my magazine/news apps c) combine internet search with on-device files to generate personal reports.
All this is possible, but Apple doesn't care to do this. The path not taken is invisible, and no one will criticize them for squandering this opportunity. That's a more subtle drawback with only having two phone operating systems.
Edit: And add strong controls to limit what it can and cannot access, especially for the creepy stuff.
ChatGPT being the number one app is a weird way for people to express they don't trust AI: https://apps.apple.com/us/charts/iphone
Apps already have such an accessibility tree; it's used for VoiceOver and you can use it to write UI unit tests. (If you haven't tested your own app with VoiceOver, you should.)
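To make the idea concrete, here is a toy sketch (all types and identifiers here are hypothetical, not any actual Apple API) of what handing an AI a structured representation instead of raw pixels might look like: a tree built from the same role/label attributes the accessibility system already exposes, flattened into text an LLM could read and reference.

```swift
// Hypothetical illustration: a minimal accessibility-style node tree,
// serialized into indented text lines an AI could consume.
struct AXNode {
    let role: String        // e.g. "button", "textField"
    let label: String       // the accessibility label VoiceOver would read
    let children: [AXNode]

    // Flatten the tree into one line per element, indented by depth.
    func describe(indent: Int = 0) -> [String] {
        let line = String(repeating: "  ", count: indent) + "\(role): \(label)"
        return [line] + children.flatMap { $0.describe(indent: indent + 1) }
    }
}

let screen = AXNode(role: "window", label: "Login", children: [
    AXNode(role: "textField", label: "Email", children: []),
    AXNode(role: "button", label: "Sign In", children: []),
])

print(screen.describe().joined(separator: "\n"))
// window: Login
//   textField: Email
//   button: Sign In
```

The point is that this structure already exists in every app that supports VoiceOver; exposing it to an assistant would be a matter of plumbing, not new per-app work.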
This really is the problem. Why do I spend hundreds of dollars more for specialized hardware that’s better than last year’s specialized hardware if all the AI features are going to be an API call to ChatGPT? I am pretty sure I don’t need all of that hardware to watch YouTube videos or scroll Instagram/the web, which is what 95% of users do.
But that's not true of any other actor in the market. Everyone else, especially venture-backed companies trying to attract or retain investor interest, is still trying to find a justification for calling every single thing they're selling "AI".
(And it's also not even true of Apple themselves as recently as six months ago. They were approaching their marketing this way too, right up until their whole "AI" team crashed and burned.)
Apple-of-H2-2025 is literally the only company your heuristic will actually spit out any useful information for. For everything else, you'll just end up with 100% false positives.
A big issue to solve is battery life. Right now there's already a lot that goes on at night while the user sleeps with their phone plugged in. This helps to preserve battery life because you can run intensive tasks while hooked up to a power source.
If apps are doing a lot of AI stuff in the course of regular interaction, that could drain the battery fairly quickly.
Amazingly, I think the memory footprint of the phones will also need to get quite a bit larger to really support the big use cases and workflows. (I do feel somewhat crazy that it is already possible to purchase an iPhone with 1TB of storage and only 8GB of RAM.)
https://www.bhphotovideo.com/c/product/1868375-REG/sandisk_s... 2TB $185
https://www.bhphotovideo.com/c/product/1692704-REG/sandisk_s... 1TB $90
https://www.bhphotovideo.com/c/product/1712751-REG/sandisk_s... 512GB $40
Anyway, it's not the same thing: I'm fine with machine learning giving me better image search results, but I'm not fine with machine learning generating "art" or generating text. Everyone has collectively agreed to call the latter "AI" rather than machine learning, so the term is a useful distinction.
just using "AI" as term... they are so on the forefront that they sent your data to ChatGPT, otherwise you would be too ahead of the pack...
So was last year’s, technically, but that didn’t stop Apple from making it all about AI.
THAT would make me take an upgrade. Until then, I'm just keeping this phone until it goes out of support.
Instead, we got, what? An automated Memoji maker? Holy hell, they dropped the ball on this.