But I seriously do not want any agent knowing too much.
For example: “Onscreen awareness” — Giving Siri awareness of whatever is displayed on your screen.
Siri had better not be shy.
Or perhaps Siri ought to take a long walk on a short pier.
Same for Copilot, I suppose.
Life is a creative work. It's art, not algorithm. I take my camera with me to be surprised. I play piano to be part of the process that a composer felt.
If I hand off my email writing, photography, music and so on, I'm not learning. Great results but not mine. Just LIKE mine.
AI, please paint my house. It bores and taxes me. Life doesn't.
Sometimes it’s better to wait 6 months for something to be mature and stable. Apple has a long history of being conservative in this regard, and they often aren’t the first entrant into a market, but they still dominate because of very good integration with their ecosystem and a high quality bar. Apple Intelligence will probably be the same (bookmark this post for 3 years from now, when Gemini is being split into three businesses and rebranded / killed).
When they botched Apple Maps, one of the top guys lost his job very quickly. This is a much bigger scandal in my opinion: selling hardware based on software that doesn't exist. And the small parts of Apple Intelligence they have managed to ship are awful.
They should go back to not announcing this stuff in advance. The big features should remain secret until the OS is ready to launch. Sure it means developers won't be able to integrate with them for another 6-12 months but it's much better than where they find themselves now.
USB-C invented by Apple? AirPods selling at cost or at a loss? And countless other things.
I'd say less "fundamentally fan of ..." and more "deeply understands ..."
We talk about what happens when people hold themselves to unrealistic standards they can't live up to, and the self-destructive cycles that can follow; but what happens when the same thing happens to companies?
When a company defines itself internally as "we're #1, we'll always be #1, because we're the best, we can always get away with these margins"? If the company starts losing, falling behind, or getting criticized for not innovating, it doesn't know how to cope. It starts overcorrecting, overreacting, promising things early, overestimating its abilities, chasing trends... pride kills companies, just like people.
We're living through a moment in tech where this seems to be just beginning at Apple. Ironically, their old partner Intel got so high on that feeling during the 2010s that it was burned to the point of struggling to maintain its existence as a unified entity. Apple in the '90s, and Intel right now, should be a massive warning to Apple that nothing is infallible.
This isn't that. This aspect of Apple Intelligence has nothing visible. They didn't announce it ahead of time so that developers could start hooking into it; there's still nothing for developers to do here.
This was a bad decision, apparently done purely for its short-term marketing potential, and it does, indeed, make me concerned for what's going on inside Apple.
This is a particularly sharp example though.
Agreed, but it took me a while to realize this. He definitely gives them the benefit of the doubt, but I think that's okay.
But, most famously, he called Jobs's line that you didn't need apps on the phone a "shit sandwich."
I can easily see a large, powerful group inside Apple proclaiming AI is a dead end (either for Apple, or just in general).
I can also see a large, powerful group inside Apple proclaiming AI is the only future and they need to figure out how to integrate it into a company with their culture and brand perception with minimal damage to both.
I can see a large, powerful group inside Apple saying that nobody cares about either of those things, we have to do something big with AI because it's what the market expects.
It's interesting that this seems to be the first time Apple has faced a threat of unproven credibility that nevertheless carries enough momentum that they "must" respond without full information.
I agree with most of the points in TFA, I can't necessarily point to something in it that I think is wrong. But as with so many conversations about AI at the moment, it seems to leave a lot of people asking "what the hell happened?", often about things that have usually been considered table stakes - or at least reasonable expectations - in the past.