
292 points kaboro | 4 comments
klelatti No.25058716
> it is possible that Apple’s chip team is so far ahead of the competition, not just in 2020, but particularly as it develops even more powerful versions of Apple Silicon, that the commoditization of software inherent in web apps will work to Apple’s favor, just as its move to Intel commoditized hardware, highlighting Apple’s then-software advantage in the 00s.

I think Ben is missing something here: the speed and specialist hardware (e.g. the neural engine) on the new SoCs again give developers of native apps the ability to differentiate themselves (and the Mac) by offering apps that the competition (both web apps and PCs) can't match. It's not just about running web apps more quickly.

verisimilidude No.25061149
It's a nice idea in theory, but I don't see Apple putting in the effort to make this fruitful.

For example, we just saw an article rise to the top of HN in the last couple of days about the pathetic state of Apple's developer documentation. Their focus seems to be less on providing integrations with their hardware and more on providing integrations with their services. Meanwhile, developers increasingly distrust Apple because of bad policies and press around App Store review. It's a mess.

I agree that Apple could and should help app developers use this cool new hardware. I'm sure there are good people at Apple who're trying. But the company as a whole seems to be chasing other squirrels.

1. jonas21 No.25062633
There are some areas where Apple is prioritizing getting developers on board with their hardware, and the neural engine seems like one of them.

Over the past couple of years, coremltools [1], which is used to convert models from TensorFlow and other frameworks to run on Apple hardware (including the neural engine when available), has gone from a total joke to being quite good.

I had to get a Keras model running on iOS a few months ago, and I was expecting to spend days tracking down obscure errors and writing lots of custom code to get the conversion to work -- but instead it was literally 3 lines of code, and it worked on the first try.

[1] https://github.com/apple/coremltools
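For a rough idea of what that conversion looks like, here's a minimal sketch (assuming a trained tf.keras model and coremltools 4+; the model and file names are made up, not the actual project code):

    import coremltools as ct
    import tensorflow as tf

    # Stand-in for an already-trained Keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(32,)),
    ])

    # The unified convert() API (coremltools 4+) accepts tf.keras models
    # directly; the saved .mlmodel can be dropped into an Xcode project, and
    # Core ML will run it on the neural engine on devices that have one.
    mlmodel = ct.convert(model)
    mlmodel.save("MyModel.mlmodel")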

2. 411111111111111 No.25063455
You're earning money with a model deployed on an iOS device? Now that's an achievement. It's rare enough to get a model into production in the first place, but then doubling down on less powerful hardware than you could get with AWS is just mind-blowing to me in a production context.
3. dclusin No.25063769
It's the age-old thin client vs. fat client debate repeating itself. It seems like as the chips and tools mature, we'll see more and more model deployments on customer hardware. Transmitting gigabytes of sensor/input data to a nearby data center for real-time results just isn't feasible for most applications.

There's probably lots of novel applications of AI/ML that remain to be built because of this limitation. Probably also good fodder for backing your way into a startup idea as a technologist.

4. shrimpx No.25068652
Suppose you want to do object detection on a phone’s live camera stream. Running your model on AWS is probably infeasible, because you’re killing the user’s data plan by streaming frames to your remote model, and network latency kills the user experience.

On-device detection (“edge AI”) is gaining steam. Apple recently purchased a company called xnor.ai, which specialized in optimizing models for low-power conditions.
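As a rough illustration of that kind of edge optimization, here's a minimal sketch using coremltools' weight quantization; this is generic model shrinking, not xnor.ai's binarized-network approach specifically, and the file names are hypothetical:

    import coremltools as ct
    from coremltools.models.neural_network import quantization_utils

    # Load a previously converted Core ML model (path is hypothetical).
    mlmodel = ct.models.MLModel("MyModel.mlmodel")

    # Quantize float32 weights down to 8 bits: a common, generic way to cut
    # model size and memory traffic for on-device inference, usually at a
    # small accuracy cost.
    quantized = quantization_utils.quantize_weights(mlmodel, nbits=8)
    quantized.save("MyModel_quant8.mlmodel")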