Is that not why OpenAI is ahead right now? For free, you get access to powerful AI on anything with a web browser. You don't need to wait for your SSD to load the model, page it into memory, and swap out your preexisting processes, as you would on a local machine. You don't need to worry about battery drain, heat, memory constraints, or hardware limitations. If you can read Hacker News, you can use AI.
Given the current performance of local models, I bet OpenAI is feeling pretty comfortable where they're standing. Most people don't have mobile devices with enough RAM to load a 13B, 4-bit Llama quantization. Running a 180B model (much less a GPT-4-scale model) on consumer hardware is financially infeasible; running it at scale in the cloud is pennies on the dollar.
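A quick back-of-envelope sketch of the RAM math, in Python. The 1.2x overhead factor is my own rough assumption for KV cache and runtime buffers; real overhead depends on context length and runtime.

```python
# Back-of-envelope RAM estimate for running a quantized LLM locally.
# Figures are illustrative assumptions, not measured values.

def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead_factor: float = 1.2) -> float:
    """Approximate resident memory for the model weights.

    overhead_factor loosely covers KV cache, activations, and
    runtime buffers; actual overhead varies by context length.
    """
    bytes_per_weight = bits_per_weight / 8
    weight_gb = params_billion * 1e9 * bytes_per_weight / 1e9
    return weight_gb * overhead_factor

print(f"13B @ 4-bit:  ~{model_ram_gb(13, 4):.1f} GB")   # ~7.8 GB
print(f"180B @ 4-bit: ~{model_ram_gb(180, 4):.1f} GB")  # ~108 GB
```

Even the 13B case is at or past the total RAM of most phones, before the OS and other apps take their share.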
I'm not fond of OpenAI in the slightest, but if you've followed the state of local models recently, it's clear why they keep coming out ahead.
What are some of the key aspects of the scenarios where this commodification happens? And where it doesn't?
Speaking descriptively (not normatively), I see a lot of how things unfold hinging on (a) licensing, (b) the desire for recent data, (c) the desire for private data, and (d) regulation.
It has cut many hours off my debugging. I can find issues easily while on-call, in a short conversation, when that kind of investigation was previously reserved for the post-mortem.
Even reading documentation is nothing like before. Once, I was looking for a single command to upload and presign an object in S3. The SDK has dozens of methods, each of which requires careful scanning to see whether it does what I want; going through the documentation thoroughly would've taken me hours. GPT-4 immediately found that, no, there's no single operation for that.
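For the curious, a minimal boto3 sketch of the point: uploading and presigning are two separate calls, with no combined operation. The bucket, key, and file names here are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Step 1: upload the object (one call)...
s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")

# Step 2: ...then generate a presigned GET URL (a separate call).
# There is no single SDK operation that does both at once.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/report.pdf"},
    ExpiresIn=3600,  # URL valid for one hour
)
print(url)
```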