
255 points tbruckner | 8 comments
1. m3kw9 ◴[] No.37421727[source]
OpenAI's moat will soon largely be UX. Anyone can do plugins, code, etc., but once LLMs become commodified, the best UX for everyday users wins. Just look at standalone digital cameras vs. mobile phone cams from Apple.
replies(3): >>37422623 #>>37423170 #>>37424079 #
2. smoldesu ◴[] No.37422623[source]
> the best UX for everyday users wins

Is that not why OpenAI is ahead right now? For free, you can have access to powerful AI on anything with a web browser. You don't need to wait for your SSD to load the model, page it into memory and swap your preexisting processes like it would on a local machine. You don't need to worry about the local battery drain, heat, memory constraints or hardware limitations. If you can read Hacker News, you can use AI.

Given the current performance of local models, I bet OpenAI is feeling pretty comfortable where they're standing. Most people don't have mobile devices with enough RAM to load a 13B Llama model at 4-bit quantization. Running a 180B model (much less a GPT-4-scale model) on consumer hardware is financially infeasible. Running it at scale in the cloud costs pennies on the dollar.
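A back-of-envelope calculation supports the RAM point: just holding the quantized weights (ignoring KV cache, activations, and runtime overhead, which add more) already strains consumer devices. A minimal sketch:

```python
# Rough RAM needed just to hold quantized model weights,
# ignoring KV cache, activations, and runtime overhead.
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # convert bytes to GiB

llama_13b = weight_gib(13, 4)   # 13B model at 4-bit: ~6 GiB of weights alone
big_180b = weight_gib(180, 4)   # 180B model at 4-bit: ~84 GiB of weights alone

print(f"13B @ 4-bit:  {llama_13b:.1f} GiB")
print(f"180B @ 4-bit: {big_180b:.1f} GiB")
```

Even before overhead, ~6 GiB for weights alone is near the total RAM of many phones, and ~84 GiB is beyond any consumer GPU.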

I'm not fond of OpenAI in the slightest, but if you've followed the state of local models recently it's clear why they keep coming out ahead.

replies(1): >>37422761 #
3. anurag6892 ◴[] No.37422761[source]
This advantage isn't specific to OpenAI, right? Any big cloud provider like Amazon or Google can host these open LLMs.
replies(2): >>37422795 #>>37424905 #
4. smoldesu ◴[] No.37422795{3}[source]
It's not exclusive, no. At OpenAI's scale though, they can afford to purchase their own hardware like a big cloud provider can. It's likely that OpenAI was running Nvidia's DGX servers in production before AWS and GCP even got their unit quotes.
5. ZoomerCretin ◴[] No.37423170[source]
GPT-4 is still leagues ahead of the competition. Open-source LLMs will be used more widely, but for the most demanding tasks, there is no alternative to GPT-4.
replies(1): >>37506701 #
6. xpe ◴[] No.37424079[source]
I buy this general argument, at least to the extent that 'good enough' LLMs get commodified.

What are some of the key aspects of scenarios where this commodification happens? Where doesn't it?

Speaking descriptively (not normatively), I see a lot of possibilities about how things will unfold hinging on (a) licensing, (b) desire for recent data, (c) desire for private data, (d) regulation.

7. nerbert ◴[] No.37424905{3}[source]
OpenAI's got the first mover advantage. It's everything if you don't fuck up.
8. eurekin ◴[] No.37506701[source]
Anecdata confirmation: I've been toying around with LLMs for simple fun stuff, but when it comes to real work, GPT-4 delivers in spades.

It has cut many hours off my debugging. I could find issues easily, in a short conversation while on call, when previously that was reserved for a post-mortem task.

Even reading documentation is nothing like before: once, I was looking for a single command to upload and presign an object in S3. The SDK has tens of methods, each of which needs careful scanning to see whether it does what I want; going through the documentation thoroughly would have taken me hours. GPT-4 immediately found that no, there's no single operation for that.