396 points by doener | 11 comments
1. nine_k No.46175098
It's amazing how much knowledge about the world fits into 16 GiB of the distilled model.
replies: >>46175113, >>46175237
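For scale, a rough back-of-the-envelope on what 16 GiB can hold (a sketch only; the quantization levels are illustrative assumptions, and a real deployment also needs memory for the KV cache and runtime overhead):

    # How many weights fit in a 16 GiB budget at common quantization
    # levels. Ignores KV cache, activations, and runtime overhead,
    # so real capacity is somewhat lower.
    GIB = 2**30
    budget_bits = 16 * GIB * 8

    for bits_per_weight in (16, 8, 4):
        params = budget_bits / bits_per_weight
        print(f"{bits_per_weight}-bit weights: ~{params / 1e9:.0f}B parameters")

    # 16-bit: ~9B, 8-bit: ~17B, 4-bit: ~34B -- a 4-bit quantized model
    # in the 30B-parameter class can plausibly sit in 16 GiB.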
2. echelon No.46175113
It's early days, too. We're probably going to get better at this across more domains.

Local AI will eventually be booming. It'll be more configurable, adaptable, hackable. "Free". And private.

Crude APIs can only get you so far.

I'm in favor of intelligent models like Nano Banana over ComfyUI messes (the future is the model, not the node graph).

I still think we need the ability to inject control layers and have full access to the model, because we lose too much utility by not having it.
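A minimal sketch of what that kind of control injection can look like when you have full access to open weights, using the diffusers library with a Canny ControlNet (the model IDs and file names here are illustrative assumptions, not anything from this thread):

    # With open weights, a control layer (here a Canny-edge ControlNet)
    # can be spliced into the generation pipeline -- the kind of hook a
    # closed API endpoint rarely exposes.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # A precomputed edge map constrains composition while the prompt
    # controls content; "edges.png" is a placeholder input.
    edges = load_image("edges.png")
    image = pipe("a red brick house at dusk", image=edges).images[0]
    image.save("out.png")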

I think we'll eventually get Nano Banana Pro smarts slimmed down and running on a local machine.

replies: >>46175649, >>46176111
3. [deleted] No.46175237
4. bobsmooth No.46175649
>Local AI will eventually be booming.

With how expensive RAM currently is, I doubt it.

replies: >>46178300, >>46178600, >>46182287
5. echelon No.46177236
Is this a joke?

Image and video models are some of the most useful tools of the last few decades.

replies: >>46179453
6. api No.46178300
I’m old enough to remember many memory price spikes.
replies: >>46179231, >>46181303
7. echelon No.46178600
It's temporary. Sam Altman booked all the supply for a year. Give it time to unwind.
8. SV_BubbleTime No.46179231
I remember saving up for my first 128 MB stick, and the next week it was like triple the price.
9. vachina No.46179453
Is this a joke?
10. lomase No.46181303
Do you also remember when everybody was waiting for crypto to cool off to buy a GPU?
11. gpm No.46182287
That's a short-term effect. Long term, Wright's law will kick in and RAM will end up cheaper as a result of all the demand. It's not like we're running into a fundamental bottleneck on how much RAM we could produce, just a limit on how much we're currently set up to produce.
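For reference, Wright's law says unit cost falls by a fixed fraction (the "learning rate") with each doubling of cumulative production. A toy sketch with illustrative numbers, not real DRAM figures:

    import math

    def wright_cost(initial_cost, initial_units, cumulative_units,
                    learning_rate=0.20):
        """Unit cost after cumulative output grows from initial_units to
        cumulative_units, falling by learning_rate per doubling."""
        doublings = math.log2(cumulative_units / initial_units)
        return initial_cost * (1 - learning_rate) ** doublings

    # Two doublings (4x cumulative output) at a 20% learning rate:
    # 10.0 * 0.8**2 ~= 6.4, i.e. unit cost drops to 64% of the original.
    print(wright_cost(10.0, 1_000_000, 4_000_000))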