
1901 points l2silver | 11 comments

Maybe you've created your own AR program for wearables that shows the definition of a word when you highlight it IRL, or you've built a personal calendar app for your family to display on a monitor in the kitchen. Whatever it is, I'd love to hear it.
cptaj ◴[] No.35740766[source]
This is exactly the type of shit I see benevolent AGI doing for us
replies(5): >>35740864 #>>35742060 #>>35742293 #>>35742410 #>>35744076 #
1. hammyhavoc ◴[] No.35740864[source]
I'm not convinced the costs would make that a viable use of resources versus just making an appropriate product, or using something that already exists like Spotify playlists. Even an LLM is expensive to keep running.
replies(3): >>35741414 #>>35741485 #>>35742546 #
2. smirth ◴[] No.35741414[source]
Why would you keep one running? You don't need a running LLM except perhaps to rotate the playlists; the first time, it might help set up the code. Even the requests can be done with simple queries. Pennies at most for a few thousand tokens every now and then.
replies(1): >>35741624 #
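The "pennies" claim above can be sanity-checked with back-of-envelope arithmetic. Every figure here (per-token price, tokens per call, refresh frequency) is an illustrative assumption, not a quote from any actual provider:

```python
# Rough annual cost of calling an LLM occasionally to rotate a playlist.
# All constants below are assumed, illustrative values.
PRICE_PER_1K_TOKENS = 0.002   # dollars per 1,000 tokens (assumed)
TOKENS_PER_REFRESH = 3_000    # prompt + response size (assumed)
REFRESHES_PER_YEAR = 52       # weekly rotation

annual_cost = PRICE_PER_1K_TOKENS * (TOKENS_PER_REFRESH / 1000) * REFRESHES_PER_YEAR
print(f"${annual_cost:.2f} per year")  # prints "$0.31 per year"
```

Under these assumptions the yearly spend is well under a dollar, which is the shape of the argument being made: occasional inference is cheap; keeping a model running continuously is what gets expensive.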
3. b33j0r ◴[] No.35741485[source]
Hear me out. We can bootstrap these costs by mining crypto! We’ll use the waste heat from that to power the AIs, with 110% Carnot efficiency.

Unfortunately, the physics of this work out so that every playlist generated is composed of Nickelback and “Sweet Caroline” covers.

replies(1): >>35741985 #
4. hammyhavoc ◴[] No.35741624[source]
Why would you need one whatsoever? If someone has already done the work, as in the OP, why not just cut out the hypothetical "benevolent AGI" and use the existing source code?

You're invoking LLMs, but "benevolent AGI" was what got invoked originally. Don't conflate a hypothetical AGI with an existing LLM. Anything of the scale required to create a hypothetical AGI is going to be expensive. Period.

Is grandma really going to use a hypothetical AGI any better than she's able to use Spotify? Come on.

5. birdyrooster ◴[] No.35741985[source]
Nickelback is a fundamental concept of physics; you will always see echoes of it in any work you do. One way to be sure is to snap a picture, look at this photograph, and every time it will make you laugh.
replies(1): >>35743102 #
6. bbarnett ◴[] No.35742546[source]
> Even an LLM is expensive to keep running.

In a decade, it'll cost pennies a year.

replies(3): >>35742777 #>>35743277 #>>35743561 #
7. DougMerritt ◴[] No.35742777[source]
I definitely agree, specifically because of software improvements (not because Moore's Law will make it that cheap).

Conversely, it wouldn't make a lot of sense to predict that it will always be as expensive as it is today.

Well, I guess "pennies" is a radical prediction. Cheap, anyway.

8. b33j0r ◴[] No.35743102{3}[source]
I have often wondered if Chad is the actual source of dark matter:

1. Never made it as a poor man

2. Never made it as a blind man stealing

3. This is how I remind you, that I’m really MOND.

Sometimes the answer is just staring you in the face. A Canadian face, that should have been from San Antonio, TX.

9. hammyhavoc ◴[] No.35743277[source]
But we aren't there yet, and grandma is going to be dead in a decade, and the source code for the radio playlist gimmick already exists, as does Spotify.
10. jacobr1 ◴[] No.35743561[source]
Isn't it already relatively cheap to run? Training is costly, but there are examples of running LLaMA on your laptop. It doesn't seem like it will take decades to commoditize this stuff ... it is just that the cutting edge might keep moving ahead.
replies(1): >>35744090 #
11. hammyhavoc ◴[] No.35744090{3}[source]
Not relative to a permanent and much simpler solution that already exists in the form of the source code for the original radio project mentioned in the OP.

I'll give you an example: fabricating an ASIC is expensive. Using FPGAs is cheaper if the potential sales are low, but they're less performant.
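The ASIC-vs-FPGA trade-off above is a break-even calculation: an ASIC carries a large one-time fabrication (NRE) cost but a low per-unit cost, so it only wins above some sales volume. A toy sketch, where every number is an illustrative assumption rather than a real fabrication quote:

```python
# Illustrative ASIC-vs-FPGA break-even. All cost figures are assumed.
ASIC_NRE = 2_000_000   # one-time fabrication cost in dollars (assumed)
ASIC_UNIT = 5          # per-chip cost in dollars (assumed)
FPGA_UNIT = 80         # per-device cost in dollars (assumed)

def cheaper_option(units: int) -> str:
    """Return which option costs less in total at a given sales volume."""
    asic_total = ASIC_NRE + ASIC_UNIT * units
    fpga_total = FPGA_UNIT * units
    return "ASIC" if asic_total < fpga_total else "FPGA"

# Smallest volume at which the ASIC's NRE is amortized away.
break_even = ASIC_NRE // (FPGA_UNIT - ASIC_UNIT) + 1
print(break_even, cheaper_option(1_000), cheaper_option(100_000))
```

At low volumes the FPGA wins despite its worse unit economics; the same logic is what makes a one-off utility cheaper than an always-on AGI with recurring costs.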

If a hypothetical AGI a decade from now can do the radio gimmick but incurs an ongoing cost, and the gimmick has wide appeal, it makes more sense to build a simple utility.

Better yet, the simple utility already exists and doesn't need a hypothetical "benevolent AGI". It doesn't even need an LLM. It's here today.

This entire sub-thread went off on a tangent of trying to shoehorn AI into somewhere it has no place being, just like the fetishizing of blockchain and the attempts to shoehorn it into everywhere a database would be cheaper, more flexible, and more performant.

A hypothetical "benevolent AGI" is going to be incredibly larger in scale than an LLM, thus much more expensive. You won't be running one on a laptop. We may not even have enough compute globally for a hypothetical "benevolent AGI".