1901 points l2silver | 2 comments

Maybe you've created your own AR program for wearables that shows the definition of a word when you highlight it IRL, or you've built a personal calendar app for your family to display on a monitor in the kitchen. Whatever it is, I'd love to hear it.
cptaj No.35740766
This is exactly the type of shit I see benevolent AGI doing for us
hammyhavoc No.35740864
I'm not convinced the costs would make that a viable use of resources versus just building an appropriate product, or using something that already exists, like Spotify playlists. Even an LLM is expensive to keep running.
smirth No.35741414
Why would you keep one running? You don’t need to run an LLM except perhaps to rotate the playlists. The first time, it might help set up the code. Even making requests can be done with simple queries. Pennies at most for a few thousand tokens every now and then.
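To put a number on "pennies for a few thousand tokens": a back-of-the-envelope sketch in Python. The $0.002 per 1,000 tokens rate is an illustrative assumption; actual pricing varies by provider and model.

```python
# Rough cost of occasional LLM API calls for playlist rotation.
# The $0.002 per 1K tokens price is an ASSUMPTION for illustration;
# real pricing depends on the provider and model.

def llm_call_cost(tokens: int, usd_per_1k_tokens: float = 0.002) -> float:
    """Return the cost in USD of one API call consuming `tokens` tokens."""
    return tokens / 1000 * usd_per_1k_tokens

# Rotating a playlist once a day with ~2,000 tokens per request:
per_call = llm_call_cost(2000)   # 0.004 USD per call
per_month = per_call * 30        # ~0.12 USD per month
print(f"${per_call:.3f} per call, ${per_month:.2f} per month")
```

At those assumed rates, a daily rotation costs on the order of a dime a month, which is the point being made: intermittent calls are cheap compared with keeping a model running continuously.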
hammyhavoc No.35741624
Why would you need one whatsoever? If someone has already done the work, as in the OP, why not just cut out the hypothetical "benevolent AGI" and use the existing source code?

You're invoking LLMs, but "benevolent AGI" was what got invoked originally. Don't conflate a hypothetical AGI with an existing LLM. Anything of the scale required to create a hypothetical AGI is going to be expensive. Period.

Is grandma really going to use a hypothetical AGI any better than she's able to use Spotify? Come on.