
174 points Philpax | 1 comment | source
andrewstuart ◴[] No.43719877[source]
LLMs are basically a library that can talk.

That’s not artificial intelligence.

replies(3): >>43719994 #>>43720037 #>>43722517 #
52-6F-62 ◴[] No.43719994[source]
Grammar engines. Or value matrix engines.

Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus one on a small subtask I can gain some time (a rough draft of a test, say). Anything more advanced and it's a monumental waste of time.

They are not even good librarians. They fail miserably at cross-referencing and contextualizing without constant leading.

replies(2): >>43720038 #>>43720258 #
aaronbaugher ◴[] No.43720258[source]
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.

On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
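That calendar script is exactly the kind of thing that's a speed-up rather than a revolution. A minimal sketch, assuming appointments are kept in a simple date,time,description CSV (a format I'm making up here, not anything the commenter specified):

```python
import csv
from datetime import date

def morning_summary(path, today=None):
    """Return a one-shot summary of today's appointments from a
    CSV whose rows are: date (ISO), time, description."""
    today = today or date.today().isoformat()
    lines = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0] == today:
                lines.append(f"{row[1]}  {row[2]}")
    if not lines:
        return "No appointments today."
    return "Appointments for " + today + ":\n" + "\n".join(sorted(lines))
```

Run it from cron each morning and pipe the result to mail or a push service; "record a new one" is just appending a row to the CSV. The LLM adds nothing to this loop except maybe friendlier wording.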

I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
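And the "say it nicer" part is the only step that needs a model at all. The umbrella logic itself is a few deterministic lines; in this sketch the forecast dict shape and the LLM/phone steps are placeholders, not any real API:

```python
def umbrella_reminder(forecast):
    """Turn a forecast dict into a one-line reminder.
    The dict shape ({"condition": ..., "rain_chance": ...}) is a
    made-up example; a real script would map it from whatever the
    weather API returns, then optionally ask an LLM to reword the
    message and push it to a phone."""
    msg = f"Today: {forecast['condition']}, {forecast['rain_chance']}% chance of rain."
    if forecast["rain_chance"] >= 50:
        msg += " Take an umbrella."
    return msg
```

Everything the LLM contributes on top of this is tone, which is the commenter's point: neat, not a big deal.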

Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.

replies(1): >>43735759 #
namaria ◴[] No.43735759[source]
Beware of Gell-Mann amnesia, validation bias, and plain nonsense written into the summaries LLMs produce.

I once fed ChatGPT a PDF of activity codes from a local tax authority and asked how to classify some things I was interested in doing. It invented codes that didn't exist.
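One cheap guard against that failure mode is to never trust a model-suggested code without checking it against the authoritative list you already have. A minimal sketch (the function and names are illustrative, not from the anecdote):

```python
def validate_codes(llm_codes, official_codes):
    """Split codes suggested by an LLM into (valid, invented),
    where `official_codes` is the authoritative list extracted
    from the source document itself."""
    official = set(official_codes)
    valid = [c for c in llm_codes if c in official]
    invented = [c for c in llm_codes if c not in official]
    return valid, invented
```

Anything in the `invented` bucket is a hallucination by construction, which is exactly the case described above.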

I would be very, very careful about asking any LLM to organize data for me and then trusting the output.

As for "life advice" type of thing, they are very sycophantic. I wouldn't go to a friend who always agrees with me enthusiastically for life advice. That sort of yes man behavior is quite toxic.