
174 points Philpax | 6 comments
andrewstuart No.43719877
LLMs are basically a library that can talk.

That’s not artificial intelligence.

replies(3): >>43719994 >>43720037 >>43722517
1. 52-6F-62 No.43719994
Grammar engines. Or value matrix engines.

Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus them on a small subtask I can gain some time (a rough draft of a test). Anything more advanced and it's a monumental waste of time.

They are not even good librarians. They fail miserably at cross-referencing and contextualizing without constant leading.

replies(2): >>43720038 >>43720258
2. andrewstuart No.43720038
I feel the opposite.

LLMs are unbelievably useful for me - never have I had a more powerful tool to assist my thinking. I use LLMs for work and play constantly, every day.

It pretends to be a person, can mimic speech and writing, and is all around perhaps the greatest wonder created by humanity.

It’s still not artificial intelligence though, it’s a talking library.

replies(1): >>43720148
3. 52-6F-62 No.43720148
Fair. For engineering work they have been a terrible drain on me, save for the most minor autocomplete. Their recommendations are often deeply flawed or almost totally hallucinated, no matter the model. Maybe I am a better software engineer than a “prompt engineer”.

I've tried to use them as a research assistant in a history project, and they have also been quite bad in that respect because of the immense naivety of their approaches.

I couldn’t call them librarians, because librarians are studied and trained in cross-referencing material.

They have helped me in some searches, but no better than a search engine would, and at a monumentally higher investment cost to the industry.

Then again, I am also speaking as someone who doesn’t like to offload all of my communications to those things. Use it or lose it, eh

replies(1): >>43720263
4. aaronbaugher No.43720258
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.
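
A rough sketch of that indexing idea, for the curious. This assumes OpenAI's Python SDK and an API key; the folder, model name, and snippet limit are placeholders, not anything canonical:

    # Minimal semantic index over text files: embed once, search by cosine similarity.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return [d.embedding for d in resp.data]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

    paths = list(Path.home().glob("notes/**/*.txt"))  # placeholder corpus
    docs = [p.read_text(encoding="utf-8", errors="ignore")[:8000] for p in paths]
    index = list(zip(paths, embed(docs)))  # embed once, reuse for every query

    def search(query, k=5):
        qv = embed([query])[0]
        ranked = sorted(index, key=lambda pv: cosine(qv, pv[1]), reverse=True)
        return [str(p) for p, _ in ranked[:k]]

    print(search("that note about backup rotation"))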

On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
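
That said, the glue script really is short. Here's roughly what I mean, assuming a local .ics file, the icalendar package, and an OpenAI-style chat API; the file name and model are stand-ins:

    # Morning digest: pull today's events from an .ics file, let an LLM phrase them.
    # Assumes `pip install icalendar openai` and OPENAI_API_KEY set; cron it each morning.
    import datetime
    from icalendar import Calendar
    from openai import OpenAI

    def todays_events(path):
        cal = Calendar.from_ical(open(path, "rb").read())
        today = datetime.date.today()
        events = []
        for ev in cal.walk("VEVENT"):
            start = ev.get("DTSTART").dt
            day = start.date() if isinstance(start, datetime.datetime) else start
            if day == today:
                events.append(f"{start}: {ev.get('SUMMARY')}")
        return events

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; any chat model works
        messages=[{"role": "user",
                   "content": "Write a two-sentence morning summary of these "
                              "appointments:\n" + "\n".join(todays_events("calendar.ics"))}],
    )
    print(resp.choices[0].message.content)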

I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
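
For reference, the whole pipeline it described fits in a page. This sketch assumes wttr.in for the forecast, xAI's OpenAI-compatible endpoint (the model name may have changed), and an ntfy.sh topic for the phone push; all of those are my stand-ins, not anything Grok prescribed:

    # Fetch a forecast, have Grok phrase it, push the result to a phone via ntfy.sh.
    # Assumes `pip install requests openai` and XAI_API_KEY set; endpoint and model
    # follow xAI's OpenAI-compatible API and may have changed since this was written.
    import json, os, requests
    from openai import OpenAI

    forecast = requests.get("https://wttr.in/?format=j1", timeout=30).json()
    today = json.dumps(forecast["weather"][0])[:4000]  # first day's data, trimmed

    client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")
    resp = client.chat.completions.create(
        model="grok-beta",  # illustrative; check the current model list
        messages=[{"role": "user",
                   "content": "Summarize this forecast in two friendly sentences and "
                              "say whether I need an umbrella:\n" + today}],
    )

    # ntfy.sh turns a plain POST into a push notification on any subscribed phone;
    # "my-weather-topic" is a made-up topic name.
    requests.post("https://ntfy.sh/my-weather-topic",
                  data=resp.choices[0].message.content.encode("utf-8"))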

Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.

replies(1): >>43735759
5. andrewstuart No.43720263
I’m curious: you’re a developer who finds no value in LLMs?

It’s weird to me that there’s such a giant gap with my experience of it being a minimum 10x multiplier.

6. namaria No.43735759
Beware of Gell-Mann amnesia, validation bias, and plain nonsense written into the summaries LLMs produce.

I fed ChatGPT a PDF file with activity codes from a local tax authority and asked how I could classify some things I was interested in doing. It invented codes that didn't exist.

I would be very very careful about asking any LLM to organize data for me and trusting the output.
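
One cheap guard that would have caught that: never accept a code the model returns without checking it against the source list. A sketch, with hypothetical file and code values:

    # Guard against hallucinated codes: accept only values present in the source list.
    # "valid_codes.txt" (one code per line) and the sample codes are hypothetical.
    def load_valid_codes(path="valid_codes.txt"):
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}

    def check(llm_codes, valid):
        confirmed = [c for c in llm_codes if c in valid]
        invented = [c for c in llm_codes if c not in valid]
        return confirmed, invented

    confirmed, invented = check(["6201", "9999"], load_valid_codes())
    if invented:
        print("LLM invented codes not in the source:", invented)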

As for the "life advice" type of thing, they are very sycophantic. I wouldn't go to a friend who always agrees with me enthusiastically for life advice. That sort of yes-man behavior is quite toxic.