263 points by itzlambda | 2 comments
lsy | No.44608975
If you have a decent understanding of how LLMs work (you put in basically every piece of text you can find, get a statistical machine that models text really well, then use contractors to train it to model text in conversational form), then you probably don't need to consume a big diet of ongoing output from PR people, bloggers, thought leaders, and internet rationalists. That seems likely to get you going down some millenarian path that's not helpful.
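To make the "statistical machine that models text" part concrete, here's a toy caricature in Python: a character-level bigram model. It's nowhere near a transformer and skips the conversational-tuning step entirely, but the basic move of counting what tends to come next and then sampling from those counts is the same idea in miniature.

    # Toy "statistical model of text": count which character follows which,
    # then generate by repeatedly sampling from those counts.
    from collections import defaultdict, Counter
    import random

    corpus = "the cat sat on the mat. the dog sat on the log."

    # "Training": estimate P(next char | current char) from raw text.
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def sample_next(ch):
        # Sample the next character in proportion to how often it followed ch.
        options = counts.get(ch)
        if not options:
            return random.choice(corpus)
        chars, weights = zip(*options.items())
        return random.choices(chars, weights=weights)[0]

    def generate(prompt, n=40):
        out = prompt
        for _ in range(n):
            out += sample_next(out[-1])
        return out

    print(generate("the "))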

Despite the feeling that it's a fast-moving field, most of the differences in actual models over the last few years are in degree rather than kind, and the majority of ongoing work is in tooling and integrations, which you can probably keep up with as it seems useful for your work. Remembering that it's a model of text and is ungrounded goes a long way toward discerning what kinds of work it's useful for (where verification of output is either straightforward or unnecessary), and what kinds of work it's not.
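A small sketch of the "verification is straightforward" case: if a model drafts a routine for you, you don't have to trust it, you can hammer it with random checks against a known-good reference before using it. The llm_suggested_sort below is just a stand-in for pasted-in model output.

    import random

    def llm_suggested_sort(xs):
        # Stand-in for a model-drafted function; in practice this would be
        # whatever code the model actually produced.
        return sorted(xs)

    # Cheap verification: compare against a trusted reference on random inputs.
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert llm_suggested_sort(xs) == sorted(xs), xs
    print("all 1000 random checks passed")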

replies(12): >>44609211 #>>44609259 #>>44609322 #>>44609630 #>>44609864 #>>44609882 #>>44610429 #>>44611712 #>>44611764 #>>44612491 #>>44613946 #>>44614339 #
qsort | No.44609259
I agree, but with the caveat that it's probably a bad time to fall asleep at the wheel. I'm very much a "nothing ever happens" kind of guy, but I see a lot of people who aren't taking the time to actually understand how LLMs work, and I think that's a huge mistake.

Last week I showed some colleagues how to do some basic things with Claude Code and they were like "wow, I didn't even know this existed". Bro, what are you even doing.

There is definitely a lot of hype and the lunatics on LinkedIn are having a blast, but, to put it mildly, I don't think it's a bad investment to experiment a bit with what's possible with the SOTA.

replies(3): >>44609316 #>>44609385 #>>44610477 #
chamomeal | No.44609316
I mean, I didn't find out about Claude Code until like a week ago, and it hasn't materially changed my work, or even how I interact with LLMs. I still basically copy-paste into Claude on the web most of the time.

It is ridiculously cool, but I think any developer who is out of the loop could easily get back into the loop at any moment, without having to stay caught up in the meantime.

replies(1): >>44609382 #
qsort | No.44609382
I'm not talking about tools in particular; I completely agree that they're basically fungible, and for "serious" stuff it's probably still better to use the web interface directly, as you have more control over the context.

The problem I see is that a lot of people are grossly misaligned with the state of the art, and it does take a bit of experimentation to understand how to work with an LLM. Even basic stuff like how to work with context isn't immediately obvious.
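As a rough illustration of the context point, the habit below is the kind of thing that isn't obvious until you've tried it: hand-pick the files that actually matter and keep the prompt within a budget, rather than dumping a whole repo at the model. The file names and the failing test here are made up for the example.

    from pathlib import Path

    relevant = ["src/parser.py", "tests/test_parser.py"]  # hypothetical paths
    budget_chars = 20_000  # rough ceiling so the prompt stays focused

    parts = []
    for name in relevant:
        p = Path(name)
        if p.exists():
            parts.append(f"### {name}\n{p.read_text()}")

    context = "\n\n".join(parts)[:budget_chars]
    prompt = context + "\n\nQuestion: why does test_parse_empty fail?"
    print(f"prompt is {len(prompt)} characters")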

replies(1): >>44609627 #
Fraterkes | No.44609627
I don't think you're wrong, but if it takes someone a month (at most) to get up to speed with these tools, I don't think that's much of an argument for closely keeping up with them (until you need to know them to keep your job or something), especially because everything is changing every few months. There is arguably no technology that needs you to "keep up" with it less.