
646 points bradgessler | 2 comments
ay No.44010366
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.

Besides that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”, but after a bunch of pictures you see the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you notice the evidently repetitive style - in both the rhymes and the music.

Using NotebookLM to create podcasts feels amazing at first, as if it were about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.

Again, with generated texts: they acquire a distant metallic taste that is hard to ignore after a while.

The search function is okay, but with a little bit of a nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”. I always recheck it, and often run two competing queries in which I push the LLM toward opposing viewpoints, then learn from both.

Using AI to generate code: simple things are OK, but for non-trivial tasks it introduces pretty subtle bugs, which force me to make sure I understand every line. This bit is the most fun - the bug hunt is actually entertaining, as they are often the same bugs a human would make.
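
To make that concrete, here is a hypothetical illustration (mine, not actual model output) of the kind of bug I mean. It compiles cleanly and looks plausible, yet is wrong in a way you only catch if you know the libc subject matter:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char name[8];
        /* Subtle bug: strncpy does not null-terminate when the source
           fills the buffer exactly, so the printf below reads past
           the end of 'name' (undefined behavior). */
        strncpy(name, "engineer", sizeof(name));
        /* The missing fix: name[sizeof(name) - 1] = '\0'; */
        printf("%s\n", name);
        return 0;
    }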

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, has brought me back to the fun of writing; so it is literally the opposite effect from the one the article’s author describes…

replies(3): >>44010447 #>>44011377 #>>44012720 #
jstummbillig No.44010447
Maybe you are not that great at using the most current LLMs, or you don't want to be? I increasingly find that to be the most likely answer whenever somebody makes sweeping claims about the impotence of LLMs.

I get more use out of them every single day, and certainly with every model release (mostly for generating decidedly non-trivial code), and it's not subtle.

replies(3): >>44010648 #>>44010680 #>>44011834 #
ay No.44010680
It could totally be the case that, as I wrote in the very first sentence, I am holding it wrong.

But I am not saying LLMs are impotent - the other week Claude happily churned out ~3,500 lines of C code for me that implemented a prototype capture facility for network packets, with flexible filters and saving of the contents into pcapng files. I had to fix a couple of bugs it introduced, but overall it was certainly at least a 5x-10x productivity improvement compared to typing those lines of code by hand. I don’t dispute that it’s a pretty useful tool for coding, or as a thinking assistant (see the last paragraph of my comment).
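
For a sense of the shape of that program, here is a heavily stripped-down sketch of the core capture loop using libpcap. The interface name and filter string are placeholders, and pcap_dump writes the classic pcap format rather than pcapng, so this is only the skeleton of what Claude actually generated:

    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void) {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* Open the interface for live capture (placeholder device name). */
        pcap_t *h = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!h) { fprintf(stderr, "%s\n", errbuf); return 1; }

        /* Compile and install a BPF filter (placeholder expression). */
        struct bpf_program fp;
        if (pcap_compile(h, &fp, "tcp port 443", 1, PCAP_NETMASK_UNKNOWN) < 0 ||
            pcap_setfilter(h, &fp) < 0) {
            fprintf(stderr, "%s\n", pcap_geterr(h));
            return 1;
        }

        /* Write matching packets to a capture file, 100 packets then stop. */
        pcap_dumper_t *out = pcap_dump_open(h, "capture.pcap");
        if (!out) { fprintf(stderr, "%s\n", pcap_geterr(h)); return 1; }
        pcap_loop(h, 100, pcap_dump, (u_char *)out);

        pcap_dump_close(out);
        pcap_close(h);
        return 0;
    }

(Build with -lpcap; the real tool's flexible filtering and pcapng writing are where the other ~3,400 lines went.)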

What I challenged is the submissive, self-deprecating adoration across the entire spectrum.

replies(1): >>44013632 #
jstummbillig No.44013632
Reading this, I am not sure I got the gist of your previous post. Re-reading the previous post, I still don't see how the two posts gel. I submit we might just have very different interpretations of the same observations. For example, I have a hard time imagining the described 3,500 LOC program as 'simple'. Limited in scope, sure. But if you got it done 5-10x faster, then it can't be that simple?

Anyway: I found the writer's perspective on this whole subject interesting, and I agree on the merits. I definitely think their analysis and outlook are correct (and here the two of us apparently disagree), but I don't share their concluding feelings.

But I can see how they got there.

replies(1): >>44017533 #
ay No.44017533
I suspect we do indeed differ in terminology.

I draw a distinction between “simple” and “easy”.

Digging a 1 m × 1 m × 1 m pit is simple - just displace a cubic meter of soil - but it is not easy, as it’s a lot of monotonous physical work.

A small excavator makes the task easy, but arguably less simple, since now you also need to know how to operate the excavator.

LLMs make a lot of coding tasks “easy” by being this small excavator. But they do not always make them “simple”: more often than not they introduce bugs, and to fix those you need to understand the subject matter, so they don’t eliminate the need to learn it.

Does this make sense?