645 points bradgessler | 14 comments
1. ay ◴[] No.44010366[source]
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.

Besides that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”, but after a bunch of pictures you see the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.

Using NotebookLM to create podcasts feels amazing at first, as if it were about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.

The same goes for generated text: it takes on a distant metallic taste that is hard to ignore after a while.

The search function is okay, but with a little bit of a nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”: I always recheck it, and often run two competing queries where I push the LLM toward opposing viewpoints and learn from both.

Using AI to generate code: simple things are OK, but for non-trivial items it introduces pretty subtle bugs, which force me to make sure I understand every line. This bit is the most fun - the bug quest is actually entertaining, as they are often the same bugs humans would make.

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it is literally the opposite effect compared to the article’s author…

replies(3): >>44010447 #>>44011377 #>>44012720 #
2. jstummbillig ◴[] No.44010447[source]
Maybe you are not that great at using the most current LLMs, or you don't want to be? I increasingly find that to be the most likely answer whenever somebody makes sweeping claims about the impotence of LLMs.

I get more use out of them every single day, and certainly with every model release (mostly for generating absolutely non-trivial code), and it's not subtle.

replies(3): >>44010648 #>>44010680 #>>44011834 #
3. abathologist ◴[] No.44010648[source]
What kind of problems are you solving day-to-day where the LLMs are doing heavy lifting?
replies(1): >>44013326 #
4. ay ◴[] No.44010680[source]
Could totally be the case that, as I wrote in the very first sentence, I am holding it wrong.

But I am not saying LLMs are impotent - just the other week Claude happily churned out ~3500 lines of C code for me, which let me implement a prototype capture facility for network packets, with flexible filters and saving the contents into pcapng files. I had to fix a couple of bugs it made, but overall it was certainly at least a 5x-10x productivity improvement compared to typing those lines of code by hand. I don’t dispute that it’s a pretty useful tool in coding, or as a thinking assistant (see the last paragraph of my comment).
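
To give a sense of the shape of that kind of task, a bare-bones libpcap capture loop looks roughly like the sketch below. This is not the generated program, just a minimal illustration: the device name, filter expression, and output path are placeholders, and plain pcap_dump_open writes the classic pcap format rather than pcapng.

    /* Minimal sketch: capture packets matching a BPF filter and append them
     * to a savefile with libpcap. Build with: cc capture.c -lpcap */
    #include <pcap/pcap.h>
    #include <stdio.h>

    int main(void) {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* "eth0" is a placeholder device; snaplen 65535, promiscuous mode, 1s timeout */
        pcap_t *handle = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!handle) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

        /* Compile and attach a BPF filter ("tcp port 443" is just an example) */
        struct bpf_program prog;
        if (pcap_compile(handle, &prog, "tcp port 443", 1, PCAP_NETMASK_UNKNOWN) == -1 ||
            pcap_setfilter(handle, &prog) == -1) {
            fprintf(stderr, "filter: %s\n", pcap_geterr(handle));
            return 1;
        }

        /* Open the savefile and dump the next 100 matching packets into it */
        pcap_dumper_t *dumper = pcap_dump_open(handle, "out.pcap");
        if (!dumper) { fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(handle)); return 1; }
        pcap_loop(handle, 100, pcap_dump, (u_char *)dumper);

        pcap_dump_close(dumper);
        pcap_close(handle);
        return 0;
    }

Here pcap_loop hands each captured packet to pcap_dump, which appends it to the savefile; a fuller tool would swap in its own callback to do custom filtering and pcapng output, and that is where the bulk of the generated lines would go.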

What I challenged is the submissive, self-deprecating adoration across the entire spectrum.

replies(1): >>44013632 #
5. fennecbutt ◴[] No.44011377[source]
>evidently repetitive “style”.

Use LoRAs, write better prompts. I've done a lot of diffusion work, and especially in 2025 it's not difficult to get something quite good out of it.

Repetitive style is funny, because that's what human artists do for the most part. I'm a furry; I look at a lot of art, and individual styles are a well-established fact.

replies(1): >>44012236 #
6. guyfhuo ◴[] No.44011834[source]
> Maybe you are not that great at using the most current LLMs or you don't want to be?

I’m tired of this argument. I’ll even grant you: both sides of it.

It seems as though we prepared ourselves to respond to LLMs in this manner: people memed, or simply recognized, that there was a “way” to ask questions to get better results, back when ranked search broadened the appeal of search engines.

The reality is that both you and the OP are talking about your opinions of the thing, but leaving out the thing itself.

You could say “git gud”, but what if you showed the OP what “gud” output looks like to you, and they recognized it as the same sort of output they were calling repetitive?

It’s ambiguity based on opinion.

I fear so many are talking past each other.

Perhaps linking to example prompts and outputs that can be directly discussed is the only way to give specificity to the ambiguous language.

replies(1): >>44013504 #
7. socalgal2 ◴[] No.44012236[source]
Yes, most human artists have a repetitive style. In fact, that's often how you recognize who made a piece of art.
replies(1): >>44012439 #
8. suddenlybananas ◴[] No.44012439{3}[source]
Yeah but the difference is that style is sometimes actually interesting and not completely banal.
9. sabakhoj ◴[] No.44012720[source]
We need to think carefully about which tasks are actually suitable for LLMs. Used poorly, they'll gut our ability to think deeply. The push, IMO, should be toward using them for verification and clarification, not as replacements for understanding and creativity.

Example: Do the problem sets yourself. If you're getting questions wrong, dig deeper with an AI assistant to find gaps in your knowledge. Do NOT let the AI do the problem sets first.

I think it's similar to how we used calculators in school, at least in the 2010s: we learned the principles behind the formulae and how to work them manually before the calculators were introduced to abstract away the mechanics.

I've let that core principle shape some of how we're designing our paper-reading assistant, but still thinking through the UX patterns -- https://openpaper.ai/blog/manifesto.

replies(1): >>44017563 #
10. Madmallard ◴[] No.44013326{3}[source]
Agree

They can't do anything elaborate or interesting for me beyond tiny pet-project proofs of concept. They can potentially help me uncover a bug, explain some code, or implement a small feature.

As soon as the complexity of the feature goes up either in its side-effects, dependencies, or the customization of the details of the feature, they are quite unhelpful. I doubt even one senior engineer at a large company is using LLMs for major feature updates in codebases that have a lot of moving parts and significant complexity and many LOC.

11. jstummbillig ◴[] No.44013504{3}[source]
The problem is that, knowing the public internet, what would absolutely happen is people arguing the ways in which

a) the code is bad
b) the problem is beneath what they consider non-trivial

The way OP structured the response, I frankly got a similar impression (although the follow-up feels much different). I just don't see the point in engaging with that here, but I take your criticism: why engage at all? I should probably not, then.

12. jstummbillig ◴[] No.44013632{3}[source]
Reading this, I am not sure I got the gist of your previous post. Re-reading the previous post, I still don't see how the two posts gel. I submit we might just have very different interpretations of the same observations. For example, I have a hard time imagining the described 3500 LOC program as 'simple'. Limited in scope, sure. But if you got it done 5-10x faster, then it can't be that simple?

Anyway: I found the writer's perspective on this whole subject interesting, and I agree on the merits. I definitely think they are correct in their analysis and outlook (here the two of us apparently disagree), but I don't share their concluding feelings.

But I can see how they got there.

replies(1): >>44017533 #
13. ay ◴[] No.44017533{4}[source]
I suspect we do indeed have a difference in terminology.

I draw a distinction between “simple” and “easy”.

Digging a pit of 1m * 1m * 1m is simple - just displace a cubic meter of soil; but it is not easy, as it’s a lot of monotonous physical work.

A small excavator makes the task easy but arguably less simple since now you need to also know how to operate the excavator.

LLMs make a lot of coding tasks “easy” by being this small excavator. But they do not always make them “simple” - more often than not they introduce bugs, and to fix those you need to understand the subject matter, so they don’t eliminate the need to learn it.

Does this make sense?

14. ay ◴[] No.44017563[source]
Agreed, and the analogy with calculators is very apt.