
645 points bradgessler | 1 comment
ay No.44010366
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.

Besides that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”; but after a bunch of pictures you see the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you can see the evidently repetitive style - for both rhymes and music.

Using NotebookLM to create podcasts at first feels amazing, as if about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is out here.

Again, with generated texts: they acquire a distant metallic taste that is hard to ignore after a while.

The search function is okay, but with a little bit of nudging one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”; I always recheck it, and try to run two competing sessions where I nudge the LLM into taking opposing viewpoints and learn from both.

Using AI to generate code - simple things are OK, but for non-trivial items it introduces pretty subtle bugs, which require me to make sure I understand every line. This bit is the most fun - the bug hunt is actually entertaining, as these are often the same bugs humans would make.

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it’s literally the opposite effect compared to the article’s author…

replies(3): >>44010447 #>>44011377 #>>44012720 #
jstummbillig No.44010447
Maybe you are not that great at using the most current LLMs, or you don't want to be? I find that increasingly to be the most likely answer whenever somebody makes sweeping claims about the impotence of LLMs.

I get more use out of them every single day, and certainly with every model release (mostly for generating absolutely non-trivial code), and it's not subtle.

replies(3): >>44010648 #>>44010680 #>>44011834 #
guyfhuo No.44011834
> Maybe you are not that great at using the most current LLMs or you don't want to be?

I’m tired of this argument. I’ll even grant you: both sides of it.

It seems as though we prepared ourselves to respond to LLMs in this manner, with people memeing, or simply recognizing, that there was a “way” to ask questions to get better results, back when ranked search first broadened the appeal of search engines.

The reality is that both you and the op are talking about the opinion of the thing, but leaving out the thing itself.

You could say “git gud”, but what if you showed OP what “gud” output looks like to you, and they recognized it as the same sort of output they were calling repetitive?

It’s ambiguity based on opinion.

I fear so many are talking past each other.

Perhaps linking to example prompts and outputs that can be directly discussed is the only way to give specificity to the ambiguous language.

replies(1): >>44013504 #
jstummbillig No.44013504
The problem is that, knowing the public internet, what would absolutely happen is people arguing the ways in which:

a) the code is bad
b) the problem is beneath what they consider non-trivial

From the way OP structured the response, I frankly got a similar impression (although the follow-up feels much different). I just don't see the point in engaging with that here, but I take your criticism: why engage at all? I should probably not, then.