
646 points bradgessler | 2 comments
ay No.44010366
Very strange. Either the author uses some magic AI, or I am holding it wrong. I have been using LLMs for a couple of years now, as a nice tool.

Besides that:

I have tried using LLMs to create cartoon pictures. The first impression is “wow”, but after a bunch of pictures you notice the evidently repetitive “style”.

Using LLMs to write poetry is also quite cool at first, but after a few iterations you see the same evidently repetitive “style”, which is bland and lacks depth and substance.

Using LLMs to render music is amazing at first, but after a while you notice the evidently repetitive style, in both the rhymes and the music.

Using NotebookLM to create podcasts at first feels amazing, as if about to open the gates of knowledge; but then you notice that the flow is very repetitive, and that the “hosts” don’t really show enough understanding to make it interesting. Interrupting them with questions somewhat dilutes this impression, though, so the jury is still out here.

Again, with generated texts, they take on a distant metallic taste that is hard to ignore after a while.

The search function is okay, but with a little nudge one can influence the resulting answer by a lot, so I am wary of blindly taking the “advice”: I always recheck it, and I often run two competing queries, nudging the LLM toward opposing viewpoints, and learn from both.

Using the AI to generate code: simple things are OK, but for non-trivial items it introduces pretty subtle bugs, which force me to make sure I understand every line. This bit is the most fun - the bug hunt is actually entertaining, as these are often the same bugs humans would make.
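As a hypothetical illustration (my example, not the commenter's), here is a Python sketch of the kind of subtle, human-like bug that often slips into generated code - the classic mutable-default-argument pitfall, where the code looks correct and passes a casual first run:

```python
# Subtle bug: the default list is created ONCE, at function
# definition time, and then shared across every call.
def append_tag_buggy(tag, tags=[]):
    tags.append(tag)
    return tags

print(append_tag_buggy("a"))  # ['a']
print(append_tag_buggy("b"))  # ['a', 'b']  <- surprise: state leaked between calls

# The idiomatic fix: use None as a sentinel and build a fresh list per call.
def append_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(append_tag("a"))  # ['a']
print(append_tag("b"))  # ['b']
```

The buggy version behaves correctly on the first call, which is exactly why this class of bug survives a quick review - you only catch it by understanding every line, as the comment describes.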

So, I don’t see the same picture, but something close to the opposite of what the author sees.

Having an easy outlet to bounce quick ideas off, and a source of relatively unbiased feedback, brought me back to the fun of writing; so it's literally the opposite effect compared to what the article's author describes…

replies(3): >>44010447 #>>44011377 #>>44012720 #
1. sabakhoj No.44012720
We need to think carefully about which tasks are actually suitable for LLMs. Used poorly, they'll gut our ability to think deeply. The push, IMO, should be toward using them for verification and clarification, not as replacements for understanding and creativity.

Example: Do the problem sets yourself. If you're getting questions wrong, dig deeper with an AI assistant to find gaps in your knowledge. Do NOT let the AI do the problem sets first.

I think it's similar to how we used calculators in school, in the 2010s at least: we learned the principles behind the formulae and how to work them manually before calculators were introduced to abstract away the mechanics.

I've let that core principle shape some of how we're designing our paper-reading assistant, though we're still thinking through the UX patterns -- https://openpaper.ai/blog/manifesto.

replies(1): >>44017563 #
2. ay No.44017563
Agreed, and the analogy with calculators is very apt.