
388 points pseudolus | 1 comment | source
apercu ◴[] No.43486266[source]
I'm guessing that the people who most espouse the virtues of AI do not "test" the output much and just let LLMs pump out errors.

I use LLMs daily, but mostly as a tool to brainstorm, or to write small parts of scripts (e.g., shell, not TV shows). But everything has to be verified.

Last weekend I was using ChatGPT Music Teacher (or trying to, anyway) to prepare some voice-leading exercises for guitar. I spent almost half an hour trying to get that model, and then the base ChatGPT model, to give correct information about inversions and the notes in the chords. It was laughably wrong over and over again.

It would misidentify chords, claim that a chord had the basic attributes of a triad (root, third, fifth) while giving me a shape that contained the root twice and a third, and call that a second inversion. Or it would give incorrect fret/note information.
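The claim above is easy to check mechanically. A minimal sketch (mine, not from the thread; the note-name-to-pitch-class table is standard) that names a triad voicing's inversion from its bass note:

```python
# Sketch: identify a triad's inversion from which chord tone is in the bass.
# Root position = root in the bass, first inversion = third in the bass,
# second inversion = fifth in the bass.

NOTE_TO_PC = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
              "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def inversion(notes, root):
    """notes: note names ordered bass to treble; root: the chord's root."""
    interval = (NOTE_TO_PC[notes[0]] - NOTE_TO_PC[root]) % 12
    return {0: "root position",
            3: "first inversion",   # minor third in the bass
            4: "first inversion",   # major third in the bass
            7: "second inversion"}.get(interval, "not a simple triad voicing")

# The shape described above -- root doubled, plus a third:
print(inversion(["C", "E", "C"], "C"))  # -> root position
# An actual second inversion has the fifth in the bass:
print(inversion(["G", "C", "E"], "C"))  # -> second inversion
```

The point the check makes concrete: a voicing with a doubled root and a third has no fifth at all, let alone in the bass, so it cannot be a second inversion.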

If I didn't know theory and how intervals work on a guitar I would have been pretty screwed.
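The fret/note claims are just as mechanical to verify, which is what knowing how intervals work on the fretboard amounts to. A sketch (again mine, assuming standard tuning), where each fret raises the open string's pitch by one semitone:

```python
# Sketch: map (string, fret) to a note name in standard tuning.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
OPEN_STRINGS = {6: "E", 5: "A", 4: "D", 3: "G", 2: "B", 1: "E"}  # low to high

def note_at(string, fret):
    """Note name at a given fret; each fret is one semitone above the open string."""
    return NAMES[(NAMES.index(OPEN_STRINGS[string]) + fret) % 12]

print(note_at(6, 3))  # -> G (low E string, 3rd fret)
print(note_at(2, 1))  # -> C (B string, 1st fret)
```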

As it was, I wasted a half hour and never got anything usable.

I'm not saying that the technology isn't fairly amazing, but like, don't believe the hype.

replies(3): >>43488814 #>>43489592 #>>43494208 #
unclad5968 ◴[] No.43489592[source]
I've wasted so much time trying to get LLMs to help me code. One issue I have is that I can never seem to get the AI to say the word "no". No matter what I ask, it will say "Absolutely! You can solve [impossible problem] like so...". At this point I basically use them as documentation search engines, asking things like "does this library have a function to do X?". Gemini and DeepSeek seem to be good enough at that.

I've entirely given up on using LLMs for exploratory exercises.

replies(1): >>43491804 #
1. toxik ◴[] No.43491804[source]
Or the old loop: ask question Q1, get wrong answer A1, explain why it's wrong in Q2, get answer A2 that hyper-focuses on Q2 and misses important parts of Q1, restate Q1 and get A1 again, repeat ad nauseam.