
388 points | pseudolus | 1 comment
apercu ◴[] No.43486266[source]
I'm guessing that the people who most espouse the virtues of AI do not "test" the output much and just let LLMs pump out errors.

I use LLMs daily, but mostly as a tool to brainstorm or to write small parts of scripts (e.g., shell, not TV shows). But everything has to be verified.

Last weekend I was using ChatGPT Music Teacher (or trying to, anyway) to prep some voice-leading exercises for guitar. I spent almost half an hour trying to get that model, and then the base ChatGPT model, to give correct information about inversions and the notes in the chords. It was laughably wrong, over and over again.

It would misidentify chords, claim a shape contained all the tones of a triad (root, third, fifth) while actually giving me a voicing with the root doubled and only a third, and then call that a second inversion. Or it would give incorrect fret/note information.
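
For reference, naming an inversion is mechanical: it comes down to which chord tone sits in the bass. Here is a minimal Python sketch of that check (the C major spelling below is just an illustrative assumption, not anything the model produced):

    # Map note names to pitch classes (C=0 ... B=11).
    NOTE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    def inversion(root, third, fifth, bass):
        """Name a triad's inversion from which chord tone is in the bass."""
        if bass == root:
            return "root position"     # root in the bass
        if bass == third:
            return "first inversion"   # third in the bass
        if bass == fifth:
            return "second inversion"  # fifth in the bass
        return "bass is not a chord tone"

    # C major (C-E-G) voiced with G in the bass, i.e. C/G:
    print(inversion(NOTE["C"], NOTE["E"], NOTE["G"], NOTE["G"]))  # second inversion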

If I didn't know theory and how intervals work on a guitar, I would have been pretty screwed.
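
Checking the fret/note claims by hand is the same kind of thing: each fret raises the open string by one semitone. A tiny sketch, assuming standard tuning (low to high: E A D G B E):

    NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    OPEN = {6: "E", 5: "A", 4: "D", 3: "G", 2: "B", 1: "E"}  # string number -> open note

    def note_at(string, fret):
        """Note name at `fret` on `string`; each fret is one semitone up."""
        return NAMES[(NAMES.index(OPEN[string]) + fret) % 12]

    print(note_at(5, 3))  # 3rd fret on the A string -> "C"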

As it was, I wasted a half hour and never got anything usable.

I'm not saying that the technology isn't fairly amazing, but like, don't believe the hype.

replies(3): >>43488814 #>>43489592 #>>43494208 #
1. kjkjadksj ◴[] No.43494208[source]
It's because ChatGPT does not have the right answer. It has a trove of old forum posts with those keywords strung together in them, and it guesses what word is liable to come next based on that dataset.

You can see how this is more like listening to a crowded room and relaying random words than actually learning by understanding ideas and building on them.
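
A toy caricature of that "guess what word is liable to come next" idea, as a bigram count over a tiny corpus (real LLMs are vastly more capable than this, so take it only as an illustration of the framing):

    import random
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny example corpus.
    corpus = "the second inversion has the fifth in the bass".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        """Sample a next word in proportion to how often it followed `prev`."""
        options = counts[prev]
        if not options:
            return None
        words, weights = zip(*options.items())
        return random.choices(words, weights=list(weights))[0]

    print(next_word("the"))  # e.g. "second", "fifth", or "bass"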