I use LLMs daily, but mostly as a brainstorming tool, or to write small parts of scripts (e.g., shell, not TV shows). Everything has to be verified, though.
Last weekend I was using ChatGPT Music Teacher (or trying to, anyway) to prep some voice-leading exercises for guitar. I spent almost half an hour trying to get that model, and then the base ChatGPT model, to give correct information about inversions and the notes in the chords. It was laughably wrong over and over again.
It would misidentify chords. It would claim a chord had the basic structure of a triad (root, third, fifth) while giving me a chord shape that actually contained the root twice plus a third, and call that a second inversion. Or it would give incorrect fret/note information.
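For anyone who wants to check this kind of output themselves: the rules are simple enough to encode in a few lines. Here is a minimal sketch (my own illustration, not anything the model produced) that builds a major triad and its inversions, which makes it obvious why a shape with two roots and a third can't be a second inversion (a second inversion has the fifth in the bass):

```python
# Pitch classes, one octave, starting from C.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_triad(root):
    """Root, major third (4 semitones up), perfect fifth (7 semitones up)."""
    i = NOTES.index(root)
    return [NOTES[i], NOTES[(i + 4) % 12], NOTES[(i + 7) % 12]]

def inversion(triad, n):
    """n=0: root position; n=1: first inversion (third in bass);
    n=2: second inversion (fifth in bass)."""
    return triad[n:] + triad[:n]

print(major_triad("C"))                 # ['C', 'E', 'G']
print(inversion(major_triad("C"), 2))   # ['G', 'C', 'E'] -- fifth in the bass
```

A C major shape voiced as root-root-third contains no fifth at all, so it isn't any inversion of the triad, let alone the second.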
If I didn't know theory and how intervals work on a guitar, I would have been pretty screwed.
As it was, I wasted half an hour and never got anything usable.
I'm not saying the technology isn't fairly amazing, but, like, don't believe the hype.