That's a really interesting application of LLMs and manim.
My problem with it, as with LLMs in general, is that I can't fully trust the output. That isn't an issue for topics you already know well. But those 3b1b-style tutorials are meant to build an understanding, or mental model, of a concept for people who don't know it well yet. I'm afraid that if the output is partially wrong, watching such a video could actually harm understanding of the concept, because a 'greenfield' brain may latch onto whichever pieces happened to seem plausible, including the wrong ones.