My take is, if she used AI to generate that, she didn't use a very good one. I don't think ChatGPT would make the grammar and clarity mistakes that you see in the image text.
I see this:
"should be exposed to many of these forms and models to strengthen understanding" - much better as "should be exposed to as many of these forms and models as possible to strengthen their understanding"
"it is mentioned that students should have experiencing understanding the..." - plainly wrong, better would be "it is mentioned that students should have experience understanding the..."
"time with initial gird models" -> "time with initial grid models"
And there are other lines that could be improved.
My opinion is that the only workable solution to this problem is to let AI detectors flag work, but have a flag trigger nothing more than a face-to-face meeting between the student and the professor, where the student is required to show, through discussion of the work, that they understand it well enough to have written it.
However! Often the professor is too busy, or isn't sharp enough, to review the student's writing carefully enough to determine whether the student really wrote it. What to do? Why, of course: invent AI systems that are good enough at interviewing students to tell whether they really wrote a piece of work. Yeah, you laugh, but it will happen someday soon enough.