Oh I agree, passing off llm-generated text as human-written is at least deceptive and probably plagiarism. It also skips some important work, if we're talking about anyone trying to detect it at all, usually in an education context.
The student doesn't have to do the research or studying the task was meant to require; they just need a sample of text the model can use to reproduce their style and structure, creating the impression of hand-written work, so the original task can be avoided. You need at least one corpus of your own writing for this to work, or an adequate substitute. A grader could still reject submissions on their content, but we're specifically talking about the llm smell.
I was talking about the task of detecting llm-generated text, which is incredibly hard if any effort is made to disguise it, while some people have the impression that it's trivially easy. That mismatch leads to unfair outcomes while giving false confidence to e.g. teachers that llms are adequately accounted for.