Why wouldn't it?
I have used agentic coding tools to solve problems that have literally never been solved before, and it was the AI, not me, that came up with the answer.
If you look under the hood, the multi-layer perceptrons interleaved with the attention layers of the LLM are able to encode quite complex world models, derived from compressing the training set in a way that is formally as powerful as reasoning. These compressed world models are accessible when prompted correctly, and they can express genuinely new and innovative thoughts NOT in the training set.
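To be concrete about where those MLPs sit: a minimal sketch of one transformer block, in PyTorch-ish form (dimensions and layer choices are made up for illustration, not any particular model's architecture):

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One block: attention mixes information across positions,
    then a per-token MLP (the "FFN") transforms each position.
    The MLPs are where much of the compressed knowledge is thought to live."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # attention sublayer, with the residual connection
        h = self.ln1(x)
        a, _ = self.attn(h, h, h)
        x = x + a
        # MLP sublayer, applied independently at every position
        return x + self.mlp(self.ln2(x))

x = torch.randn(1, 16, 512)         # (batch, sequence, d_model)
print(TransformerBlock()(x).shape)  # torch.Size([1, 16, 512])
```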
Would you show us? Genuinely asking
Ask the best available models -- emphasis on models, plural -- for help designing the text editor at a structural rather than functional level first, being specific about what you want and emphasizing component-level tests wherever possible, and only then follow up with actual code generation. You'll get much better results.
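To make "component-level tests" concrete: something like the below is the kind of contract I'd pin down with the model before letting it generate editor internals. (GapBuffer is a toy stand-in I made up for this comment, not code from any real editor.)

```python
import unittest

class GapBuffer:
    """Toy editor buffer: insert/delete at a movable cursor."""
    def __init__(self):
        self.left, self.right = [], []

    def insert(self, ch):
        self.left.append(ch)

    def move_to(self, pos):
        # shuffle characters across the gap until the cursor is at pos
        while len(self.left) > pos:
            self.right.append(self.left.pop())
        while len(self.left) < pos and self.right:
            self.left.append(self.right.pop())

    def delete(self):
        if self.left:
            self.left.pop()

    def text(self):
        return "".join(self.left) + "".join(reversed(self.right))

class GapBufferTest(unittest.TestCase):
    def test_insert_at_cursor(self):
        b = GapBuffer()
        for ch in "helo":
            b.insert(ch)
        b.move_to(3)   # cursor between "hel" and "o"
        b.insert("l")
        self.assertEqual(b.text(), "hello")

    def test_delete_before_cursor(self):
        b = GapBuffer()
        for ch in "hix":
            b.insert(ch)
        b.delete()
        self.assertEqual(b.text(), "hi")

if __name__ == "__main__":
    unittest.main()
```

Agree on a handful of tests like these per component with the model, then let it fill in implementations bottom-up against them.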
Obviously no model is going to one-shot something like a full text editor, but there's an ocean of difference between defining vibe coding as prompting "Make me a text editor" versus spending days/weeks going back and forth on architecture and implementation with a model while it's implementing things bottom-up.
Both seem like common definitions of the term, but only one of them will _actually_ work here.
It has now happened a couple of times that these models pop out novel results. In computational chemistry, machine-learned potentials trained with transformer models have already resulted in publishable new chemistry. Those papers aren't out yet, but expect them within a year.