At what point does LLM output become more expensive than search? For example, the author made protocollie. Could the LLM gather all similar apps that are already written to solve the prompt that birthed it, and then simply provide THOSE apps instead of figuring out how to code it anew? Sounds like this could be a moat that only hyperscalers could implement, and it would reduce their energy requirements drastically.
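
To make the idea concrete, here's a minimal sketch of what "provide THOSE apps instead" might look like: embed the incoming prompt, search an index of already-built apps, and return the nearest matches rather than generating new code. Everything here is hypothetical (the `embed` stub, the toy corpus, the function names); a real system would use an actual embedding model and a large index of existing software, which is the part only a hyperscaler could plausibly assemble.

```python
import numpy as np

# Hypothetical corpus of existing apps; a real index would cover app stores,
# GitHub, package registries, etc.
EXISTING_APPS = [
    {"name": "app-a", "desc": "documentation generator for developer tooling"},
    {"name": "app-b", "desc": "static site builder for API reference docs"},
    {"name": "app-c", "desc": "todo list with calendar sync"},
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real deployment would call an embedding model,
    which is far cheaper per query than full code generation."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def find_existing_apps(prompt: str, top_k: int = 3) -> list[dict]:
    """Return the existing apps whose descriptions are closest to the prompt."""
    q = embed(prompt)
    scored = []
    for app in EXISTING_APPS:
        v = embed(app["desc"])
        cosine = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((cosine, app))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [app for _, app in scored[:top_k]]

# Serve existing matches instead of generating a new app from scratch.
for app in find_existing_apps("build me a docs site for my project"):
    print(app["name"], "-", app["desc"])
```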