I like the idea, but Firecrawl and GPT-4o are quite heavy. I use https://github.com/unclecode/crawl4ai in some projects; it works very well without those dependencies and is modular, so you can use LLMs but don't have to :)
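For example, here's a minimal sketch of crawling a page to markdown with no LLM in the loop, based on crawl4ai's documented AsyncWebCrawler API (check the repo for the exact current signatures):

    import asyncio
    from crawl4ai import AsyncWebCrawler

    async def main():
        # Fetch one page and get LLM-ready markdown -- no model calls involved
        async with AsyncWebCrawler() as crawler:
            result = await crawler.arun(url="https://example.com")
            print(result.markdown)

    asyncio.run(main())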
I just launched llms.txt Generator, a tool that transforms any website into a clean, structured text file optimized for feeding to LLMs. You can learn more about the standard at https://llmstxt.org.
Here’s how it works under the hood:
1. We use Firecrawl, our open-source scraper, to fetch the full site, handling JavaScript-heavy pages and complex structures.

2. The markdown content is parsed, and the title and description are extracted using GPT-4o-mini.

3. Everything is combined into a lightweight llms.txt file that you can paste into any LLM. (A rough sketch of these steps follows below.)
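To make the pipeline concrete, here is a simplified sketch of the three steps. This is not our production code: the Firecrawl scrape_url call is based on the public Python SDK, the response shape and the extraction prompt are assumptions, and the output layout is a minimal stand-in for the real llms.txt formatting.

    from firecrawl import FirecrawlApp
    from openai import OpenAI

    firecrawl = FirecrawlApp(api_key="fc-...")
    openai_client = OpenAI()

    # Step 1: fetch the page as markdown (Firecrawl renders JS-heavy pages).
    # The dict-style 'markdown' key is an assumption; adjust to your SDK version.
    page = firecrawl.scrape_url("https://example.com")
    markdown = page["markdown"]

    # Step 2: extract a title and description with GPT-4o-mini
    # (hypothetical prompt, truncated input to stay within context limits)
    resp = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Return 'title: ...' and 'description: ...' "
                       "for this page:\n\n" + markdown[:4000],
        }],
    )
    header = resp.choices[0].message.content

    # Step 3: combine into a lightweight llms.txt-style file
    with open("llms.txt", "w") as f:
        f.write(header + "\n\n" + markdown)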
Let me know what you think!