
176 points marv1nnnnn | 4 comments
gk1 ◴[] No.43995539[source]
92% reduction is amazing. I often write product marketing materials for devtool companies and load llms.txt into whatever AI I’m using to get accurate details and even example code snippets. But that instantly adds 60k+ tokens which, at least in Google AI Studio, annoyingly slows things down. I’ll be trying this.
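
As a rough illustration of what that context cost means, here is a minimal token-counting sketch (it assumes the tiktoken package and a local llms.txt file; the file name and the 92% figure come from the comments above, everything else is illustrative, not the tool's actual pipeline):

    # Minimal sketch: estimate how many tokens a full llms.txt costs,
    # and what a 92% reduction would leave (assumes `pip install tiktoken`).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    with open("llms.txt", encoding="utf-8") as f:
        full_text = f.read()

    full_tokens = len(enc.encode(full_text))
    print(f"full llms.txt: ~{full_tokens} tokens")        # often 60k+ for large devtool docs

    # A 92% reduction keeps roughly 8% of the original token count.
    reduced_tokens = int(full_tokens * 0.08)
    print(f"after a 92% reduction: ~{reduced_tokens} tokens")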

Edit: After a longer look, this needs more polish. In addition to the key question raised by someone else about quality, there are signs of rushed work here. For example, the critical llm_min_guideline.md file, which tells the LLM how to interpret the compressed version, was lazily copy-pasted from an LLM response without even removing the LLM's commentary:

"You are absolutely right! My apologies. I was focused on refining the detail of each section and overlooked that key change in your pipeline: the Glossary (G) section is no longer part of the final file..."

Doesn't exactly instill confidence.

Really nice idea. I hope you keep going with this as it would be a very useful utility.

replies(1): >>43996185 #
1. marv1nnnnn ◴[] No.43996185[source]
Oof, you nailed it. Thanks for the sharp eyes on llm_min_guideline.md. That's a clear sign of me pushing this out too quickly to get feedback on the core concept, and I didn't give the supporting docs the attention they deserve. My bad. Cleaning that up, and generally adding more polish, is a top priority. Really appreciate you taking the time to look deeper and for the encouragement to keep going. It's very helpful!
replies(1): >>43999971 #
2. ricardobeat ◴[] No.43999971[source]
Wait, are you also using an LLM to respond on Hacker News?
replies(2): >>44000994 #>>44003543 #
3. marv1nnnnn ◴[] No.44000994[source]
haha, is it that obvious? I only let the LLM polish that one. I'm not a native speaker and I was trying to be polite ^-^
4. marci ◴[] No.44003543[source]
Damn... I saw your sentence starting with "Wait", and immediately thought "reasoning LLM?"