
Be Aware of the Makefile Effect

(blog.yossarian.net)
431 points | thunderbong | 1 comment
mattbillenstein No.42663931
I think Makefile is maybe the wrong analogy - the problem most people have with makefiles is that they write so few of them: the general idea of what make does is at hand, but the muscle memory for writing one from scratch is not.

But, point taken - I've seen so much code copy-pasta'd from the web, and it often carries a bunch of dead stuff that's never actually used. A good practice here is to keep deleting things until something breaks, then put that last bit back... And delete as much as possible - certainly everything you're not using at the moment.
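For what it's worth, the "from scratch" version is pretty small - something like this hypothetical three-rule Makefile for a tiny C program (names and flags are placeholders, and recipe lines must start with a hard tab):

  # Hypothetical minimal Makefile for a small C program (illustration only).
  CC     := cc
  CFLAGS := -Wall -O2

  # Link the binary from its object files.
  app: main.o util.o
  	$(CC) $(CFLAGS) -o $@ $^

  # Pattern rule: compile each .c into the matching .o.
  %.o: %.c
  	$(CC) $(CFLAGS) -c -o $@ $<

  # Declare clean as phony so it runs even if a file named "clean" exists.
  .PHONY: clean
  clean:
  	rm -f app *.o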

replies(1): >>42664075 #
sumanthvepa No.42664075
This is exactly the problem I face with many tools: Makefiles, KVM setups, Docker configurations, CI/CD pipelines. My solution so far has been to create a separate repository with all my notes, shell scripts, example programs, etc. for these tools, libraries, or frameworks. Every time I have to use one of these tools, I refer to my notes to refresh my memory, and if I learn something new in the process, I update the notes. I can even point an LLM at it now and ask it questions.

The repository is personal, and contains info on tools that are publicly available.

I keep organisation-specific knowledge in a similar but separate repo, which I discard when my tenure with a client or employer ends.

replies(1): >>42664199 #
parasti No.42664199
What if your client comes back?

On a more practical note, what structure, formats and tools do you use that enable you to feed it to an LLM?

replies(1): >>42664338 #
sumanthvepa No.42664338
I'm usually contractually obligated to destroy all client IP that I may possess at the end of an engagement. My contracts usually specify that I will retain engagement-specific information for a period of six months beyond the end of the contract. If the client comes back within that time, I'll have the prior context; otherwise it's gone. Occasionally a client does come back after a year or two, but by then most of the knowledge would have been obsolete anyway.

As for LLMs: I have a couple of Python scripts that concatenate files in the repo into a context that I pass to Google's Gemini API or Google AI Studio, mostly the latter. It can get expensive in some situations. I don't usually load the whole repository. And I keep the chat context around so I can keep asking questions around the same topic.
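For anyone wanting to try the same thing, a minimal sketch of such a concatenation script might look like the following. It assumes the google-generativeai Python package; the repo path, file-extension filter, and model name are placeholders, not the commenter's actual setup.

  # Hypothetical sketch: concatenate note files into one context string
  # and ask the Gemini API a question about them.
  import pathlib
  import google.generativeai as genai

  NOTES_DIR = pathlib.Path("~/notes-repo").expanduser()  # placeholder path
  EXTENSIONS = {".md", ".sh", ".py", ".txt"}              # placeholder filter

  def build_context(root: pathlib.Path) -> str:
      """Concatenate matching files, each prefixed with its relative path."""
      parts = []
      for path in sorted(root.rglob("*")):
          if path.is_file() and path.suffix in EXTENSIONS:
              body = path.read_text(errors="ignore")
              parts.append(f"--- {path.relative_to(root)} ---\n{body}")
      return "\n\n".join(parts)

  genai.configure(api_key="YOUR_API_KEY")          # placeholder key
  model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name
  question = "How do I set up a bridged network for a KVM guest?"  # example query
  response = model.generate_content(build_context(NOTES_DIR) + "\n\nQuestion: " + question)
  print(response.text)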