Hi HN,
I built this because I was frustrated with how difficult it is to get local LLMs to follow specific instructions without being overly "helpful". For example, when I asked a model to translate a request that happened to ask for code, it would just write the code instead of translating it.
So, I created Synt-E, a simple protocol to turn natural language into dense, unambiguous key:value commands. The goal is to reduce token usage, lower latency, and make interactions with AI agents more reliable and testable.
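To give a flavor of the idea, here is a hypothetical compiled command. The field names and separator are illustrative only, not the protocol's actual schema:

    # Natural language: "Translate the following paragraph into French, keep it formal."
    task:translate; src_lang:en; tgt_lang:fr; register:formal; input:<paragraph>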
The core of the project is a Python script that uses a local LLM (via Ollama) as a "compiler". The most interesting finding was that "rawer" models like GPT-OSS followed the system prompt far more faithfully than heavily instruction-tuned models like Llama 3.
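For anyone curious about the shape of the compiler step, here is a minimal sketch using the ollama Python client. The system prompt and default model name are placeholders of mine, not the ones in the repo:

    import ollama

    # System prompt that constrains the model to emit only key:value commands.
    SYSTEM_PROMPT = (
        "You are a compiler. Convert the user's request into a single line of "
        "key:value pairs separated by semicolons. Output nothing else."
    )

    def compile_request(text: str, model: str = "gpt-oss") -> str:
        """Compile a natural-language request into a dense command string."""
        response = ollama.chat(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
            ],
        )
        return response["message"]["content"].strip()

    print(compile_request("Translate this paragraph into French, formal tone."))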
It's all open-source and runs 100% locally. I'd love to hear your feedback.
The GitHub link is the main post URL.