Synt-E

(github.com)
1 point by NeuroTinkerLab | 3 comments
1. NeuroTinkerLab ◴[] No.46194790[source]
Natural language is costing you money. I created Synt-E, a simple protocol that cuts LLM token usage by up to 80% by replacing natural-language prompts with structured commands. Run it locally with @ollama_ai. GitHub: https://github.com/NeuroTinkerLab/synt-e-project #AI #DevTools #PromptEngineering #Ollama #LLM
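For a purely hypothetical illustration of the idea (this key:value shape is my own invention, not the actual Synt-E syntax; the real grammar is in the repo), compare a verbose natural-language request with a dense structured equivalent:

    # natural-language prompt (verbose, easy for a model to misread)
    "Could you please translate the following sentence into formal German,
     keep it under 20 words, and give me only the translation?"

    # hypothetical key:value equivalent (dense, unambiguous)
    task:translate | src:en | tgt:de | register:formal | max_words:20 | output:translation_only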
2. NeuroTinkerLab ◴[] No.46195282[source]
Hi HN, I built this because I was frustrated with how difficult it is to get local LLMs to follow specific instructions without being overly "helpful". When I asked a model to translate a request (say, a request to write code) into another form, it would just write the code instead of translating the request.

So I created Synt-E, a simple protocol for turning natural language into dense, unambiguous key:value commands. The goal is to reduce token usage, lower latency, and make interactions with AI agents more reliable and testable.

The core of the project is a Python script that uses a local LLM (via Ollama) to act as a "compiler". The most interesting finding was that "rawer" models like GPT-OSS were far more obedient to the system prompt than heavily instructed models like Llama 3.

It's all open source and runs 100% locally. I'd love to hear your feedback. The GitHub link is the main post URL.
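(Not the author's code.) A minimal sketch of that "compiler" pattern, assuming the official ollama Python client, a made-up system prompt, an invented key:value output shape, and whatever model you have pulled locally:

    # Minimal sketch (not the project's actual script): use a local model via the
    # ollama Python client as a "compiler" that translates a request into one
    # line of key:value pairs. The system prompt and output shape are assumptions.
    import ollama

    SYSTEM_PROMPT = (
        "You are a compiler, not an assistant. Translate the user's request into "
        "a single line of key:value pairs separated by ' | '. Never fulfil the "
        "request itself; only translate it. Output the command line and nothing else."
    )

    def compile_request(text: str, model: str = "gpt-oss") -> str:
        # "gpt-oss" is just one choice (per the comment above); any Ollama model tag works.
        response = ollama.chat(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
            ],
        )
        return response["message"]["content"].strip()

    if __name__ == "__main__":
        print(compile_request("Write me a Python function that reverses a string."))
        # Illustrative output shape: task:write_code | lang:python | goal:reverse_string

The one design point the sketch tries to capture is the obedience problem described above: the system prompt forbids fulfilling the request, so the model is only ever asked to translate it.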