
469 points samuelstros | 3 comments
gervwyk ◴[] No.44998759
We’re considering building a coding agent for Lowdefy[1], a framework that lets you build web apps with YAML config.

For those who’ve built coding agents: do you think LLMs are better suited for generating structured config vs. raw code?

My theory is that agents producing config validated against a YAML/JSON schema could be more reliable than raw code generation. The output is constrained, easier to validate, and when it breaks, you can actually debug it.
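The "easier to validate" claim can be made concrete. A minimal sketch, assuming a hypothetical page schema (not Lowdefy's real one): check an LLM-produced config dict against required keys and types before accepting it, so a malformed response fails loudly instead of silently breaking the app.

```python
# Minimal validation sketch. The schema maps required keys to expected types;
# this is a hypothetical stand-in, not Lowdefy's actual config schema.
def validate_config(config, schema):
    """Return a list of error strings; an empty list means the config is valid."""
    errors = []
    for key, expected_type in schema.items():
        if key not in config:
            errors.append(f"missing required key: {key}")
        elif not isinstance(config[key], expected_type):
            errors.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(config[key]).__name__}"
            )
    return errors

# Hypothetical page schema: every page needs an id (str) and blocks (list).
PAGE_SCHEMA = {"id": str, "blocks": list}

good = {"id": "home", "blocks": [{"type": "Button"}]}
bad = {"id": "home", "blocks": "oops"}

print(validate_config(good, PAGE_SCHEMA))  # []
print(validate_config(bad, PAGE_SCHEMA))   # ['blocks: expected list, got str']
```

In practice you would parse the model's YAML first (e.g. with a safe YAML loader) and run a real schema validator, but the shape of the check is the same.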

I keep seeing people create apps with vibe-coding tools but then get stuck when they need to modify the generated code.

Curious if others think config-based approaches are more practical for AI-assisted development.

[1] https://github.com/lowdefy/lowdefy

1. hamandcheese ◴[] No.44998886
> easier to validate

This is essential to productivity for humans and LLMs alike. The more reliable your edit/test loop, the better your results will be. It doesn't matter if it's compiling code, validating YAML, or anything else.
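The edit/test loop being described can be sketched as generate, validate, feed the errors back, retry. Everything here is hypothetical scaffolding: `call_llm` stands in for a real model call, and the validator is a toy.

```python
# Sketch of a generate/validate/retry loop. `call_llm` is a hypothetical
# stand-in for a real model call; here it is stubbed with canned responses.
def generate_valid_config(prompt, validate, call_llm, max_tries=3):
    feedback = ""
    for _ in range(max_tries):
        output = call_llm(prompt + feedback)
        errors = validate(output)
        if not errors:
            return output
        # Append the validator's errors so the next attempt can correct them.
        feedback = "\nFix these errors: " + "; ".join(errors)
    raise ValueError("no valid config after retries: " + "; ".join(errors))

# Stub LLM that returns a bad config first, then a fixed one.
attempts = iter([{"id": 1}, {"id": "home"}])
result = generate_valid_config(
    "make a page",
    validate=lambda cfg: [] if isinstance(cfg.get("id"), str) else ["id must be a string"],
    call_llm=lambda p: next(attempts),
)
print(result)  # {'id': 'home'}
```

The point of the comment holds regardless of what the validator is: a compiler, a test suite, or a schema check all slot into the same loop.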

To your broader question. People have been trying to crack the low-code nut for ages. I don't think it's solvable. Either you make something overly restrictive, or you are inventing a very bad programming language which is doomed to fail because professional coders will never use it.

2. gervwyk ◴[] No.44998951
Good point. I'm making the assumption that if the LLM has a more limited feature space to produce as output, then the output is more predictable, and thus changes are faster to comprehend. It's similar to when devs use popular libraries: there is a well-known abstraction, so there is less "new" code to comprehend; I see familiar functions, which makes the code predictable to me.
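One way to picture the "limited feature space" idea: if the config language only admits a fixed vocabulary of block types, anything outside that vocabulary is rejected outright. The block type names here are hypothetical, just to illustrate the check.

```python
# Sketch of a closed vocabulary check. With a fixed set of legal block types,
# any novel construct the model invents is caught immediately.
ALLOWED_BLOCK_TYPES = {"Button", "TextInput", "Card"}

def check_blocks(blocks):
    """Return an error string for each block whose type is not in the vocabulary."""
    return [
        f"unknown block type: {b.get('type')}"
        for b in blocks
        if b.get("type") not in ALLOWED_BLOCK_TYPES
    ]

print(check_blocks([{"type": "Button"}, {"type": "Card"}]))      # []
print(check_blocks([{"type": "Button"}, {"type": "Hologram"}]))  # ['unknown block type: Hologram']
```

This is the trade-off the thread is circling: the closed vocabulary is what makes output predictable, and also what makes the language restrictive.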
3. hamandcheese ◴[] No.45023906
I think we are essentially describing the same thing. You just want to achieve it by constraining the output space at a significantly higher level (a YAML schema defines the output space instead of a compiler and/or test suite).

I still think you'll be at a significant disadvantage, since the LLM has been trained on millions of lines of all the mainstream languages, and zero lines of gervwyk's funny YAML lang.