
310 points by skarat | 4 comments

Things are changing so fast with these vscode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?
dexterlagan ◴[] No.43962306[source]
You need none of these fancy tools if you iterate over specs instead of iterating over code. I explain it all here: https://www.cleverthinkingsoftware.com/spec-first-developmen...
replies(3): >>43962332 #>>43962530 #>>43984206 #
1. WA ◴[] No.43984206[source]
I've been playing around with your suggestion for a day or two now. While I'm intrigued, there are some real-world issues with this approach:

- The same LLM implements the same spec differently each time you start from scratch. This can maybe be mitigated somewhat by adjusting the temperature slider (see the first sketch after this list). But generally speaking, the same spec won't give the same result unless you are very specific.

- Same if you use different LLMs: the same spec can give entirely different results depending on the model.

- This can probably be mitigated somewhat by getting more specific in the spec, but at some point the spec is so specific that it becomes the code itself. Unless of course you don't care that much about the details. But if you don't, you get a slightly different app every time you implement from scratch.

- Gemini 2.5 Pro has "reasoning" capabilities and introduces a lot of "thinking" tokens into the context. Let's say you start with a single-line spec and iterate from there. Gemini will give you a more detailed spec based on its thinking process. But if you then take the new thinking-process spec as the starting point for the next iteration, you get even more thinking. In short, with reasoning models the spec gets automatically expanded by way of "thinking" (see the second sketch after this list).

- Produced code can have small bugs, but they are not really worth putting in the spec, because they are an implementation detail.
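
For what it's worth, here is a minimal sketch of the temperature mitigation from the first point, using the OpenAI Python SDK; the model name and prompts are placeholders, and even temperature 0 plus a fixed seed is only best-effort deterministic:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def implement(spec: str) -> str:
        # temperature=0 and a fixed seed push the model toward
        # repeatable output; best-effort, not guaranteed.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            temperature=0,
            seed=42,
            messages=[
                {"role": "system", "content": "Implement exactly what the spec says."},
                {"role": "user", "content": spec},
            ],
        )
        return resp.choices[0].message.content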
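
And for the reasoning-token point: assuming the model interleaves its reasoning as <think>...</think> tags (some open reasoning models do; Gemini returns thoughts separately, so this is an illustration, not its actual API), you can strip the thinking out before treating the output as the next spec:

    import re

    def strip_thinking(text: str) -> str:
        # Drop <think>...</think> blocks so the model's reasoning
        # does not get folded into the next iteration of the spec.
        return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()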

I'll keep experimenting with it, but I don't think this is the holy grail of AI-assisted coding.

replies(2): >>43993024 #>>44012501 #
2. dexterlagan ◴[] No.43993024[source]
Thanks for the feedback! Agreed, the one problem with the approach is reproducibility. It can be mitigated by going to temperature 0 and detailing the specs further. The one method that nearly completely solves this problem is the hybrid approach: write detailed specs, feed them to the LLM, and get an MVP (or module, etc.); fix any and all issues found with the MVP and implement missing/new features; then ask the LLM to update the specs to take the changes into account, also recording the lessons learned, which maximizes reproducibility. Treat the latest specs + code package as a checkpoint you can always resume work from. A rough sketch of that loop follows.
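
Something like this, where every helper is hypothetical and llm() stands in for whatever chat-completion call you use:

    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model of choice

    def iterate(spec: str) -> tuple[str, str]:
        code = llm("Implement this spec exactly:\n" + spec)
        # ... manual step: fix bugs, add/adjust features in `code` ...
        spec = llm("Update this spec to match the code below and record "
                   "the lessons learned.\nSPEC:\n" + spec + "\nCODE:\n" + code)
        return spec, code  # commit both together as the resumable checkpoint
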
replies(1): >>43993989 #
3. WA ◴[] No.43993989[source]
I'll give it a try, thanks.

Edit: Do you use reasoning models that introduce way more tokens into the context at all, or do you prefer simpler models?

4. DANmode ◴[] No.44012501[source]
Why would you expect, or even need, the exact same methods used every time, when you do not expect this from two human devs (junior OR senior) both given the same task?