
214 points by Brajeshwar | 1 comment
marcyb5st No.45087065
In terms of LOC, maybe; in terms of importance, I think it is much less. At least that's how I use LLMs.

While I understand that <Enter model here> might produce the meaty bits as well, I believe that having a truck factor of basically zero (since no one REALLY understands the code) is a recipe for disaster and, I dare say, for poor long-term maintainability of a code base.

I feel that every team needs someone with that level of understanding to fix non-trivial issues.

However, I happily use the LLM to create all the scaffolding, test fixtures, ... because that frees up mental energy I can spend elsewhere.

fergie No.45090543
> test fixtures

I'm curious: how does the AI know what you want?

marcyb5st No.45091100
I use CodeCompanion on neovim. My workflow for test fixtures is basically:

""" Given these files: file1, file2, ... (these are pulled entirely into the LLM context)

Create a test fixture by creating a type that implements the trait A and should use an in memory SQLite DB, another one that implements Trait B, ...

"""

Of course there is a bit of back and forth, but I find that using Interfaces/Traits/ABCs extensively makes LLMs perform better at these tasks (which I believe is a nice side effect of having more testable code to begin with).

However, wiring things up with IoC frameworks is a bit hit and miss, to be honest, so I often still have to do those parts manually.
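
By "manually" I mean plain constructor injection at a composition root rather than framework-generated wiring. A rough sketch, reusing the hypothetical UserStore trait and InMemoryUserStore fixture from above:

    // Hand-written composition root: constructor injection with a trait
    // object, instead of letting an IoC framework wire the graph.
    struct UserService {
        store: Box<dyn UserStore>,
    }

    impl UserService {
        fn new(store: Box<dyn UserStore>) -> Self {
            Self { store }
        }

        fn register(&self, name: &str) -> rusqlite::Result<()> {
            self.store.add_user(name)
        }
    }

    fn main() -> rusqlite::Result<()> {
        // Swap in the in-memory fixture for tests, or a real
        // database-backed implementation in production.
        let service = UserService::new(Box::new(InMemoryUserStore::new()?));
        service.register("alice")
    }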