192 points imasl42 | 2 comments
its-kostya:
Code review is part of the job, but one of the least enjoyable parts. Developers like _writing_ code, and that's what gives the most job satisfaction. AI tools are helpful, but they inherently increase the amount of code we have to review, and with more scrutiny than code from my colleagues, because of how unpredictable - yet convincing - they can be. Why did we create tools that do the fun part and increase the non-fun part? Where are the "code-review" agents at?
jmcodes:
Maybe I'm weird, but I don't actually enjoy the act of _writing_ code. I enjoy problem solving and creating something. I enjoy decomposing systems and putting them back together in a better state, but manually typing out code isn't something I enjoy.

When I use an LLM to code I feel like I can go from idea to something I can work with in much less time than I would have normally.

Our codebase is more type-safe and better documented, and it's much easier to refactor messy code into the intended architecture.
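To give a flavor of the kind of refactor I mean, here's a hypothetical sketch (TypeScript, made-up names, not from our actual codebase):

    // Before: an untyped payload that's easy to misuse
    function applyDiscount(order: any): any {
      return { ...order, total: order.total * (1 - order.discount) };
    }

    // After: the shape is explicit, so misuse fails at compile time
    interface Order {
      id: string;
      total: number;    // in cents
      discount: number; // a fraction in [0, 1]
    }

    function applyDiscountTyped(order: Order): Order {
      return { ...order, total: Math.round(order.total * (1 - order.discount)) };
    }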

Maybe I just have lower expectations of what these things can do, but I don't expect them to problem-solve. I expect them to be decent at gathering relevant context for me, at taking existing patterns and re-applying them to a different situation, and at letting me talk shit to them while I figure out what actually needs to be done.
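To make "re-applying existing patterns" concrete, another hypothetical sketch with made-up names: given one hand-written repository interface, it can stamp out the same shape for a new entity:

    interface User  { id: string; name: string }
    interface Order { id: string; total: number }

    // The pattern, written by hand once:
    interface UserRepository {
      findById(id: string): Promise<User | null>;
      save(entity: User): Promise<void>;
    }

    // ...and mechanically re-applied to a new entity:
    interface OrderRepository {
      findById(id: string): Promise<Order | null>;
      save(entity: Order): Promise<void>;
    }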

I especially expect them to let me be lazy and not have to manually type out all of that code across different files when they can just generate it in a few seconds and I can review each change as it happens.

skydhash:
Code is the ultimate fact-checker: what you write is what gets done. Specs are well-written wishes.
jmcodes:
Yes, hence tests, linters, and actually verifying the changes it is making. You can't trust anything the LLM writes. It will hallucinate or misunderstand something at some point if your task gets long. But that's not the point; I'm not asking it to solve things for me.
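As a minimal sketch of what "hence tests" looks like in practice (hypothetical function, Node's built-in test runner, TypeScript):

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical function an LLM has just refactored
    function applyDiscount(total: number, discount: number): number {
      return Math.round(total * (1 - discount));
    }

    test('discount is a fraction, not a percentage', () => {
      // Fails loudly if the refactor silently flipped the semantics
      assert.equal(applyDiscount(1000, 0.1), 900);
    });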

I'm using it to build my own understanding of the problem faster, to pin down what needs to get done, and then to execute the rote steps I've already figured out.

Sometimes I get lucky and the feature is well defined enough just from the context-gathering step that the implementation is literally just me hitting the enter key as I read the edits it wants to make.

Sometimes I have to interrupt it and guide it a bit more as it works.

Sometimes I realize I misunderstood something as it's thinking about what it needs to do.

One-shotting or asking the LLM to think for you is the worst way to use these tools.