
219 points crazylogger | 1 comment
erlapso No.42736336
Super interesting approach! We've been working on the opposite: always getting your unit tests written with every PR. The idea is that you don't have to bother writing or running them; you just get them delivered in your GitHub repo. You can check it out here: https://www.codebeaver.ai
lolinder No.42737625
Test driven development is sequenced the way it is for a reason. Getting a failing test first builds confidence that the test is, you know, actually testing something. And the process of writing the tests is often where the largest amount of reasoning about design choices takes place.

Having an LLM generate the tests after you've already written the code they're supposed to cover is super counterproductive. Who knows whether those tests actually test anything?

I know this gets into "I wanted AI to do my laundry, not my art" territory, but a far more rational division of labor is for the humans to write the tests (maybe with the assistance of an autocomplete model) and give those as context for the AI. Humans are way better at thinking of edge cases and design constraints than the models are at this point in the game.