
abdullin:
Tight feedback loops are key to working productively with software. I see that in codebases of up to 700k lines of code (legacy 30-year-old 4GL ERP systems).

The best part is that AI-driven systems are fine with running even tighter loops than a sane human would tolerate.

E.g. running the full linting, testing, and E2E/simulation suite after any minor change, or generating four versions of a PR for the same task so that the human can just pick the best one.
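
A minimal sketch of such a loop, assuming a Python repo checked with ruff and pytest (the tool choices and paths are illustrative, not anything prescribed above):

    # watch_loop.py -- minimal sketch of a "run everything after any change" loop.
    # Assumes a Python repo linted with ruff and tested with pytest; swap in your own commands.
    import subprocess
    import sys
    import time
    from pathlib import Path

    CHECKS = [
        ["ruff", "check", "."],          # lint
        ["pytest", "-q"],                # unit tests
        ["pytest", "-q", "tests/e2e"],   # E2E/simulation suite (hypothetical path)
    ]

    def newest_mtime(root: str = ".") -> float:
        """Most recent modification time of any source file under root."""
        return max(p.stat().st_mtime for p in Path(root).rglob("*.py"))

    def run_checks() -> bool:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
                return False
        return True

    if __name__ == "__main__":
        last = 0.0
        while True:                      # a loop no sane human would sit through; an agent will
            stamp = newest_mtime()
            if stamp > last:
                last = stamp
                run_checks()
            time.sleep(1)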

OvbiousError:
I don't think the human is the problem here; it's the time it takes to run the full test suite.
londons_explore:
The full test suite is probably tens of thousands of tests.

But AI will do a pretty decent job of telling you which tests are most likely to fail on a given PR. Just run those, then commit. That cuts your test time from hours down to seconds.
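
A sketch of that selection step; the proposal is an AI ranker, so the path-overlap heuristic below is just a stand-in for the model, and every path and cutoff is hypothetical:

    # select_tests.py -- sketch of "run only the tests likely to fail on this PR".
    # A trivial name-overlap heuristic stands in for the proposed AI ranker.
    import subprocess
    from pathlib import Path

    def changed_files(base: str = "origin/main") -> list[str]:
        out = subprocess.run(
            ["git", "diff", "--name-only", base],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line.endswith(".py")]

    def rank_tests(changed: list[str], k: int = 50) -> list[str]:
        """Score each test file by name overlap with the changed files.
        A real system would use a model trained on historical failures."""
        tokens = {Path(f).stem for f in changed}
        tests = list(Path("tests").rglob("test_*.py"))
        scored = sorted(tests, key=lambda t: -sum(tok in t.stem for tok in tokens))
        return [str(t) for t in scored[:k]]

    if __name__ == "__main__":
        picked = rank_tests(changed_files())
        # Run just the top-ranked tests; the full suite runs later, out of band.
        subprocess.run(["pytest", "-q", *picked], check=False)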

Then run the full test suite only periodically, and automatically bisect to find the cause of any regressions.
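
The bisecting part maps almost directly onto git bisect run, which replays a command across commits and reports the first bad one (exit code 0 means good, non-zero means bad). A sketch, assuming a hypothetical last-known-good tag and failing test:

    # bisect_regression.py -- sketch of auto-bisecting a regression found by the
    # periodic full run. The good ref and the failing test path are assumptions.
    import subprocess

    GOOD = "nightly-2024-06-01"  # hypothetical last tag where the full suite passed
    FAILING_TEST = "tests/test_invoicing.py::test_rounding"  # hypothetical failure

    def sh(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        sh("git", "bisect", "start", "HEAD", GOOD)
        sh("git", "bisect", "run", "pytest", "-q", FAILING_TEST)
        sh("git", "bisect", "reset")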

It also dramatically cuts the compute cost of tests, which in a big codebase can easily add up to the cost of whole engineers.

tele_ski:
It's an interesting idea, but it's reactive, and bisecting and re-testing those regressions could cause big delays. There's the old saying that the sooner a bug is found, the cheaper it is to fix; it seems weird to intentionally push the discovery of side-effect bugs later in the process for the sake of faster CI runs. Maybe AI will get there, but right now it seems too aggressive to me. But yeah, put the automation slider where you're comfortable.