Not while they need even the slightest amount of supervision/review.
There is a real return on investment in co-workers over time, as they get better (most of the time).
Now, I don't mind engaging in a bit of Sisyphean endeavor using an LLM, but remember that the gods were kind enough to give him just one boulder, not 10 juggling balls.
I say that having not tried this workflow at all, so what do I know? I mostly use Claude Code to bounce questions off and to review my work, because I still haven't had much luck getting it to write code that is complete and to my liking.
I can pay full attention to the change I'm making right now, while having a couple of coding agents churning in the background answering questions like:
"How can I resolve all of the warnings in this test run?"
Or
"Which files do I need to change when working on issue #325?"
I also really like the "Send out a scout" pattern described in https://sketch.dev/blog/seven-prompting-habits: send an agent to implement a complex feature with no intention of actually using its code, aiming instead to learn which files and tests it updated, since that forms a useful early map for the actual work.
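A minimal sketch of harvesting that map, assuming the scout agent committed its throwaway attempt to a branch (the branch name, file paths, and repo setup here are all hypothetical, simulated in a temp directory): you read `git diff --stat`, not the code itself.

```shell
set -e
# Simulate a repo where a scout agent has worked on a throwaway branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "scout@example.com"
git config user.name "Scout"
git commit -q --allow-empty -m "base"
base=$(git rev-parse HEAD)

# Hypothetical scout branch: the agent's attempt at the feature.
git checkout -q -b scout/feature-attempt
mkdir -p src tests
echo "stub" > src/parser.py
echo "stub" > tests/test_parser.py
git add .
git commit -q -m "scout attempt"

# The file list, not the code, is the deliverable:
git diff --stat "$base"
```

The `--stat` summary is the "early map": which modules and tests the change touches, which you then use to plan the real implementation before discarding the branch.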
This is an advantage of async systems like Jules/Copilot, where you can send off a request and get on with something else. I also wonder whether CLI agents respond quickly enough that you end up wasting time staring at the loading bar instead, since context switching between replies is even more expensive.