    119 points | lsharkey602 | 11 comments

    reedf1 ◴[] No.44423223[source]
    I think it is possible that the widespread introduction of ChatGPT will cause a brief hiatus in hiring due to the inelasticity of demand. For the sake of argument, imagine that ChatGPT makes your average developer 4x more productive. It will take a while before the expectation becomes that 4x more work is delivered. That 4x more work is scheduled in sprints. That 4x more features are developed. That 4x more projects are sold to clients/users. When the demand eventually catches up (if it exists), the hiring will begin again.
    replies(9): >>44423267 #>>44423282 #>>44423301 #>>44423329 #>>44423440 #>>44423459 #>>44423688 #>>44423878 #>>44424258 #
    1. TSiege ◴[] No.44423440[source]
    I am not asking this as a gotcha, but out of genuine curiosity, for you or other people who find AI is helping them in terms of multiples: What is your workflow like? Where do you lean on AI vs. not? Is it agentic stuff, or tab completion in Cursor?

    I find AI helpful but nowhere near a multiplier in my day-to-day development experience. Converting a CSV to JSON or vice versa, great, but AI writing code for me has been less helpful. Beyond boilerplate, it introduces subtle bugs that are a pain in the ass to deal with. For complicated things, it struggles and does too much, and because I didn't write the code I don't know where the bad spots are. And AI code review often gets hung up on nits and misses real mistakes.
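
    (For illustration, a minimal Python sketch of the kind of one-off CSV-to-JSON conversion mentioned above; the file names are hypothetical.)

        import csv
        import json

        # Read every row into a list of dicts keyed by the header row,
        # then dump the lot as JSON. "data.csv"/"data.json" are made-up names.
        with open("data.csv", newline="") as f:
            rows = list(csv.DictReader(f))

        with open("data.json", "w") as f:
            json.dump(rows, f, indent=2)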

    So what are you doing and what are the resources you'd recommend?

    replies(8): >>44423484 #>>44423651 #>>44423715 #>>44423749 #>>44423843 #>>44423996 #>>44424208 #>>44424679 #
    2. reedf1 ◴[] No.44423484[source]
    4x is a number I pulled out of thin air. I'm not sure I even yet believe there is a net positive effect of using AI on productivity. What I am sure about in my own workflow is that it saves me time writing boilerplate code - it is good at this for me. So I would say it has saved me time in the short term. Now, does not writing this boilerplate slow me down long-term? It's possible: I could forget how to do this myself, some part of my brain could atrophy (as the MIT study suggests). How it affects large teams, systems and the transfer of knowledge is also not clear.
    replies(2): >>44423702 #>>44423716 #
    3. dgfitz ◴[] No.44423702[source]
    I read this sentiment a lot, and it is true for me too as a completely average software engineer.

    Makes it seem like the actual problem to be solved is reducing the amount of boilerplate code that needs to be written, not using an LLM to do it.

    I'm not smart enough to write a language or modify one, so this opinion is strongly spoken, weakly held.

    4. ulrikrasmussen ◴[] No.44423715[source]
    I have the same experience as you. It has definitely increased the speed with which I can look up solutions to isolated problems, but for writing code using agents and coming up with designs, the speed is limited by the speed with which I as a human can perform code reviews. If I were surrounded by human 10x developers who wrote all the code for me and left it for me to review, I doubt my output would be 4x.
    5. eru ◴[] No.44423716[source]
    I wouldn't be too worried about the atrophy. Or at least not much more than you already were: you get the same atrophy effect just from IDEs and compiler errors and warnings.

    To give a concrete example: I'm pretty good at doing Python coding on a whiteboard, because that's what I practiced for job interviews, and when I first learned Python I used Vim without setting up any Python integration.

    I'm pretty terrible at doing Rust on a whiteboard, because I only learned it when I had a decent IDE and later even AI support.

    Nevertheless, I don't think I'm a better programmer in Python.

    6. alyandon ◴[] No.44423749[source]
    I lean a bit on LLMs now for initial research/prototype work, and it is quite a productivity boost vs. random searches on the web. I generally do not commit the code they generate, because they tend to miss subtle corner cases unless the prompts I give them are extremely detailed, which is not super useful to me. If an LLM does produce something of sufficient quality to be committed, I clearly mark it as (at least partially) LLM-generated and fully reviewed by myself before I mash the commit button and put my name on it.

    Basically, I treat LLMs like a fairly competent unpaid intern and extend about the same level of trust to the output they produce.

    7. ninetyninenine ◴[] No.44423843[source]
    Don’t ask the agent to do something complex. Break it down into 10 manageable steps. You are the tester and verifier of each step.

    What you will find is that the agent is much more successful in this regard.

    The LLM has certain intrinsic limits that mirror ours: like us, it cannot actually write 10,000 lines of code and have everything working in one go. It does better when you develop incrementally and verify each increment. The smaller the increments, the better it performs.

    Unfortunately, the chain-of-thought process doesn't really do this. It can come up with steps, but sometimes the steps are too big, and it almost never properly verifies that things are working after each increment. That's why you have to put yourself in the loop here.

    Letting the computer run tests and verify that an application works as expected at each step, and even come up with what verification means, is a bit of what's missing here. Although this part isn't automated yet, I think it can easily be automated, with humans becoming less and less involved and distancing themselves into a more and more supervisory role.
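
    (As an illustration of the increment-and-verify loop described above - not how any particular agent actually works - here is a minimal Python sketch. It assumes a pytest test suite exists; apply_step is a stand-in for whatever the agent or the human does at each step.)

        import subprocess

        def tests_pass() -> bool:
            # The verification gate: the project's test suite must be green.
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            return result.returncode == 0

        def apply_step(step: str) -> None:
            # Stand-in: hand one small, well-scoped step to the agent (or do it yourself).
            print(f"applying step: {step}")

        steps = [
            "add the data model",
            "wire up the endpoint",
            "handle the error case",
        ]

        for step in steps:
            apply_step(step)
            if not tests_pass():
                print(f"verification failed after {step!r}; fix before moving on")
                break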

    replies(1): >>44423881 #
    8. alyandon ◴[] No.44423881[source]
    Spot on - that is exactly my experience when working with LLMs.
    9. SatvikBeri ◴[] No.44423996[source]
    I get very good results from Claude Code, something like a 3x. It's enough that my cofounders noticed and commented on it, and has had a lot of measurable results in terms of saving $ on infrastructure.

    The first thing I'll note is that Claude Code with Claude 4 has been vastly better than everything else for me. Before that it was more like a 5-10% increase in productivity.

    My workflow with Claude Code is very plain. I give it a relatively short prompt and ask it to create a plan. I iterate on the plan several times. I ask it to give me a more detailed plan. I iterate on that several times, then have Claude write it down and /clear to reset context.

    Then, I'll usually do one or more "prototype" runs where I implement a solution with relatively little attention to code quality, to iron out any remaining uncertainties. Then I throw away that code, start a new branch, and implement it again while babysitting closely to make sure the code is good.

    The major difference here is that I'm able to test out 5-10 designs in the time I would normally try 1 or 2. So I end up exploring a lot more, and committing better solutions.

    10. fcatalan ◴[] No.44424208[source]
    I use it a lot for reducing friction. When I procrastinate about starting something I ask the AI to come up with a quick plan. Maybe I'll just follow the first step, but it gets me going.

    Sometimes I'll even go a bit crazy on this planning thing and do things a bit similar to what this guy shows: https://www.youtube.com/watch?v=XY4sFxLmMvw I tend to steer the process more myself, but typing whatever vague ideas are in my mind and ending up in minutes with a milestone and ticket list is very enabling, even if it isn't perfect.

    I also do more "drive by" small improvements:

    - Annoying things that weren't important enough to justify a side quest of writing a shell script now have a shell script or an Ansible playbook.

    - That ugly CSS in an internal tool untouched for 5 years? Fixed in 1 minute.

    - The small prototype put into production with 0 documentation years ago? I ask an agentic tool to provide a basic readme and then edit it a bit so it doesn't lie; well worth 15 minutes.

    I also give it a first shot at finding the cause of bugs/problems. Most of the time it doesn't work, but in the last week it found right away the cause of some long-standing subtle problems we had in a couple of places.

    I have also sometimes had luck providing it with single functions or modules that work but need some improvement (make this more DRY, improve error handling, log this or that...). Here I'm very conservative with the results because, as you said, it can be dangerous.

    So am I more productive? I guess so. I don't think 4x or even 2x, and I don't think projects are getting done much faster overall, but stuff that wouldn't have been done otherwise is being done.

    What usually falls flat is trying to go down a more "vibe-coding" route. I have tried to come up with a couple of small internal tools and things like that, and after promising starts, the agents just can't deal with the complexity without needing so much help that I'd just go faster by myself.

    11. ianm218 ◴[] No.44424679[source]
    I'm in the same boat of some of the other commenters using Claude Code but I have found it atleast a 2X in routine backend API development. Most updates to our existing APIs would be on the order of "add one more partner integration following the same interface here and add tests with the new response data". So it is pretty easy to give it to claude code, tell them where to put the new code, tell it how to test, and let it iterate on the tests. So something that may have taken a full afternoon or more to get done gets done much faster and often with a lot more test coverage.