
287 points moonka | 9 comments
rqtwteye ◴[] No.43562536[source]
I have been in the workforce for almost 30 years now and I believe that everybody is getting more squeezed, so they don't have the time or energy to do a proper job. The expectation is to get it done as quickly as possible and not do more unless told to.

In SW development in the 90s I had much more time for experimentation to figure things out. In recent years you often have some manager to whom you basically have to justify everything you do, and there's always a huge pile of work that never gets smaller. So you just hurry through your tasks.

I think Google had it right for a while with their 20% time, where people could work on what they wanted to. As far as I know that's over.

People need some slack if you want to see good work. They aren't machines that can run constantly at 100% utilization.

Sparkyte ◴[] No.43562911[source]
Definitely squeezed.

They say AI, but AI isn't eliminating programming. I've written a few applications with AI assistance. It probably would've been faster if I had written them myself. The problem is that it doesn't have context, wildly assumes what your intentions are, and cheats outcomes.

It will replace juniors for that one-liner, but it won't replace a senior developer who knows how to write code.

NERD_ALERT ◴[] No.43562958[source]
I felt this way with Github Copilot but I started using Cursor this week and it genuinely feels like a competent pair programmer.
jdcasale ◴[] No.43563270[source]
I recently tried Cursor for about a week and I was disappointed. It was useful for generating code that someone else has definitely written before (boilerplate etc), but any time I tried to do something nontrivial, it failed no matter how much poking, prodding, and thoughtful prompting I tried.

Even when I asked it for things like refactoring a relatively simple Rust file to be more idiomatic or better organized, it consistently generated code that did not compile and was unable to fix the compile errors across 5 or 6 repromptings.
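(For a sense of what "more idiomatic" means here, a minimal, made-up illustration of the kind of rewrite being asked for: a manual index loop versus the equivalent iterator chain. This is just a sketch, not the actual file from the anecdote.)

```rust
// Non-idiomatic: manual index loop building up a mutable Vec.
fn squares_of_evens_manual(nums: &[i64]) -> Vec<i64> {
    let mut out = Vec::new();
    for i in 0..nums.len() {
        if nums[i] % 2 == 0 {
            out.push(nums[i] * nums[i]);
        }
    }
    out
}

// Idiomatic: the same transformation as an iterator chain.
fn squares_of_evens(nums: &[i64]) -> Vec<i64> {
    nums.iter()
        .filter(|&&n| n % 2 == 0)
        .map(|&n| n * n)
        .collect()
}

fn main() {
    let nums = [1, 2, 3, 4];
    // Both versions agree on the result.
    assert_eq!(squares_of_evens_manual(&nums), squares_of_evens(&nums));
    println!("{:?}", squares_of_evens(&nums)); // [4, 16]
}
```

The point of such a refactor is behavior-preserving restructuring, which is exactly why non-compiling output is a dealbreaker: the compiler is the only cheap check that the rewrite didn't change semantics.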

For what it's worth, a lot of SWE work is technically trivial -- it makes that much quicker, so there's obviously some value there, but if we're comparing it to a pair programmer, I would definitely fire a dev who had this sort of extremely limited complexity ceiling.

It really feels to me (just vibes, obviously not scientific) like it is good at interpolating between things in its training set, but is not really able to do anything more than that. Presumably this will get better over time.

1. dughnut ◴[] No.43565115[source]
If you asked a junior developer to refactor a rust program to be more idiomatic, how long would you expect that to take? Would you expect the work to compile on the first try?

I love Cline and Copilot. If you carefully specify your task, provide context for uncommon APIs, and keep the scope limited, then the results are often very good. It’s code completion for whole classes and methods or whole utility scripts for common use cases.

Refactoring to taste may be underspecified.

2. Retric ◴[] No.43568035[source]
What matters here is the communication overhead, not how long between responses. If I'm indefinitely spending more time handholding a jr dev than they save me, eventually I just fire 'em; same with code gen.
3. djmips ◴[] No.43570883[source]
A big difference is that the jr. dev is learning, while the AI is stuck at whatever competence was baked in at the factory. You might be more patient with the jr if you saw positive signs that the handholding was paying off.
4. Retric ◴[] No.43571121{3}[source]
That was my point, though I may not have been clear.

Most people do get better over time, but for those who don't (or for LLMs) it's just a question of whether their current skills are a net benefit.

I do expect future AI to improve. My expectation is that it's going to be a long slow slog, just like with self-driving cars etc., but novel approaches regularly turn extremely difficult problems into seemingly trivial exercises.

5. jdcasale ◴[] No.43571813[source]
"If you asked a junior developer to refactor a rust program to be more idiomatic, how long would you expect that to take? Would you expect the work to compile on the first try?"

The purpose of giving that task to a junior dev isn't to get the task done, it's to teach them -- I will almost always be at least an order of magnitude faster than a junior for any given task. I don't expect juniors to be similarly productive to me, I expect them to learn.

The parent comment also referred to a 'competent pair programmer', not a junior dev.

My point was that for the tasks I wanted to use the LLM for, frequently there was no amount of specificity that could help the model solve them -- I tried for a long time, and if the task wasn't obvious to me, the model generally could not solve it. I'd end up in a game of trying to do nondeterministic/fuzzy programming in English instead of just writing some code to solve the problem.

Again I agree that there is significant value here, because there is a ton of SWE work that is technically trivial, boring, and just eats up time. It's also super helpful as a natural-language info-lookup interface.

6. dughnut ◴[] No.43576581{3}[source]
I would be more patient with an AI that only costs me a fraction of a cent an hour.
7. dughnut ◴[] No.43576635[source]
Personally, I think training someone on the client’s dime is pretty unethical.
8. Retric ◴[] No.43595640{4}[source]
The value of my time dwarfs the cost of using an AI.

That said, you are underestimating AI costs if you think it works out to a fraction of a cent per hour.
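(A back-of-envelope sketch of why heavy use lands well above a fraction of a cent per hour. Every number below -- token volumes and per-million-token prices -- is an assumption for illustration, not real vendor pricing.)

```rust
// Hourly cost of LLM usage from assumed token throughput and prices.
fn cost_per_hour(
    input_tokens: f64,
    output_tokens: f64,
    usd_per_million_input: f64,
    usd_per_million_output: f64,
) -> f64 {
    input_tokens / 1_000_000.0 * usd_per_million_input
        + output_tokens / 1_000_000.0 * usd_per_million_output
}

fn main() {
    // Assumed: agentic use burning 200k input / 50k output tokens per hour,
    // at hypothetical rates of $3 / $15 per million tokens.
    let hourly = cost_per_hour(200_000.0, 50_000.0, 3.0, 15.0);
    println!("estimated cost: ${:.2}/hour", hourly); // $1.35/hour
}
```

Even so, a dollar-and-change per hour is still dwarfed by a senior developer's rate, which is the other half of the argument.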

9. jdcasale ◴[] No.43628863{3}[source]
You have misunderstood something here.

I (like a very large plurality, maybe even a majority, of devs) don't work for a consulting firm. There is no client.

I've done consulting work in the past, though. Any leader who does not take into account (at least to some degree) relative educational value of assignments when staffing projects is invariably a bad leader.

All work is training for a junior. In this context, the idea that you can't ethically train a junior "on a client's dime" is exactly equivalent to saying that you can't ever ethically staff juniors on a consulting project -- that's a ridiculous notion. The work is going to get done, but a junior obviously isn't going to be as fast as I am at any task.