
97 points by marxism | 1 comment

I've been trying to articulate why coding feels less pleasant now.

The problem: You can't win anymore.

The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.

The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

So I feel pressure to always, always start by info-dumping the problem description to AI and gambling on a one-shot. Voice-transcribe for 10 minutes, hit send, hope I get something on the first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`

If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."

If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."

The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.

The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

I was thinking of all the classic exploratory-learning blog posts. Things that sounded fun: writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me how much my reaction to these posts has changed. Three years ago I would bookmark a post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.

Am I alone in this?

Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pressure to skip understanding and just ship. I hate it.

unnouinceput ◴[] No.45572680[source]
No, it didn't. Or rather, it did for the run-of-the-mill coding-camp wannabe programmer, which is what you sound like. For me it's the opposite. That's because I don't do run-of-the-mill web pages; my work is very specific, and the so-called "AI" (which is actually just Googling with extra spice on top; I don't think I'll see true AI in my lifetime) is too stupid to do it. So I have to break it down into several sessions, giving only partial details (divide and conquer), otherwise it will confabulate stupid code.

Before this "AI" I had to do the mundane boilerplate tasks. Now I don't. That's a win for me. The grand thinking and the whole picture of the project are still mine, and I keep trying to hand them to the "AI" from time to time, except each time it spits out BS. It also helps that, as a freelancer, my stuff gets used by my client directly in production (no manager above me, who has a group leader, who has a CEO, who deals with the client's IT department, which finally has the client as the end user). That's another good feeling. Corporations with layers upon layers are what suck the soul out of programming joy. Freelancing allowed me to avoid that.

replies(1): >>45573255 #
marxism ◴[] No.45573255[source]
I'm curious: could you give me an example of code that AI can't help with?

I ask because I've worked across different domains: V8 bytecode optimizations, HPC at Sandia (differential equations on 50k nodes, adaptive mesh refinement heuristics), resource allocation and admission control for CI systems, and a custom UDP network stack for mobile apps https://neumob.com/. In every case I can remember, today's AI coding tools would have been useful.

You say your work is "very specific" and AI is "too stupid" for it. This just makes me very curious: what does that look like concretely? What programming task exists that can't be decomposed into smaller problems?

My experience as an engineer is that I'm already just applying known solutions that researchers figured out. That's the job. Every problem I've encountered in my professional life was solvable: you decompose it, you research an algorithm (or an approximation), you implement it. Sometimes the textbook says the math is "graduate-level" but you just... read it, and it's tractable. You linearize, you approximate, you use penalty and barrier methods. Not a theoretically optimal solution, but it gets the job done.
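To make the "penalty methods" move concrete, here's a toy sketch of the pattern: minimize x² + y² subject to x + y = 1 by folding the constraint into a quadratic penalty and running plain gradient descent. The function name, step sizes, and problem are mine, purely for illustration; the exact answer is x = y = 0.5, and the penalized solution lands close to it.

```python
# Quadratic-penalty method, toy example:
#   minimize f(x, y) = x^2 + y^2  subject to  x + y = 1.
# We instead minimize f + mu * (x + y - 1)^2, unconstrained.
def solve_penalty(mu=100.0, lr=1e-3, steps=20000):
    x, y = 0.0, 0.0
    for _ in range(steps):
        c = x + y - 1.0            # constraint violation
        gx = 2 * x + 2 * mu * c    # gradient of penalized objective in x
        gy = 2 * y + 2 * mu * c    # gradient in y
        x -= lr * gx
        y -= lr * gy
    return x, y

x, y = solve_penalty()
# x and y each end up near 0.5; a larger mu tightens the constraint further.
```

The point of the sketch is the workflow, not the specific numbers: approximate the constrained problem with an unconstrained one, accept a slightly inexact answer, and ship.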

I don't see a structural difference between "turning JSON into pretty HTML" and using OR-tools to schedule workers for a department store. Both are decomposable problems. Both are solvable. The latter just has more domain jargon.

So I'm asking: what's the concrete example? What code would you write that's supposedly beyond this?

I frequently see this kind of comment in AI threads: the claim that there is some more sophisticated, AI-proof kind of programming out there.

Let me try to clarify another way. Are you claiming that, say, 50% of total economic activity is beyond AI? Or is it some niche role that only contributes 3% to GDP? Because it's very different if this "difficult" work is everywhere or confined to a few small pockets.

replies(1): >>45582839 #
unnouinceput ◴[] No.45582839[source]
Did you play Assassin's Creed Valhalla? In it there is a board game called Orlog. Go and make that game multiplayer so you can play with your spouse/son/friend. Come back to me once you're done and we'll see how much time it took you.

Or remake the Gwent board game that is in Witcher 3.

Make either of them a mobile game so you can enjoy it in the same room with the person you love. Also make sure you can create multiple decks (for Gwent) / multiple starting gods (for Orlog), so you just select one at the start and hit "ready to play" (or whatever). You'll know what I mean once you understand either of these games.

Good luck having either of them made in one session without breaking the big picture into a million pieces while keeping that big picture in your head.

replies(2): >>45586485 #>>45586632 #
marxism ◴[] No.45586632[source]
I'm trying to understand where our viewpoints differ, because I suspect we have fundamentally different mental models about where programming difficulty actually lives.

It sounds like you believe the hard part is decomposing problems - breaking them into subproblems, managing the "big picture," keeping the architecture in your head. That this is where experience and skill matter.

My mental model is the opposite: I see problem decomposition as the easy part - that's just reasoning about structure. You just keep peeling the onion until you hit algorithmically irreducible units. The hard part was always at the leaf nodes of that tree.

Why I think decomposition is straightforward:

People switch jobs and industries constantly. You move from one company to another, one domain to another, and you're productive quickly. How is that possible if decomposition requires deep domain expertise?

I think it's because decomposition is just observing how things fit together in reality. The structure reveals itself when you look at the problem.

Where I think the actual skill lived:

The leaf nodes. Not the chipping away until you're left with "this is a min-cut problem" - anyone off the street can do that. The hard part was:

- Searching for the right algorithm/approach for your specific constraints

- Translating that solution into your project's specific variables, coordinate system, and bookkeeping

Those two things - search and translation - are precisely what AI excels at.
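To make that leaf concrete: once the peeling stops at "this is a min-cut problem", what remains is searching for a known algorithm and translating it into your own variables. A self-contained Edmonds-Karp sketch of that kind of leaf (the toy graph and names are mine, not from the thread; by max-flow min-cut duality, the returned flow value equals the min-cut capacity):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: BFS augmenting paths. Max s-t flow == min s-t cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push flow through it.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual edge for later undo
            v = u
        total += bottleneck

# Toy 4-node network: capacity[u][v] is the edge capacity from u to v.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
```

The algorithm itself is textbook search; the "translation" part - mapping your real entities onto the capacity matrix and reading the cut back out - is the bookkeeping the comment above is pointing at.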

What I think AI changed:

I could walk into any building on Stanford campus right now, tap a random person (no CS required!) on the shoulder, and they could solve those leaf problems using AI tools. It no longer requires years of experience and learned skills.

I think this explains our different views: If you believe the skill is in decomposition (reasoning about structure), then AI hasn't changed much. But if the skill was always in search and translation at the leaf nodes (my view), then AI has eliminated the core barrier that required job-specific expertise.

Does this capture where we disagree? Am I understanding your position correctly?