
97 points by marxism | 1 comment

I've been trying to articulate why coding feels less pleasant now.

The problem: You can't win anymore.

The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.

The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

So I feel pressure to always, always start by info-dumping the problem description to AI and gambling on a one-shot. Voice transcription for 10 minutes, hit send, hope I get something on the first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`

If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."

If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."

The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.

The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

I was thinking of all the classic exploratory learning blog posts. Things that sounded fun: writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me how much my reaction to these blog posts has changed. 3 years ago I would have bookmarked a post like that to try it out myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.

Am I alone in this?

Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pressure to skip understanding and just ship. I hate it.

mhaberl ◴[] No.45572798[source]
>The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing.

That’s the promise, but not the reality :) Try this: pick a random startup idea from the internet, something that would normally take 3–6 months to build without AI. Now go all in with AI. Don’t worry about enjoyment; just try to get it done.

You’ll notice pretty quickly that it doesn’t get you very far. Some things go faster, until you hit a wall (and you will hit it). Then you either have to redo parts or step back and actually understand what the AI built so far, so you can move forward where it can’t.

>I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

It was "stupid" then, too: better alternatives already existed. But you do it to learn.

> Am I alone in this?

Absolutely not. But understand that it is just a tool, not a replacement. Use it and you will soon find the joy again; it is there.

replies(1): >>45576620 #
1. saxenaabhi ◴[] No.45576620[source]
That's not my experience. Over Christmas I rewrote a restaurant POS application from Laravel to Vue/Wrangler using Bolt + ChatGPT. My exact steps:

1) I took my db schema and got ChatGPT to convert it to TypeScript types and stub data.

2) Uploaded these types to Bolt and asked it, one by one, to create Vue components to display this data (Catalog screen, Payment Dialogs, Tables page, etc.) and to use fetcher functions that return stub data.

3) Finally, I asked it to replace the stub data with supabase.rpc calls and to create Postgres functions to serve this data.
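The stub-first workflow in the steps above can be sketched roughly as follows. All names here (`MenuItem`, `fetchCatalogStub`, `totalCents`) are invented for illustration, not from the actual app; the point is that components depend on a fetcher interface, so the stub in step 2 can later be swapped for a real supabase.rpc-backed implementation in step 3 without touching the components:

```typescript
// Hypothetical sketch of the stub-first workflow; names are invented.

// Step 1: types generated from the db schema, plus stub data.
type MenuItem = { id: number; name: string; priceCents: number };

const stubCatalog: MenuItem[] = [
  { id: 1, name: "Espresso", priceCents: 250 },
  { id: 2, name: "Croissant", priceCents: 320 },
];

// Step 2: components depend on a fetcher interface, not a backend.
type CatalogFetcher = () => Promise<MenuItem[]>;

const fetchCatalogStub: CatalogFetcher = async () => stubCatalog;

// Step 3 would swap in a real fetcher behind the same interface,
// e.g. one wrapping a supabase.rpc call (not shown here).

// Any component logic written against the interface works with both.
async function totalCents(fetch: CatalogFetcher): Promise<number> {
  const items = await fetch();
  return items.reduce((sum, i) => sum + i.priceCents, 0);
}
```

Because the fetcher signature stays fixed, the backend swap in step 3 is mechanical, which is exactly the kind of menial work the tools handle well.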

While I had most of the app done in a few days, testing, styling, and bug fixing took a month.

Some minor stuff was done manually by me: the receipt printer integration, because Bolt wasn't good with Epson XML or the related libraries at that time.

Finally, we released in early February and received extremely good feedback from our customers.

However, now I'm using Claude, and an even higher percentage of the code is generated by it. Our feature velocity is also great. Since launch we have added the following features in 6 months:

1) Split Table Payments
2) Payment Terminal Integration
3) Visual Floor plan viewer
4) Mobile POS for waiters without tablet
5) Reports, Dashboard, Import/Export
6) Loyalty programs with many different types of programs
7) Self-service Webshop with realtime group ordering
8) Improved tax handling
9) Multicourse orders ("La'suite")
10) Many other smaller features

This would be very hard to achieve without AI for most one-person engineering teams. Although tbf not impossible.

> The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

I think the OP introduces a strawman here, since, as many people have pointed out, the labour saving happens in automating menial tasks, and no one sane should give up "understanding the details".

> I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

On the contrary. Reading ToyDB[1] source code helped me understand MVCC and isolation levels. That's knowledge that's valuable for a systems architect, since in the end LLMs are just fancy word generators.

[1] https://github.com/erikgrinaker/toydb
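The core MVCC idea that reading a toy database teaches can be sketched in a few lines. This is purely illustrative and is not ToyDB's actual design: each write appends a new version stamped with its transaction id, and a snapshot read only sees versions from transactions that began no later than the reader, so readers never block writers:

```typescript
// Toy sketch of MVCC snapshot visibility; illustrative only,
// not ToyDB's actual implementation.

type Version = { txnId: number; value: string };

class MvccStore {
  private versions = new Map<string, Version[]>();
  private nextTxn = 1;

  // Begin a transaction; its id doubles as its snapshot timestamp.
  begin(): number {
    return this.nextTxn++;
  }

  // Writes append a new version instead of overwriting in place.
  write(txnId: number, key: string, value: string): void {
    const list = this.versions.get(key) ?? [];
    list.push({ txnId, value });
    this.versions.set(key, list);
  }

  // A snapshot read sees only versions written by transactions that
  // began at or before this one, so later writers can't change what
  // an earlier reader observes.
  read(txnId: number, key: string): string | undefined {
    const visible = (this.versions.get(key) ?? []).filter(
      (v) => v.txnId <= txnId
    );
    return visible.length ? visible[visible.length - 1].value : undefined;
  }
}
```

Even this toy version makes the tradeoff concrete: reads are cheap and non-blocking, but old versions pile up, which is why real engines need garbage collection of versions no snapshot can still see.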