
165 points by gdudeman | 1 comment
amelius No.44481831
> I've been building software for the Mac since 2008

Ok, so they knew where Claude went wrong and could correct for it.

replies(2): >>44481884 #>>44481892 #
simonw No.44481884
Right: tools like Claude Code amplify existing skills and expertise; they don't replace them.
replies(2): >>44482030 #>>44483160 #
risyachka No.44482030
The main issue here is you can’t acquire expertise by using an LLM to code for you.

So unless you have 15+ years of experience, better to add more reps. You can always switch to LLM code assist in a blink; there is no barrier to entry at all.

replies(3): >>44482065 #>>44482329 #>>44482373 #
AnotherGoodName No.44482373
I tend to learn via code examples. I have 20+ years of experience, and LLMs have still taught me about features of libraries I was using but had overlooked.

I honestly think they add to expertise.

replies(1): >>44482441 #
freedomben No.44482441
I wonder how common this is. Personally, looking at code examples can be helpful, but the vast majority of my learning comes from doing, failing, trying a different approach, succeeding, rinsing and repeating.

I also haven't had much luck getting LLMs to generate useful code. I'm sure part of that is that the stack I'm using (Elixir) is much less popular than many others, but I have tried everything, even the new phoenix.new, and it only gets to an 80-90% solution; the remaining percentage is full of bugs or terrible design patterns that will absolutely bite in the future. In nearly everything I've tried, it introduces bugs, and hunting those down is worse to me than if I had just done the work manually in the first place. I have spent hours trying to coach the AI through a particular task, only to have the end solution need to be thrown away and started from scratch.

Speaking personally, my skills are atrophying the more I use AI tools. It still feels like a worthwhile trade-off in many situations, but a trade-off it is.

replies(1): >>44483037 #
cosmic_cheese No.44483037
I also have not had much luck with LLMs on anything of substantial complexity.

Where I’ve found them best is for generating highly focused examples of specific APIs or concepts. They’re much better at that, though hallucinations still show up from time to time.