pyman No.44481864
Two years ago, I saw myself as a really good Python engineer. Now I'm building native mobile apps, desktop apps that talk to Slack, APIs in Go, and full web apps in React, in hours or days!

It feels like I've got superpowers. I love it. I feel productive, fast, creative. But at night, there's this strange feeling of sadness. My profession, my passion, all the things I worked so hard to learn, all the time and sacrifices, a machine can now do most of it. And the companies building these tools are just getting started.

What does this mean for the next generation of engineers? Where's it all heading? Do you feel the same?

simonw No.44481936
The reason you can use these tools so effectively across native, mobile, Go, React etc is that you can apply most of what you learned about software development as a Python engineer in these new areas.

The thing LLMs replace is the need for understanding all of the trivia required for each platform.

I don't know how to write a for loop in Go (without looking it up), but I can write useful Go code now without spinning back up on Go first.
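
That's exactly the kind of trivia I mean. Go folds every loop into a single "for" keyword (there's no "while") - a detail I'd otherwise look up each time. A minimal sketch:

    package main

    import "fmt"

    func main() {
        // Go has one loop keyword: "for" covers both C-style and range forms.
        for i := 0; i < 3; i++ { // C-style form
            fmt.Println(i)
        }
        nums := []int{10, 20, 30}
        for i, n := range nums { // range form over a slice
            fmt.Println(i, n)
        }
    }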

I still need to conceptually understand for loops, and what Go is, and structured programming, and compilers, and build and test scripts, and all kinds of other base level skills that people without an existing background in programming are completely missing.

I see LLMs as an amplifier and accelerant. I've accumulated a huge amount of fuzzy knowledge across my career - with an LLM I can now apply that fuzzy knowledge to concrete problems in a huge array of languages and platforms.

Previously I would stay in my lane: I was fluent in Python, JavaScript and SQL so I used those to solve every problem because I didn't want to take the time to spin up on the trivia for a new language or platform.

Now? I'll happily use things like Go and Bash and AppleScript and jq and ffmpeg and I'm considering picking up a Swift project.

snoman No.44482198
This is difficult to express, because I too have enjoyed using an LLM lately and have felt a productivity increase. But I think there's a false sense of security being expressed in your writing, and it underlies one of the primary risks I see with LLMs for programming.

With minor exceptions, moving from one language to another isn't a matter of simple syntax, trivia, or swapping standard libraries. Certainly, expert beginners espouse that all the time, but languages often have fundamental concepts that they're built on, and those need to be understood in order to be effective with them. For example: have you ever seen a team move from Java to Scala, JS to Java, or C# to Python (all of which I've seen) where the developers didn't try to understand the language they were moving to? Without fail, they tried to force the concepts that were important in their prior language onto the new one, to abysmal results.
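
A made-up Go sketch of the failure mode I mean (Go rather than those exact pairs, but the same carry-over problem): a developer bringing exception-style habits from Java or C# into a language where failure is idiomatically an ordinary return value:

    package main

    import (
        "errors"
        "fmt"
    )

    // Exception-style thinking carried over from Java or C#: panic on failure,
    // expecting some caller far away to recover(). Not how Go code is written.
    func divCarriedOver(a, b int) int {
        if b == 0 {
            panic("division by zero")
        }
        return a / b
    }

    // Idiomatic Go: failure is an ordinary value the caller must handle.
    func div(a, b int) (int, error) {
        if b == 0 {
            return 0, errors.New("division by zero")
        }
        return a / b, nil
    }

    func main() {
        if q, err := div(10, 0); err != nil {
            fmt.Println("handled:", err)
        } else {
            fmt.Println(q)
        }
    }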

If you’re writing trivial scripts, or one-off utils, it probably doesn’t build up enough to matter, and feels great, but you don’t know what you don’t know, and you don’t know what to look for. Offloading the understanding of the concepts that are important for a language to an LLM is a recipe for a bad time.

simonw No.44482541
> but languages often have fundamental concepts that they're built on, and those need to be understood in order to be effective with them

I completely agree. That's another reason I don't feel threatened by non-programmers using LLMs: to actually write useful Go code you need to figure out goroutines, for React you need to understand the state model and hooks, for AppleScript you need to understand how apps expose their features, etc etc etc.

All of these are things you need to figure out, and I would argue they are conceptually more complex than for loops etc.

But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity from memorizing the syntax for a Go for loop.

I can come back to some Go code a year later and remind myself how goroutines work very quickly, because I'm an experienced software engineer with a wealth of related knowledge about concurrency primitives to help me out.
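
To make that concrete, the thing I re-load is the conceptual model - cheap concurrent functions communicating over channels rather than sharing memory - not the syntax. A toy sketch:

    package main

    import "fmt"

    func main() {
        results := make(chan string)

        // Launch two goroutines; each sends its result over the channel.
        for _, name := range []string{"alpha", "beta"} {
            go func(n string) {
                results <- "done: " + n // stand-in for real work
            }(name)
        }

        // Receive both results; a channel receive blocks until a send arrives.
        for i := 0; i < 2; i++ {
            fmt.Println(<-results)
        }
    }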

danw1979 No.44483033
What are your thoughts on humans learning from LLM output?

I’ve been encouraging the new developers I work with to ensure they read the docs and learn the language to ensure the LLM doesn’t become a crutch, but rather a bicycle.

But it occurred to me recently that I probably learned most of what I know from examples of code written by others.

I’m certain my less experienced colleagues are doing the same, but from Claude rather than Stack Overflow…

skydhash No.44483164
Not Simon, but here is my take.

Code is a practical realization of concepts that exist outside of code. Let's take concurrency as an example. It's common across many domains: two (or more) independent actors suddenly need to share a single resource that can't be used by both at the same time. In many cases it can be resolved (or not) in an ad-hoc manner, and sometimes there's a basic agreement that takes place.

But for computers, you need the solution to be precise, eliminating all error cases. So we go on to define primitives, how they interact with each other, and the sequence of the actions. These are still not code. Once we are precise enough, we translate them into code.
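
For instance, a mutex is one such primitive: the precise, code-level form of the agreement "only one actor may use the resource at a time". A toy Go sketch (made up to illustrate, not taken from anywhere):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex // the primitive: mutual exclusion
            balance int        // the shared resource
            wg      sync.WaitGroup
        )

        // Two independent actors contending for one resource.
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock() // only one actor may hold the resource at a time
                balance += 10
                mu.Unlock()
            }()
        }

        wg.Wait()
        fmt.Println(balance) // always 20; the mutex prevents a lost update
    }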

So if you're learning from code, you are tracing back to the original concept. And we can do so because we are good at grasping patterns.

But generated code is often a blend of several patterns at once, and some of them aren't even relevant: during training, the algorithm detected similarity, and we know things can look similar while actually being very different.

So I'm wary of learning from LLMs, because it's a land of mirages.