
421 points | briankelly | 2 comments
necovek ◴[] No.43575664[source]
The premise might possibly be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...

All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top of the module (which relies on import caching not to be re-applied), to building a config file parser by hand instead of using the stdlib one, to a race in load_json, where file existence is checked with an if and the code then carries on as if the file is certainly still there...
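For the load_json race specifically, the usual fix is EAFP: attempt the open and handle the failure, rather than checking existence first. A minimal sketch (the repo's actual signatures may differ; `load_json` and `load_config` here are illustrative), with the stdlib configparser the comment alludes to:

```python
import configparser
import json
import logging

logger = logging.getLogger(__name__)

def load_json(path):
    # EAFP: open and parse directly, catching the failure, instead of
    # an `if os.path.exists(path)` check that the file can outrun.
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        logger.warning("file not found: %s", path)
        return None

def load_config(path):
    # Stdlib INI parser instead of hand-rolled key=value splitting.
    parser = configparser.ConfigParser()
    parser.read(path)  # read() silently skips missing files by design
    return parser
```

Between the existence check and the open, another process can delete or rename the file; the try/except closes that window.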

In a nutshell, if the rest of it is like this, it simply sucks.

replies(23): >>43575714 #>>43575764 #>>43575953 #>>43576545 #>>43576732 #>>43576977 #>>43577008 #>>43577017 #>>43577193 #>>43577214 #>>43577226 #>>43577314 #>>43577850 #>>43578934 #>>43578952 #>>43578973 #>>43579760 #>>43581498 #>>43582065 #>>43583922 #>>43585046 #>>43585094 #>>43587376 #
rybosome ◴[] No.43575714[source]
Ok - not wrong at all. Now take that feedback and put it in a prompt back to the LLM.

They’re very good at honing bad code into good code with good feedback. And when you can describe good code faster than you can write it - for instance, when it uses a library you’re not intimately familiar with - this kind of coding can be enormously productive.

replies(5): >>43575812 #>>43575838 #>>43575956 #>>43577317 #>>43578501 #
aunty_helen ◴[] No.43575956[source]
Nah. This isn’t true. Every time you hit enter you’re not just getting a jr dev, you’re getting a randomly selected jr dev.

So, how did I end up with a logging.py, config.py, config in __init__.py, and main.py? Well, I prompted it to fix the logging setup to use a specific format.
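For reference, a hand-written version of what that prompt was after -- configure the format once, from the entry point, rather than spread across four modules -- might look like this (a sketch, not the code Cursor produced; the format string is an assumption):

```python
import logging

def setup_logging(level=logging.INFO,
                  fmt="%(asctime)s %(levelname)s %(name)s: %(message)s"):
    # Configure the root logger once, from main(), rather than as a
    # side effect of importing some module.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(fmt))
    root = logging.getLogger()
    root.handlers.clear()  # avoid duplicate handlers on re-entry
    root.addHandler(handler)
    root.setLevel(level)

def main():
    setup_logging()
    logging.getLogger(__name__).info("started")
```

Everything else just calls `logging.getLogger(__name__)` and never touches handlers or formatters.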

I use Cursor; it can spit out code at an amazing rate and has reduced the amount of docs I need to read to get something done. But after its second attempt at something, you need to jump in, do it yourself, and most likely debug what was written.

replies(1): >>43576372 #
1. skydhash ◴[] No.43576372[source]
Are you reading a whole encyclopedia each time you are assigned a task? The one thing about learning is that it compounds: you get faster the longer you use a specific technology. So unless you use a different platform for each task, I don't think you have to read that much documentation (understanding it is another matter).
replies(1): >>43578374 #
2. achierius ◴[] No.43578374[source]
This is an important distinction though. LLMs don't have any persistent 'state': they have their activations, their context, and that's it. They only know what's pre-trained and what's in their context. Now, their ability to do in-context learning is impressive, but you're fundamentally still stuck with the deviations and, eventually, the forgetting that characterize these guys -- while a human, though less quick on the uptake, will nevertheless 'bake in' the lessons in a way that LLMs currently cannot.

In some ways this is even more impressive -- every prompt you make, your LLM is in effect re-reading (and re-comprehending) your whole codebase, from scratch!