425 points sfarshid | 1 comments

bwestergard No.45005722
There are always two major results from any software development process: a change in the code and a change in cognition for the people who wrote the code (whether they did so directly or with an LLM).

Python and TypeScript are elaborate formal languages that emerged from a lengthy process of development involving thousands of people around the world over many years. They are non-trivially different, and it's neat that we can port a library from one to the other quasi-automatically.
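One small illustration of my own (not taken from the ported library) of how the two differ beneath the surface, written in Python with the TypeScript behaviour noted in a comment: Python integers are arbitrary precision, while TypeScript numbers are IEEE-754 doubles, so even simple arithmetic can silently change meaning in a "direct" port.

  # Python: integers are arbitrary precision, so this is exact.
  print(2 ** 63 + 1)   # 9223372036854775809
  # TypeScript: numbers are 64-bit floats, so precision is lost
  # above 2**53 and (2 ** 63 + 1) === 2 ** 63 evaluates to true.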

The difficulty, from an economic perspective, is that the "agent" workflow dramatically alters the cognitive demands of the initial development process. It is plain to see that the developers who prompted an LLM to generate this library will not have the same familiarity with the resulting code that they would have gained by writing it directly.

For some economic purposes, this altering of cognitive effort, and the dramatic diminution of its duration, probably doesn't matter.

But my hunch is that most of the economic value of code is contingent on there being a set of human beings familiar with the code in a way that comes only from having written it directly.

Denial of this basic reality was an economic problem even before LLMs: how often did churn in a development team result in a codebase that no one could maintain, undermining the long-term prospects of a firm?

replies(7): >>45008527 #>>45008857 #>>45009017 #>>45010970 #>>45011357 #>>45012926 #>>45013799 #
AdieuToLogic No.45008857
> But my hunch is that most of the economic value of code is contingent on there being a set of human beings familiar with the code in a way that comes only from having written it directly.

This reminds me of a software engineering axiom:

  When making software, remember that it is a snapshot of 
  your understanding of the problem.  It states to all, 
  including your future-self, your approach, clarity, and 
  appropriateness of the solution for the problem at hand.
replies(1): >>45011084 #
wiz21c No.45011084
Yes! But there's code and code. No disrespect to anyone, but there is a difference between writing a new algorithm, say one for optimizing gradient descent, and writing code to display a simple web form.

The first is usually short and requires a very deep understanding of one or two profound, new ideas. The second is usually very large and requires a shallow understanding of many not-so-new ideas (which are usually a reflection of the organisation that produced the code).
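To make the first kind concrete, here is a minimal Python sketch of my own (toy loss, invented constants, not anyone's production code) of gradient descent with momentum. It is only a few lines, but nearly every line encodes a non-obvious idea about why the update converges:

  # Minimize (w - 3)^2 with momentum; all values are toy choices.
  def grad(w):
      return 2 * (w - 3.0)        # derivative of (w - 3)**2

  w, v = 0.0, 0.0                 # parameter and velocity
  lr, mu = 0.1, 0.9               # step size and momentum factor
  for _ in range(500):
      v = mu * v - lr * grad(w)   # velocity accumulates a running direction
      w = w + v                   # momentum smooths and accelerates the steps
  print(w)                        # prints a value very close to 3.0

The web-form kind of code inverts this profile: each line is obvious in isolation, and the difficulty lies only in how many organisational conventions it has to respect at once.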

My feeling is that, given a sufficiently long context window, an LLM will be able to work through the second kind of project very easily. It will also be very good at showing that the first kind of project is not so new after all, undercutting everyone who can't come up with genuinely new ideas.

In both cases, it'll pressure institutions to employ fewer IT specialists...

As someone who trained specifically in computer science, I'm a bit scared :-/

replies(3): >>45011798 #>>45014686 #>>45020370 #
dimitri-vs No.45014686
As someone who has used coding agents extensively for the past year, the problem I've found is that they "move fast and break things" a little too well. It turns out that the act of writing code makes you think through your requirements carefully and understand the full scope of the problem you are trying to solve.

This has created a new problem: it's a little too easy to ask the AI agent to refactor your backend or migrate to a different platform at any time, and have it wipe out months of hard-learned business logic that it deems "obsolete".
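To make that concrete, here is a hypothetical Python sketch (invented names, IDs, and constants, not from any real codebase) of the kind of dead-looking branch that encodes hard-learned business logic, and that an agent "cleaning up" a backend is likely to delete first:

  from dataclasses import dataclass

  SHIPPING_FLAT_RATE = 7.50  # hypothetical constant

  @dataclass
  class Item:
      price: float
      qty: int

  @dataclass
  class Customer:
      id: int

  def invoice_total(items, customer):
      total = sum(i.price * i.qty for i in items)
      # Looks like dead code, but encodes a 2019 contract
      # concession: this customer prepaid shipping, and removing
      # the branch silently re-bills them. Nothing in the types
      # or names says so.
      if customer.id == 4471:
          total -= SHIPPING_FLAT_RATE
      return total

  print(invoice_total([Item(10.0, 2)], Customer(4471)))  # 12.5

Nothing distinguishes that branch from genuinely obsolete code except the history in someone's head, which is exactly the familiarity the parent comments are worried about losing.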