1479 points sandslash | 52 comments

abdullin ◴[] No.44316210[source]
Tight feedback loops are the key to working productively with software. I see that in codebases of up to 700k lines of code (legacy 30-year-old 4GL ERP systems).

The best part is that AI-driven systems are fine running even tighter loops than a sane human would tolerate.

E.g. running the full linting, testing and E2E/simulation suite after any minor change. Or generating 4 versions of a PR for the same task so that the human can just pick the best one.
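
To make the first point concrete, here is a minimal sketch of such a loop in Python; the `make lint` / `make test` / `make e2e` targets are placeholders for whatever gates a real project actually uses:

    import subprocess
    import sys

    # Gates run in order after every change; stop at the first failure
    # so the agent (or human) gets feedback as early as possible.
    GATES = ["make lint", "make test", "make e2e"]  # placeholder commands

    def run_gates() -> bool:
        for gate in GATES:
            if subprocess.run(gate, shell=True).returncode != 0:
                print(f"FAILED: {gate}")
                return False
        print("all gates passed")
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_gates() else 1)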

replies(7): >>44316306 #>>44316946 #>>44317531 #>>44317792 #>>44318080 #>>44318246 #>>44318794 #
1. latexr ◴[] No.44317792[source]
> Or generating 4 versions of PR for the same task so that the human could just pick the best one.

That sounds awful. A truly terrible and demotivating way to work and produce anything of real quality. Why are we doing this to ourselves and embracing it?

A few years ago, it would have been seen as a joke to say “the future of software development will be to have a million monkey interns banging on a million keyboards, submitting a million PRs, then choosing one”. Today, it’s lauded as a brilliant business and cost-saving idea.

We’re beyond doomed. The first major catastrophe caused by sloppy AI code can’t come soon enough. The sooner it happens, the better chance we have to self-correct.

replies(6): >>44317876 #>>44317884 #>>44317997 #>>44318175 #>>44318235 #>>44318625 #
2. bonoboTP ◴[] No.44317876[source]
If it's monkeylike quality and you need a million tries, it's shit. If you need four tries and one of those is top-tier professional programmer quality, then it's good.
replies(4): >>44317938 #>>44317975 #>>44318876 #>>44319399 #
3. diggan ◴[] No.44317884[source]
> A truly terrible and demotivating way to work and produce anything of real quality

You clearly have strong feelings about it, which is fine, but it would be much more interesting to know exactly why it would be terrible and demotivating, and why it cannot produce anything of quality. And what is "real quality", and does that mean "fake quality" exists?

> million monkey interns banging on one million keyboards and submit a million PRs

I'm not sure if you misunderstand LLMs, or the famous "monkeys writing Shakespeare" thought experiment, but that example is more about randomness and infinity than about probabilistic machines somewhat working towards a goal with some non-determinism.

> We’re beyond doomed

The good news is that we've been doomed for a long time, yet we persist. If you take a look at how the internet is basically held up by duct tape at this point, I think you'd feel slightly more comfortable with how crap absolutely everything is. Like 1% of software is actually Good Software while the rest barely works on a good day.

replies(2): >>44317983 #>>44318020 #
4. agos ◴[] No.44317938[source]
If the thing producing the four PRs can't distinguish the top-tier one, I have strong doubts that it can even produce it
replies(1): >>44319323 #
5. ◴[] No.44317975[source]
6. 3dsnano ◴[] No.44317983[source]
> And what is "real quality" and does that mean "fake quality" exists?

I think there is no real quality or fake quality, just quality. I am referencing the quality that Pirsig and C. Alexander have written about.

It’s… qualitative, so it’s hard to measure but easy to feel. Humans are really good at perceiving it and then making objective decisions. LLMs don’t know what it is (they’ve heard about it and think they know).

replies(2): >>44318438 #>>44319060 #
7. koakuma-chan ◴[] No.44317997[source]
> That sounds awful. A truly terrible and demotivating way to work and produce anything of real quality

This is the right way to work with generative AI, and it already is an extremely common and established practice when working with image generation.

replies(3): >>44318041 #>>44318110 #>>44318310 #
8. bgwalter ◴[] No.44318020[source]
If "AI" worked (which fortunately isn't the case), humans would be degraded to passive consumers in the last domain in which they were active creators: thinking.

Moreover, you would have to pay centralized corporations that stole all of humanity's intellectual output for engaging in your profession. That is terrifying.

The current reality is also terrifying: mediocre developers are enabled to produce 10x the volume (not quality). Mediocre execs like that and force everyone to use the "AI" snakeoil. The profession becomes even more bureaucratic, tool-oriented and soulless.

People without a soul may not mind.

replies(1): >>44319044 #
9. notTooFarGone ◴[] No.44318041[source]
I can recognize images in one look.

How about that 400-line change that touches 7 files?

replies(3): >>44318098 #>>44318227 #>>44318814 #
10. koakuma-chan ◴[] No.44318098{3}[source]
In my prompt I ask the LLM to write a short summary of how it solved the problem. I run multiple instances of the LLM concurrently, compare their summaries, and use the output of whichever one seems to have interpreted the instructions best, or arrived at the best solution.
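
A rough sketch of that workflow in Python; `call_llm` is a hypothetical stand-in for whatever model client is actually used, and the prompt wording is illustrative:

    from concurrent.futures import ThreadPoolExecutor

    N = 4
    PROMPT = (
        "Fix the failing date parser (details elided). "
        "End your reply with a 'SUMMARY:' section describing "
        "how you solved the problem."
    )

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder: replace with a real API call.
        return "...full reply with diff...\nSUMMARY: placeholder"

    def summary_of(reply: str) -> str:
        return reply.split("SUMMARY:")[-1].strip()

    # Run N independent attempts concurrently; print just the summaries
    # so a human (or another model) can pick one attempt to review in full.
    with ThreadPoolExecutor(max_workers=N) as pool:
        replies = list(pool.map(call_llm, [PROMPT] * N))

    for i, reply in enumerate(replies):
        print(f"candidate {i}: {summary_of(reply)}")
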
replies(1): >>44318584 #
11. deadbabe ◴[] No.44318110[source]
It is not. The right way to work with generative AI is to get the right answer in the first shot. But it's the AI that is not living up to this promise.

Reviewing 4 different versions of AI code is grossly unproductive. A human co-worker can submit one version of code and usually have it accepted with a single review, no other "versions" to verify. With 4 versions, 75% of the code you review gets thrown away. Multiply this across every change ever made to a code base, and you're wasting a shitload of time.

replies(2): >>44318128 #>>44318662 #
12. koakuma-chan ◴[] No.44318128{3}[source]
> Reviewing 4 different versions of AI code is grossly unproductive.

You can have another AI do that for you. I review manually for now though (summaries, not the code, as I said in another message).

13. osigurdson ◴[] No.44318175[source]
I'm not sure that AI code has to be sloppy. I've had some success with hand-coding some examples and then asking Codex to rigorously adhere to prior conventions. This can end up with very self-consistent code.

Agree though on the "pick the best PR" workflow. This is pure model training work and you should be compensated for it.

replies(1): >>44318275 #
14. abdullin ◴[] No.44318227{3}[source]
Exactly!

This is why there has to be a "write me a detailed implementation plan" step in between. Which files is it going to change and how, what are the gotchas, which tests will be affected or added, etc.

It is easier to review one document and point out missing bits than to chase loose ends.

Once the plan is done and good, it is usually a smooth path to the PR.
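
One hypothetical shape of such a planning prompt (illustrative wording, not quoted from anyone in the thread):

    Before writing any code, produce an implementation plan covering:
    1. Which files you will change, and how.
    2. Gotchas and risky interactions with existing code.
    3. Which tests will be affected or added.
    Wait for the plan to be approved before implementing anything.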

replies(1): >>44318795 #
15. ponector ◴[] No.44318235[source]
>That sounds awful.

Not for the cloud provider. AWS bill to the moon!

16. elif ◴[] No.44318275[source]
Yep this is what Andrej talks about around 20 minutes into this talk.

You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail. The second you start being vague, even if it WOULD be clear to a person with common sense, the LLM views that vagueness as a potential aspect of its own creative liberty.

replies(6): >>44318409 #>>44318439 #>>44318599 #>>44318670 #>>44319080 #>>44323353 #
17. xphos ◴[] No.44318310[source]
"If the only tool you have is a hammer, you tend to see every problem as a nail."

I think the world is leaning dangerously into LLMs, expecting them to solve every problem under the sun. Sure, AI can solve problems, but the body of new knowledge in the world (domain 1 that Karpathy shows) doesn't grow with LLMs and agents. Maybe generation and selection is the best method for working with domains 2/3, but there is something fundamentally lost in the rapid embrace of these AI tools.

A true challenge question for people: would you give up 10 points of IQ for access to the next-gen AI model? I don't ask this in the sense that AI makes people stupid, but because it frames the value of intelligence as something you have, rather than as how quickly you can look up or generate an answer that may or may not be correct. How we use our tools deeply shapes what we will do in the future. A cautionary tale is US manufacturing of precision tools, where we gave up on teaching people how to use lathes because they could simply run CNC machines instead. Now that industry has an extreme shortage of CNC programmers, making it impossible to keep up with other precision-instrument-producing countries. This is of course a normative statement with more complex variables, but I fear that in this dead-set charge toward AI we will lose sight of what makes programming languages, and programming in general, valuable.

18. jebarker ◴[] No.44318409{3}[source]
> the LLM views that vagueness as a potential aspect of it's own creative liberty.

I think that anthropomorphism actually clouds what’s going on here. There’s no creative choice inside an LLM. More description in the prompt just means more constraints on the latent space. You still have no certainty whether the LLM models the particular part of the world you’re constraining it to in the way you hope it does though.

19. abdullin ◴[] No.44318438{3}[source]
It is actually funny that current AI+Coding tools benefit a lot from domain context and other information along the lines of Domain-Driven Design (which was inspired by the pattern language of C. Alexander).

A few teams have started incorporating `CONTEXT.MD` into module descriptions to leverage this.
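
For illustration only, such a file might look like this (entirely hypothetical contents; conventions vary by team):

    # CONTEXT.MD -- billing module (hypothetical example)
    Purpose: invoice generation and tax calculation.
    Ubiquitous language: Invoice, LineItem, TaxRule (see glossary).
    Invariants: money is stored in minor units (integer cents).
    Don't: use floating-point for money, or call the payments module directly.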

20. 9rx ◴[] No.44318439{3}[source]
> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail.

If only there were a language one could use to describe all of your requirements in an unambiguous manner, ensuring that you have provided all the necessary detail.

Oh wait.

21. elt895 ◴[] No.44318584{4}[source]
And you trust that the summary matches what was actually done? Your experience with the level of LLMs' understanding of code changes must significantly differ from mine.
replies(1): >>44318628 #
22. joshuahedlund ◴[] No.44318599{3}[source]
> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail

I understand YMMV, but I have yet to find a use case where this takes me less time than writing the code myself.

23. chamomeal ◴[] No.44318625[source]
I say this all the time!

Does anybody really want to be an assembly line QA reviewer for an automated code factory? Sounds like shit.

Also I can’t really imagine that in the first place. At my current job, each task is like 95% understanding all the little bits, and then 5% writing the code. If you’re reviewing PRs from a bot all day, you’ll still need to understand all the bits before you accept it. So how much time is that really gonna save?

replies(1): >>44319089 #
24. koakuma-chan ◴[] No.44318628{5}[source]
It has matched every time so far.

25. RHSeeger ◴[] No.44318662{3}[source]
That's not really comparing apples to apples though.

> A human co-worker can submit one version of code and usually have it accepted with a single review, no other "versions" to verify.

But that human co-worker spent a lot of time generating what is being reviewed. You're trading "time saved coding" for "more time reviewing". You can't complain about the added time reviewing and then ignore all the time saved coding. That's not to say it's necessarily a win, but it _is_ a tradeoff.

Plus, that co-worker may very well have spent some time discussing various approaches to the problem (with you), which is somewhat parallel to the idea of reviewing 4 different PRs.

26. SirMaster ◴[] No.44318670{3}[source]
I'm really waiting for AI to get on par with the common sense of most humans in their respective fields.
replies(1): >>44318737 #
27. diggan ◴[] No.44318737{4}[source]
I think you'll be waiting for a very long time. Right now we have programmable LLMs, so if you're not getting the results, you need to reprogram them to give the results you want.
28. bayindirh ◴[] No.44318795{4}[source]
So you can create buggier code, remixed from scraped bits of the internet, which you don't understand but which somehow works, rather than creating higher-quality, tighter code which takes the same amount of time to type? All the while offloading the work to something else so your skills can atrophy at the same time?

Sounds like progress to me.

replies(1): >>44322806 #
29. mistersquid ◴[] No.44318814{3}[source]
> I can recognize images in one look.

> How about that 400 Line change that touches 7 files?

Karpathy discusses this discrepancy. In his estimation, LLMs do not yet have a UI even as mature as the 1970s CLI. Today, LLMs output text, and text does not leverage the human brain’s ability to ingest visually coded information, literally, at a glance.

Karpathy surmises UIs for LLMs are coming and I suspect he’s correct.

replies(1): >>44319905 #
30. layer8 ◴[] No.44318876[source]
The problem is that, for any change, you have to understand the existing code base to assess the quality of the change across the four tries. This means you aren’t relieved of being familiar with the code and reviewing everything. For many developers this review-only work style isn’t an exciting prospect.

And it will remain that way until you can delegate development tasks to AI with a 99+% success rate, so that you don’t have to review its output and understand the code base anymore. At which point developers will become truly obsolete.

31. diggan ◴[] No.44319044{3}[source]
> If "AI" worked (which fortunately isn't the case), humans would be degraded to passive consumers in the last domain in which they were active creators: thinking.

"AI" (depending on what you understand that to be) is already "working" for many, including myself. I've basically stopped using Google because of it.

> humans would be degraded to passive consumers in the last domain in which they were active creators: thinking

Why? I still think (I think at least), why would I stop thinking just because I have yet another tool in my toolbox?

> you would have to pay centralized corporations that stole all of humanity's intellectual output for engaging in your profession

Assuming we'll forever be stuck in the "mainframe" phase, then yeah. I agree that local models aren't really close to SOTA yet, but the ones you can run locally can already be useful in a couple of focused use cases, and judging by the speed of improvements, we won't always be stuck in this mainframe-phase.

> Mediocre developers are enabled to have a 10x volume (not quality).

In my experience, which has admittedly been mostly in startups and smaller companies, this has always been the case. Most developers seem to prefer producing MORE code over BETTER code; I'm not sure why that is, but I don't think LLMs will change people's minds about this, in either direction. Shitty developers will be shit, with or without LLMs.

replies(1): >>44322276 #
32. diggan ◴[] No.44319060{3}[source]
> LLMs don’t know what it is

Of course they don't; they're probability/prediction machines. They don't "know" anything, not even that Paris is the capital of France. What they do "know" is that once someone writes "The capital of France is", the most likely token to come after that is "Paris". But they don't understand the concept, nor anything else, just that token 54123 probably comes after token 6723 (or whatever the tokens are).

Once you understand this, I think it's easy to reason about why they don't understand code quality, why they couldn't ever understand it, and how you can make them output quality code regardless.
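
A toy illustration of that point in Python (the token strings and logit values are made up for the example):

    import math

    # Pretend next-token scores after "The capital of France is".
    logits = {" Paris": 9.1, " Lyon": 3.2, " the": 2.8}

    def softmax(scores: dict) -> dict:
        m = max(scores.values())
        exps = {t: math.exp(s - m) for t, s in scores.items()}
        z = sum(exps.values())
        return {t: e / z for t, e in exps.items()}

    probs = softmax(logits)
    # " Paris" wins on probability alone; no concept of France involved.
    print(max(probs, key=probs.get))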

33. pja ◴[] No.44319080{3}[source]
> You have to be extremely verbose in describing all of your requirements. There is seemingly no such thing as too much detail.

Sounds like ... programming.

Program specification is programming, ultimately. For any given problem, if you’re lucky, the specification is concise and uniquely defines the required program. If you’re unlucky, the spec ends up longer than the code you’d write to implement it, because the language you’re writing it in is less suited to the problem domain than the actual code.

replies(1): >>44323942 #
34. diggan ◴[] No.44319089[source]
> Does anybody really want to be an assembly line QA reviewer for an automated code factory? Sounds like shit.

On the other hand, does anyone really wanna be a code-monkey implementing CRUD applications over and over by following product specifications by "product managers" that barely seem to understand the product they're "managing"?

See, we can make bad faith arguments both ways, but what's the point?

replies(2): >>44319854 #>>44323231 #
35. solaire_oa ◴[] No.44319323{3}[source]
Making 4 PRs for a well-known solution sounds insane, yes, but to play devil's advocate, you could plausibly be working with an ambiguous task: "Create 4 PRs with 4 different dependency libraries, so that I can compare their implementations." Technically, you wouldn't need to pick the best one.

I have apprehension about the future of software engineering, but comparison does technically seem like a valid use case.

36. solaire_oa ◴[] No.44319399[source]
Top-tier professional programmer quality is exceedingly, impractically optimistic, for a few reasons.

1. There's a low probability of that in the first place.

2. You need to be a top-tier professional programmer to recognize that type of quality (e.g. a junior engineer could select one of the 3 shit PRs)

3. When it doesn't produce TTPPQ, you've wasted tons of time prompting and reviewing shit code and still need to deliver: a net negative.

I'm not doubting the utility of LLMs but the scattershot approach just feels like gambling to me.

replies(1): >>44320025 #
37. nevertoolate ◴[] No.44319854{3}[source]
The issue is: if product people do the “coding” and you have to fix it, that's miserable
replies(1): >>44320383 #
38. variadix ◴[] No.44319905{4}[source]
The thing required isn’t a GUI for LLMs, it’s a visual model of code that captures all the behavior and is a useful representation to a human. People have floated this idea since before LLMs, but as far as I know there isn’t any real progress, probably because it isn’t feasible. There’s so much intricacy and detail in software (and getting it even slightly wrong can be catastrophic) that any representation able to capture said detail isn’t going to be interpretable at a glance.
replies(2): >>44320927 #>>44322430 #
39. zelphirkalt ◴[] No.44320025{3}[source]
Also, as a consequence of (1), LLMs are mostly trained on mediocre code, so they often output mediocre or bad solutions.
40. diggan ◴[] No.44320383{4}[source]
Even worse would be if we asked the accountants to do the coding, then you'll learn what miserable means.

What was the point again?

replies(1): >>44320556 #
41. nevertoolate ◴[] No.44320556{5}[source]
Yes
42. mistersquid ◴[] No.44320927{5}[source]
> The thing required isn’t a GUI for LLMs, it’s a visual model of code that captures all the behavior and is a useful representation to a human.

The visual representation that would be useful to humans is what Karpathy means by “GUI for LLMs”.

43. zelphirkalt ◴[] No.44322276{4}[source]
AI, as it currently is, will not come up with that new app idea or that clever, innovative way of implementing an application. It will endlessly rehash the training data it has ingested. Sure, you can tell an AI to spit out a CRUD app, and maybe it will even eventually work in some sane way, but that's not innovative and not necessarily good software. It is blindly copying existing approaches to implement something. The result may even work, but it lacks any special sauce to make it special.

Example: I am currently building a web app. My goal is to keep it entirely static, traditional template rendering, just using the web as a GUI framework. If I had just told the AI to build this, it would have thrown tons of JS at the problem, because that is what the mainstream does these days, and what it mostly saw as training data. Then my back button would most likely no longer work, I would not be able to use bookmarks properly, it would not automatically have an API as powerful as the web UI, usable from any script, and the whole thing would have gone to shit.

If the AI tools were as good as I am at what I am doing, and I relied upon that, then I would not have spent time thinking through the principles of my app, as I did when coming up with it myself. As it is now, the AI would not even have managed to prevent duplicate results from showing up in the UI: I had a GPT-4 session about how to prevent that, none of the suggested answers worked, and in the end I did what I had thought I might have to do when I first discovered the issue.

replies(1): >>44322973 #
44. skydhash ◴[] No.44322430{5}[source]
There’s no visual model for code, as code isn’t 2D. There are two mechanisms in the Turing machine model: a state machine and a linear representation of code and data. The 2D representation of the state machine has no significance, and the linear aspect of code and data hides more dimensions. We have invented more abstractions, but nothing that maps to a visual representation.
45. abdullin ◴[] No.44322806{5}[source]
Here is another way to look at the problem.

There is a team of 5 people that are passionate about their indigenous language and want to preserve it from disappearing. They are using AI+Coding tools to:

(1) Process and prepare a ton of various datasets for training custom text-to-speech, speech-to-text models and wake word models (because foundational models don't know this language), along with the pipelines and tooling for the contributors.

(2) design and develop an embedded device (running ESP32-S3) to act as a smart speaker running on the edge

(3) design and develop backend in golang to orchestrate hundreds of these speakers

(4) a whole bunch of Python agents (essentially glorified RAGs over folklore, stories)

(5) a set of websites for teachers to create course content and exercises, making them available to these edge devices

All that, just so that kids in a few hundred kindergartens and schools would be able to practice their own native language, listen to fairy tales, songs or ask questions.

This project was acknowledged by the UN (AI for Good programme). They are now extending their help to more disappearing languages.

None of that was possible before. This sounds like a good progress to me.

Edit: added newlines.

replies(1): >>44325990 #
46. diggan ◴[] No.44322973{5}[source]
> The AI as it is currently, will not come up with that new app idea or that clever innovative way of implementing an application

Who has claimed that they can do that sort of stuff? I don't think my comment hints at that, nor does the talk in the submission.

You're absolutely right with most of your comment, and seem to just be rehashing what Karpathy talks about but with different words. Of course it won't create good software unless you specify exactly what "good software" is for you, and tell it that. Of course it won't know you want "traditional static template rendering" unless you tell it to. Of course it won't create an API you can use from anywhere unless you say so. Of course it'll follow what's in the training data. Of course things won't automatically implement whatever you imagine your project should have, unless you tell it about those features.

I'm not sure if you're just expanding on the talk but chose my previous comment to attach it to, or if you're replying to something I said in my comment.

47. consumer451 ◴[] No.44323231{3}[source]
I hesitate to divide a group as diverse as software devs into two categories, but here I go:

I have a feeling that devs who love LLM coding tools are more product-driven than those who hate them.

Put another way, maybe devs with their own product ideas love LLM coding tools, while devs without them do not.

I am genuinely not trying to throw shade here in any way. Does this rough division ring true to anyone else? Is there any better way to put it?

replies(1): >>44427318 #
48. throw234234234 ◴[] No.44323353{3}[source]
I've found myself personally thinking English is OK when I'm happy with a "lossy expansion" and don't need every single detail defined (i.e. the tedious boilerplate, or templating kind of code). After all, to me an LLM can be seen as a lossy compression of actual detailed examples of working code - why not "uncompress" it and let it fill in the gaps? As an example, I may want a UI to render some data but not be fussed about the details of it; I don't want to specify the exact coordinates of each button, etc.

However, when I want detailed changes, I find it more troublesome at present than just typing the code myself, i.e. I know exactly what I want and I can express it just as easily (sometimes more easily) in code.

I personally find AI, in some ways, a generic DSL. The more I have to define and the more specific I have to be, the more I start to evaluate code or DSLs as potentially more appropriate tools, especially when the details DO matter for quality/acceptance.

49. longhaul ◴[] No.44323942{4}[source]
Agreed. I used to say that documenting a program precisely and comprehensively ends up being code. We either need a DSL that can specify at a higher level, or domain-specific LLMs.
50. bayindirh ◴[] No.44325990{6}[source]
What you are describing is another application. My comment was squarely aimed at "vibe coding".

Protecting and preserving dying languages and culture is a great application for natural language processing.

For the record, I'm neither against LLMs, nor AI. What I'm primarily against is how LLMs are trained and how they use the internet via their agents: without giving any citations, stripping this information left and right, and crying "fair use!" in the process.

Also, Go and Python are nice languages (which I use), but there are other nice ways to build agents which also allow them to migrate, communicate and work in other cooperative or competitive ways.

So, AI is nice and LLMs are cool, but hyping something to earn money and deskill people, and pointing to something ethically questionable and technically inferior as the only silver bullet, is not.

IOW: we should handle this thing way more carefully and stop ripping off people's work in the name of "fair use" without consent. This is nuts.

Disclosure: I'm a HPC sysadmin sitting on top of a datacenter which runs some AI workloads, too.

replies(1): >>44336695 #
51. abdullin ◴[] No.44336695{7}[source]
I think there are two different layers that get frequently mixed.

(1) LLMs as models - just the weights and an inference engine. These are just tools, like hammers. There is a wide variety of models, from transparent and useless IBM Granite models, to open-weights Llama/Qwen, to proprietary ones.

(2) AI products that are built on top of LLMs (agents, RAG, search, reasoning etc). This is how people decide to use LLMs.

How these products display results - with or without citations, with or without attribution - is determined by the product design.

It takes more effort to design a system that properly attributes all bits of information to the sources, but it is doable. As long as product teams are willing to invest that effort.

52. chamomeal ◴[] No.44427318{4}[source]
No, I think that’s accurate! But maybe instead of “devs who think about product stuff vs devs who don’t”, it depends on what hat you’re wearing.

When I’m working on something that I just want it to work, I love using LLMs. Shell functions for me to stuff into my config and use without ever understanding, UI for side projects that I don’t particularly care about, boilerplate nestjs config crap. Anything where all I care about is the result, not the process or the extensibility of the code: I love LLMs for that stuff.

When it’s something that I’m going to continue working on for a while, or the whole point is the extensibility/cleanliness of the program, I don’t like to use LLMs nearly as much.

I think it might be because most codebases are built with two purposes: 1) to be used as a product, and 2) to be extended and turned into something else.

LLMs are super good at the first purpose, but not so good at the second.

I heard an interesting interview on the playdate dev podcast by the guy who made Obra Dinn. He said something along the lines of “making a game is awesome because the code can be horrible. All that matters is that the game works and is fun, and then you are done. It can just be finished, and then the code quality doesn’t matter anymore.”

So maybe LLMs are just really good for when you need something specific to work, and the internals don’t matter too much. Which are more the values of a product manager than a developer.

So it makes sense that when you are thinking more product-oriented, LLMs are more appealing!