490 points by todsacerdoti | 35 comments
benlivengood ◴[] No.44383064[source]
Open source and libre/free software are particularly vulnerable to a future where AI-generated code is ruled to be either infringing or public domain.

In the former case, disentangling AI edits from human edits could tie a project up in legal proceedings for years, and projects don't have any funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated into the rest of the codebase would raise the question of whether the subsequent human edits were non-fair-use derivative works.

In the latter case, the license restrictions no longer apply to portions of the codebase, raising similar issues for derived code; a project that is only 98% OSS/FS-licensed suddenly has much less leverage in takedowns against companies abusing the license terms, since it has to prove that infringers are definitely using the human-generated and licensed code.

Proprietary software is only mildly harmed in either case; it would require speculative copyright owners to disassemble the proprietary binaries and try to make the case that AI-generated code infringed, without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.

replies(8): >>44383156 #>>44383218 #>>44383229 #>>44384184 #>>44385081 #>>44385229 #>>44386155 #>>44387156 #
AJ007 ◴[] No.44383229[source]
I understand why experienced developers don't want random AI contributions from no-knowledge "developers". In any situation, if a human had to review AI code line by line, that would tie up humans for years, even ignoring any legal questions.

#1 There will be no verifiable way to prove something was AI-generated, beyond the early models.

#2 Software projects that are somehow 100% human-developed will not be competitive with AI-assisted or AI-written projects. The only room for debate on that is an apocalypse-level scenario where humans fail to continue producing semiconductors or electricity.

#3 If a project successfully excludes AI contributions (it's not clear how, other than restricting contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.

There still is a path for open source projects. It will be different. There's going to be much, much more software in the future, and it's not going to be all junk (although 99% of it might be).

replies(16): >>44383277 #>>44383278 #>>44383309 #>>44383367 #>>44383381 #>>44383421 #>>44383553 #>>44383615 #>>44383810 #>>44384306 #>>44384448 #>>44384472 #>>44385173 #>>44386408 #>>44387925 #>>44389059 #
amake ◴[] No.44383278[source]
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects

Still waiting to see evidence of AI-driven projects eating the lunch of "traditional" projects.

replies(4): >>44383368 #>>44383382 #>>44383858 #>>44386542 #
viraptor ◴[] No.44383368[source]
It's happening slowly all around. It's not obvious, because people producing high-quality stuff have no incentive at all to mark their changes as AI-generated. But there are also local tools being generated faster than you could adjust existing tools to do what you want. I'm running 3 things right now, just for myself, that I generated from scratch instead of trying to send feature requests to existing apps I can buy.

It's only going to get more pervasive from now on.

replies(2): >>44383499 #>>44384560 #
alganet ◴[] No.44383499[source]
Can you show these 3 things to us?
replies(4): >>44383630 #>>44383710 #>>44383844 #>>44384062 #
1. WD-42 ◴[] No.44383630[source]
For some reason these fully functional AI-generated projects that the authors vibe out while playing guitar and clipping their toenails are never open source.
replies(6): >>44383999 #>>44384026 #>>44384847 #>>44385049 #>>44386161 #>>44387603 #
2. dcow ◴[] No.44383999[source]
Except this one is (see your sibling).
3. fc417fc802 ◴[] No.44384026[source]
> the authors vibe out while playing guitar and clipping their toenails

I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them, how would anyone know? Assuming, at least, that you are an otherwise competent developer and that you carefully review all code before you commit it.

The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.

replies(2): >>44384302 #>>44387261 #
4. EGreg ◴[] No.44384302[source]
Why would you need to carefully review code? That is so 2024. You’re bottlenecking the process and are at a disadvantage when the AI could be working 24/7. We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

AI “assistance” is a short intermediate phase, like the “centaurs” that Garry Kasparov was very fond of (a human + computer team beat both a human alone and a computer alone… until the computer alone became better).

https://en.wikipedia.org/wiki/Advanced_chess

replies(1): >>44384547 #
5. amake ◴[] No.44384547{3}[source]
> We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

Was your comment tongue-in-cheek? If not, where is this huge mass of AI-generated software?

replies(1): >>44384719 #
6. rvnx ◴[] No.44384719{4}[source]
All around you; it's just that it doesn't make sense for developers to reveal that a lot of their work is now about chunking and refining the specifications written by the product owner.

Admitting such is like admitting you are overpaid for your job, and that a 20 USD AI agent can do 75% of the work better and faster than you.

Is it easy to admit that the skills you have spent 10+ years learning are progressively being replaced by a machine (like thousands of jobs in the past)?

More and more, being a developer is going to be a monkey job where your only task is to make sure there is enough coal in the steam engine.

Compilers destroyed the jobs of developers writing assembler code; they had to adapt, even as they insisted that hand-written assembler was better.

It's the same here, except you write code in natural language. It may not be optimal in all situations, but it often gets the job done.

replies(3): >>44384964 #>>44385243 #>>44385508 #
7. bredren ◴[] No.44384847[source]
Mine is. And it is awesome: https://github.com/banagale/FileKitty

The most recent release includes a macOS build in a DMG signed by Apple: https://github.com/banagale/FileKitty/releases/tag/v0.2.3

I vibed that workflow just so more people could have access to this tool. It was a pain and it actually took time away from toenail clipping.

And while I didn't lay hands on a guitar much during this period, I did manage to build this while bouncing between playing Civil War tunes on a 3D-printed violin and generating music in Suno for a soundtrack to “Back on That Crust,” the missing and one true spiritual successor to ToeJam & Earl: https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb

replies(1): >>44385810 #
8. bonzini ◴[] No.44384964{5}[source]
Good luck debugging
9. TeMPOraL ◴[] No.44385049[source]
Going by the standard of "But there are also local tools generated faster than you could adjust existing tools to do what you want", here's a random one of mine that's in regular use by my wife:

https://github.com/TeMPOraL/qr-code-generator

Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro (I forgot to note that down in this project), and recently modified with Claude Code because I had to test it on something.

Getting the first version of this up was literally both faster and easier than finding a QR code generator that I'm sure is not bloated, not bullshit, not loaded with trackers, not using shorteners or its own URL (it's always a stupid idea to use URL shorteners you don't control), not showing ads, not mining bitcoin and shit; one that my wife can use in her workflow without being distracted too much. Static page, domain I own, a bit of fiddling with LLMs.
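
For a sense of scale, the whole page boils down to roughly this shape (a sketch of the idea, not the actual source; it assumes a small canvas-based QR library like qrious and its documented QRious constructor):

  // Sketch only, not the project's real code. Assumes qrious is loaded via a
  // <script> tag (exposing a global QRious) and that the page has an
  // <input id="url"> and a <canvas id="qr">.
  const input = document.getElementById('url');
  const qr = new QRious({
    element: document.getElementById('qr'), // canvas the library draws into
    size: 256,                              // QR code size in pixels
  });
  input.addEventListener('input', () => {
    qr.value = input.value; // per qrious's docs, changing a property regenerates the code
  });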

What I can't link to is half a dozen single-use tools or faux tools created on the fly as part of working on something. But this happens to me a couple of times a month.

To anchor another vertex in this parameter space, I found it easier and faster to ask an LLM to build me a "breathing timer" (one that counts down N seconds and resets, repeatedly) with an analog indicator than to search for one, because the search query to Google/Kagi would be of comparable length, and then I'd still have to click through results!

EDIT: Okay, another example:

https://github.com/TeMPOraL/tampermonkey-scripts/blob/master...

It overlays a trivial UI to set up looping over a segment of any YouTube video, and automatically persists the setting by video ID. It solves the trivial annoyance of channel jingles and other bullshit at start/end of videos that I use repeatedly as background music.

This was mostly done zero-shot by Claude, with maybe two or three requests for corrections/extra features, total development time maybe 15 minutes. I use it every day all the time ever since.
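
The core logic is small enough to sketch here (not the actual script; the storage key format is made up, and it assumes YouTube's single <video> element and its ?v= query parameter):

  // Rough sketch of the idea, not the real userscript.
  const videoId = new URLSearchParams(location.search).get('v');
  const key = 'loop:' + videoId;                                 // made-up key format
  const saved = JSON.parse(localStorage.getItem(key) || 'null'); // e.g. {start: 12, end: 95}
  const video = document.querySelector('video');
  if (video && saved) {
    video.addEventListener('timeupdate', () => {
      // Jump back to the loop start whenever playback passes the loop end.
      if (video.currentTime >= saved.end) video.currentTime = saved.start;
    });
  }
  // The overlay UI would then persist a new range with something like:
  function setLoop(start, end) {
    localStorage.setItem(key, JSON.stringify({ start, end }));
  }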

You could say, "but SponsorBlock" or whatever, but per what GP wrote, I just needed a small fraction of functionality of the tools I know exist, and it was trivial to generate that with AI.

replies(1): >>44385640 #
10. amake ◴[] No.44385243{5}[source]
> All around you, just that it doesn’t make sense for developers to reveal that

OK, but I asked for evidence and people just keep not providing any.

"God is all around you; he just works in mysterious ways"

OK, good luck with that.

replies(2): >>44386914 #>>44388626 #
11. alganet ◴[] No.44385508{5}[source]
I have a complete proof that P=NP but it doesn't make sense to reveal to the world that now I'm god. It would crush their little hearts.
replies(1): >>44386572 #
12. alganet ◴[] No.44385640[source]
Your QR generator is actually a repackaging of a project written by humans:

https://github.com/neocotic/qrious

All the hard work was done by humans.

I can do `npm install` without having to pay for AI, thanks.

replies(1): >>44386366 #
13. fingerlocks ◴[] No.44385810[source]
This app is concatenating files with an extra line of metadata added? You know this could be done in a few lines of shell script? You could then make it a Finder action extension so it's part of the system file manager app.
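
Something in this spirit, say (a rough sketch, written in Node rather than shell to match the other snippets in this thread; the header format is made up):

  // Rough sketch: concatenate the given files, each preceded by one line of metadata.
  const fs = require('fs');
  for (const file of process.argv.slice(2)) {
    console.log(`--- ${file} ---`);                  // made-up header format
    process.stdout.write(fs.readFileSync(file, 'utf8'));
  }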
replies(2): >>44386131 #>>44387628 #
14. pwm ◴[] No.44386131{3}[source]
Sic transit gloria mundi
15. Philpax ◴[] No.44386161[source]
Here's Armin Ronacher, from this week, describing the open-source "sloppy XML" parser that he had AI write under his guidance: https://lucumr.pocoo.org/2025/6/21/my-first-ai-library/
replies(1): >>44387171 #
16. ben_w ◴[] No.44386366{3}[source]
I am reminded of a meme about musicians. I don't remember it well enough to find it, but it was something like this:

  Real musicians don’t mix loops they bought.
  Real musicians make their own synth patches.
  Real musicians build their own instruments.
  Real musicians hand-forge every metal component in their instruments.
  …
  They say real musicians raise goats for the leather for the drum-skins, but I wouldn't know because I haven’t made any music in months and the goats smell funny.
There are two points here:

1) even though most people on here know what npm is, many of us are not web developers and don't really know how to turn a random package into a useful webapp.

2) The AI is faster than googling a finished product that already exists, not just as an NPM package, but as a complete website.

Especially because search results require you to go through all the popups everyone stuffs everywhere (cookie banners, ads) before you even find out whether the website you went to first was actually a scam that doesn't do the right thing (or perhaps *anything*) anyway.

It is also, for many of us, the same price: free.

replies(2): >>44387121 #>>44387137 #
17. ben_w ◴[] No.44386572{6}[source]
P = NP is less "crush their little hearts", more "may cause widespread heart attacks across every industry due to cryptography failing, depending on whether the polynomial exponent is small enough".
replies(1): >>44386766 #
18. Dylan16807 ◴[] No.44386766{7}[source]
A very very big if.

Also a sufficiently good exponential solver would do the same thing.

19. rvnx ◴[] No.44386914{6}[source]
Billions of people believe in god(s); in fact, 75 to 85% of the world population does.
replies(3): >>44386983 #>>44387302 #>>44388915 #
20. amake ◴[] No.44386983{7}[source]
And?
replies(1): >>44389395 #
21. latexr ◴[] No.44387121{4}[source]
> I am reminded of a meme about musicians. Not well enough to find it

You only need to search for “loops goat skin”. You’re butchering the quote and its meaning quite a bit. The widely circulated version is:

> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.

It’s not about “real musicians”¹ but a personal reflection on dependencies and abstractions and the nature of creative work and remixing. Your interpretation of it is backwards.

¹ https://en.wikipedia.org/wiki/No_true_Scotsman

22. alganet ◴[] No.44387137{4}[source]
Ice Ice Baby taking the bass riff from Under Pressure is sampling. Making a cover is covering. Milli Vanilli is yet another, completely different situation.

I am sorry, but none of your points land. It makes no sense.

The LLM work sounds dumb, and the suggestion that it made "a QR code generator" is disingenuous. The LLM barely did a frontend for it. Barely.

Regarding the "free" price, read the comment I replied on again:

> Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro

Paid tools.

It sounds like the author paid for `npm install`, and thinks he's on top of things and being smart.

replies(1): >>44390587 #
23. latexr ◴[] No.44387171[source]
> To be clear: this isn't an endorsement of using models for serious Open Source libraries. This was an experiment to see how far I could get with minimal manual effort, and to unstick myself from an annoying blocker. The result is good enough for my immediate use case and I also felt good enough to publish it to PyPI in case someone else has the same problem.

By their own admission, this is just kind of OK. They don’t even know how good or bad it is, just that it kind of solved an immediate problem. That’s not how you create sustainable and reliable software. Which is OK, sometimes you just need to crap something out to do a quick job, but that doesn’t really feel like what your parent comment is talking about.

24. latexr ◴[] No.44387261[source]
> Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.

That is a big assumption. If everyone were doing that, this wouldn’t be a major issue. But as the curl developer has noted, people are using LLMs without thinking and wasting everyone’s time and resources.

https://www.linkedin.com/posts/danielstenberg_hackerone-curl...

I can attest to that. Just the other day I got a bug report, clearly written with the assistance of an LLM, for software which has been stable and used in several places for years. This person, when faced with an error on their first try, instead of pondering “what am I doing wrong?” opened a bug report with a “fix”. Of course, they were using the software wrong. They did not follow the very short and simple instructions and essentially invented steps (probably suggested by an LLM) that caused the problem.

Waste of time for everyone involved, and one more notch on the road to causing burnout. Some of the worst kind of users are those who think “bug” means “anything which doesn’t immediately behave the way I thought it would”. LLMs empower them, to the detriment of everyone else.

replies(1): >>44389135 #
25. latexr ◴[] No.44387302{7}[source]
And not that long ago, the majority of the population believed the Earth was flat and that cigarettes were good for your health. Radioactive toys were being sold to children.

Wide belief does not equal truth.

26. irthomasthomas ◴[] No.44387603[source]
My llm-consortium project was vibe-coded. There are some notes on how I did that in the announcement tweet, if you click through: https://x.com/karpathy/status/1870692546969735361
27. bredren ◴[] No.44387628{3}[source]
The parent claim was that devs don’t open-source their personal AI tools. FileKitty is mine and it is MIT-licensed on GitHub.

It began as an experiment in AI-assisted app design and a cross-platform “cat these files” utility.

Since then it has picked up:

- Snapshot history (and change flags) for any file selection

- A rendered folder tree that LLMs can digest, with per-prompt ignore filters

- String-based ignore rules for both tree and file output, so prompts stay surgical

My recent focus is making that generated context modular, so additional inputs (logs, design docs, architecture notes) can plug in cleanly. Apple’s new on-device foundation models could pair nicely with that.

The bigger point: most AI tooling hides the exact nature of context. FileKitty puts that step in the open and keeps the programmer in the loop.

I continue to believe LLMs can solve big problems with appropriate context, and that intentionality in context prep is an important step in evaluating ideas and implementation suggestions found in LLM outputs.

There's a Homebrew build available and I'd be happy to take contributions: https://github.com/banagale/FileKitty

28. alganet ◴[] No.44388915{7}[source]
Billions of people _say_ they believe in god. It's very different.

--

When you analyze church attendance, it drops to roughly 50% instead of 85% of the population:

https://en.wikipedia.org/wiki/Church_attendance#Demographics

If you start to investigate many aspects of religious belief, like how many Christians read the Bible, the numbers drop drastically to less than 15%:

https://www.statista.com/statistics/299433/bible-readership-...

This demonstrates that we cannot rely on self-reporting to understand religious belief. In practice, most people are closer to atheists than believers.

replies(1): >>44389320 #
29. fc417fc802 ◴[] No.44389135{3}[source]
Sure, I won't disagree that those people also exist, but I don't think that's who the claim is being made about. Pointing out that subpar developers exist doesn't refute that good ones exist.
30. fc417fc802 ◴[] No.44389320{8}[source]
That's rather silly. Neither of those things is a requirement for belief.
replies(1): >>44389506 #
31. fc417fc802 ◴[] No.44389395{8}[source]
Obviously it's the basis for a religion. We're to have faith in the ability of LLMs. To ask for evidence of that is to question the divine. You can ask a model itself for the relevant tenets pertaining to any given situation.
32. alganet ◴[] No.44389506{9}[source]
You can believe all you want, but practice is what actually matters.

It's the same thing with AI.

33. ben_w ◴[] No.44390587{5}[source]
> The LLM work sounds dumb, and the suggestion that it made "a qr code generator" is disingenuous. The LLM barely did a frontend for it. Barely.

Yes, and?

The goal wasn't "write me a QR library" it was "here's my pain point, solve it".

> It sounds like the author payed for `npm install`, and thinks he's on top of things and being smart.

I can put this another way if you prefer:

  Running `npm install qrious`: trivial.
  Knowing qrious exists and how to integrate it into a page: expensive.
https://www.snopes.com/fact-check/know-where-man/

> > Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro

> Paid tools.

I get Sonnet 4 for free at https://claude.ai — I know version numbers are weird in this domain, but I kinda expect that means Sonnet 3.5 was free at some point? Was it not? I mean, 3.7 is also a smaller version number but listed as "pro", so IDK…

Also I get Gemini 2.5 Pro for free at https://aistudio.google.com

Just out of curiosity, I've just tried using Gemini 2.5 Pro (for free) myself to try this. The result points to a CDN of qrcodejs, which I assume is this, but don't know my JS libraries so can't confirm this isn't just two different ones with the same name: https://github.com/davidshimjs/qrcodejs

My biggest issue with this kind of thing in coding is the same as my problem with libraries in general: you're responsible for the result even if you don't read what the library (/AI) is doing. So, I expect some future equivalent of the npm left-pad incident — memetic monoculture, lots of things fail at the same time.

replies(1): >>44390745 #
34. alganet ◴[] No.44390745{6}[source]
> Knowing qrious exists and how to integrate it into a page: expensive.

qrious literally has it integrated already:

https://github.com/davidshimjs/qrcodejs/blob/master/index.ht...

I see many issues. The main one is that none of this is relevant to the QEMU discussion. It's a whole other level of project.

I kind of regret asking the poor guy to show his stuff. None of these tutorial projects come even close to what an AI contribution to QEMU would look like. It's pointless.

replies(1): >>44390893 #
35. ben_w ◴[] No.44390893{7}[source]
The very first part of the quotation is "Knowing qrious exists".

So the fact they've already got the example is great if you do in fact already have that knowledge, and *completely useless* if you don't.