490 points todsacerdoti | 10 comments

benlivengood
Open source and libre/free software are particularly vulnerable to a future where AI-generated code is ruled to be either infringing or public domain.

In the former case, disentangling AI edits from human edits could tie a project up in legal proceedings for years, and projects don't have the funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated into the rest of the codebase would raise the question of whether the later human edits are non-fair-use derivative works.
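
Even tracking which commits were AI-assisted would depend on self-reporting. As a rough sketch, suppose contributors tagged such commits with an "Assisted-by:" trailer (that key is made up, not a git convention); splitting history on it is easy, but it only catches self-declared use:

    # Hypothetical audit: partition git history by a self-applied
    # "Assisted-by:" commit trailer. The trailer name is an assumption;
    # git log --grep is standard.
    import subprocess

    def commit_hashes(*extra_args):
        out = subprocess.run(
            ["git", "log", "--format=%H", *extra_args],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.split())

    tagged = commit_hashes("--grep=Assisted-by:")  # self-declared AI-assisted
    untagged = commit_hashes() - tagged            # everything else
    print(f"{len(tagged)} AI-assisted commits, {len(untagged)} untagged")

Anything beyond that self-declaration has no reliable mechanism at all.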

In the latter case, the license restrictions no longer apply to portions of the codebase, raising similar issues for derived code. A project that is only 98% OSS/FS-licensed suddenly has much less leverage in takedowns against companies abusing the license terms, since it has to prove that infringers are definitely using the human-generated, licensed code.

Proprietary software is only mildly harmed in either case; it would require would-be copyright claimants to disassemble the binaries and try to make the case that AI-generated code infringed, without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.

AJ007
I understand why experienced developers don't want random AI contributions from no-knowledge "developers" in a project. In any situation, having a human review AI code line by line would tie up humans for years, even ignoring the legal questions.

#1 There will be no verifiable way to prove something was AI-generated beyond the early models.

#2 Software projects that are somehow 100% human-developed will not be competitive with AI-assisted or AI-written projects. The only room for debate on that is an apocalypse-level scenario where humans fail to continue producing semiconductors or electricity.

#3 If a project successfully excludes AI contributions (it's not clear how, other than restricting contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.

There is still a path for open source projects. It will be different. There's going to be much, much more software in the future, and it's not going to be all junk (although 99% might be).

basilgohar
I feel like this is mostly a proofless assertion. I'm aware that what you hint at is happening, but the conclusions you arrive at are far from proven, or even reasonable, at this stage.

For what it's worth, I think AI for code will settle into the place other coding tools sit: hinting, IntelliSense, linting, maybe even static or dynamic analysis. But I doubt that NOT using AI will be a critical liability for productivity.

Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.

galangalalgol
I think that was his point: the project full of bad developers isn't the competition. It's a peer whose skill matches yours and who uses agents on top of that. By myself I am no match for myself + cline.

Retric
That’s true in the short term. Longer term it’s questionable, as using AI tools heavily means you don’t remember all the details, creating a new form of technical debt.

linsomniac
Dude, have you ever looked at code you wrote 6 months ago and gone "What was the developer thinking?" ;-)

CamperBob2
I don't need to remember much, really. I have tools for that.

Really, really good tools.

ringeryless
Yes, constantly. I also don't remember much of the contextual domain info for a given section of code once I'm about two weeks into delving into some other part of the same app.

So-called AI makes this worse.

Let me remind you of gyms, now that humans have been saved from so much manual activity...

linsomniac
> So-called AI makes this worse.

The AI tooling is also really, really good at piecing together the code, the contextual domain, the documentation, the tests, and the related issues/tickets. It can even take the change history into account, and it can help refresh your memory of unfamiliar code in the context of the bugs or changes you are looking at making.
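
For instance, here's a bare-bones sketch of that kind of context assembly (the repo layout and test_* naming convention are assumptions for illustration; the git flags are standard):

    # Bundle a file's source, its recent change history, and a sibling
    # test file into one context blob for whatever model you use.
    import pathlib
    import subprocess

    def build_context(path: str, history_limit: int = 5) -> str:
        src = pathlib.Path(path)
        parts = [f"## Source: {path}\n{src.read_text()}"]

        # Last few commits that touched this file, patches included.
        log = subprocess.run(
            ["git", "log", f"-{history_limit}", "-p", "--follow", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        parts.append(f"## Recent history\n{log}")

        test = src.with_name(f"test_{src.name}")  # assumed naming convention
        if test.exists():
            parts.append(f"## Tests: {test}\n{test.read_text()}")

        return "\n\n".join(parts)

Real tools presumably stitch in far more sources, but the principle is the same: the model doesn't need you to remember; it needs the context stitched back together.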

Whether or not you go to the gym, you are probably going to want to use an excavator if you are going to dig a basement.

otabdeveloper4
IMO LLMs are best when used as locally-run offline search engines. This is a clear and obvious disruptive technology.

But we will need to get a lot better at finetuning first. People don't want generalist LLMs; they want "expert systems".
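
A minimal sketch of the idea, assuming the sentence-transformers package and its stock all-MiniLM-L6-v2 model (a domain-finetuned embedding model would slot into the same place):

    # Offline semantic search: embed local documents once, then answer
    # queries by cosine similarity. No network needed after the model
    # download. The example docs are made up for illustration.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "The scheduler retries failed jobs with exponential backoff.",
        "Configuration is loaded from /etc/app/config.toml at startup.",
        "Metrics are exported in Prometheus format on port 9100.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def search(query: str, k: int = 2):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # cosine similarity on unit vectors
        return [(docs[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

    print(search("where do settings come from?"))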

danielbln
Speak for yourself; I prefer generalist LLMs. Also, the bitter lesson of ML applies.
Dylan16807
> So-called AI makes this worse.

I think that needs actual testing. At what time scales is there an effect, and how big is it? Even if there is an effect, it could be small enough that a mild productivity boost from AI is more important.