495 points todsacerdoti | 160 comments
1. benlivengood ◴[] No.44383064[source]
Open source and libre/free software are particularly vulnerable to a future where AI-generated code is ruled to be either infringing or public domain.

In the former case, disentangling AI edits from human edits could tie a project up in legal proceedings for years, and projects don't have any funding to fight a copyright suit. Specifically, code that is AI-generated and subsequently modified or incorporated into the rest of the code would raise the question of whether subsequent human edits were non-fair-use derivative works.

In the latter case, the license restrictions no longer apply to portions of the codebase, raising similar issues for derived code. A project that is only 98% OSS/FS-licensed suddenly has much less leverage in takedowns against companies abusing the license terms, since it would have to prove that infringers are definitely using the human-generated and licensed code.

Proprietary software is only mildly harmed in either case; it would require speculative copyright owners to disassemble their binaries and try to make the case that AI-generated code infringed without being able to see the codebase itself. And plenty of proprietary software has public domain code in it already.

replies(9): >>44383156 #>>44383218 #>>44383229 #>>44384184 #>>44385081 #>>44385229 #>>44386155 #>>44387156 #>>44391757 #
2. deadbabe ◴[] No.44383156[source]
If a software is truly wide open source in the sense of “do whatever the fuck you want with this code, we don’t care”, then it has nothing to fear from AI.
replies(3): >>44383181 #>>44383198 #>>44384127 #
3. candiddevmike ◴[] No.44383181[source]
That won't apply to closed-source, non-public code, which the GPL (which QEMU uses) is quite good at ensuring becomes open source...
4. kgwxd ◴[] No.44383198[source]
Can't release someone else's proprietary source under a "do whatever the fuck you want" license and actually do whatever the fuck you want, without getting sued.
replies(4): >>44383241 #>>44384707 #>>44384793 #>>44384892 #
5. zer00eyz ◴[] No.44383218[source]
> or public domain

https://news.artnet.com/art-world/ai-art-us-copyright-office...

https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

I'm pretty sure that ship has sailed.

replies(2): >>44383728 #>>44383771 #
6. AJ007 ◴[] No.44383229[source]
I understand why experienced developers don't want random AI contributions from no-knowledge "developers" landing in a project. In any situation, if a human has to review AI code line by line, that would tie up humans for years, even ignoring any legal issues.

#1 There will be no verifiable way to prove something was AI generated beyond early models.

#2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.

#3 If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust. If the license permits forking then it could be forked too, but cloning and purging any potential legal issues might be preferred.

There still is a path for open source projects. It will be different. There's going to be much, much more software in the future and it's not going to be all junk (although 99% might.)

replies(16): >>44383277 #>>44383278 #>>44383309 #>>44383367 #>>44383381 #>>44383421 #>>44383553 #>>44383615 #>>44383810 #>>44384306 #>>44384448 #>>44384472 #>>44385173 #>>44386408 #>>44387925 #>>44389059 #
7. deadbabe ◴[] No.44383241{3}[source]
It’d be like trying to squeeze blood from a stone
replies(2): >>44383680 #>>44383758 #
8. Eisenstein ◴[] No.44383277[source]
If AI can generate software so easily and which performs the expected functions, why do we even need to know that it did so? Isn't the future really just asking an AI for a result and getting that result? The AI would be writing all sorts of bespoke code to do the thing we ask, and then discard it immediately after. That is what seems more likely, and not 'so much software we have to figure out rights to'.
9. amake ◴[] No.44383278[source]
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects

Still waiting to see evidence of AI-driven projects eating the lunch of "traditional" projects.

replies(4): >>44383368 #>>44383382 #>>44383858 #>>44386542 #
10. blibble ◴[] No.44383309[source]
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects

"competitive", meaning: "most features/lines of code emitted" might matter to a PHB or Microsoft

but has never mattered to open source

11. alganet ◴[] No.44383367[source]
Quoting them:

> The policy we set now must be for today, and be open to revision. It's best to start strict and safe, then relax.

So, no need for the drama.

12. viraptor ◴[] No.44383368{3}[source]
It's happening slowly all around. It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated. But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.

It's only going to get more pervasive from now on.

replies(2): >>44383499 #>>44384560 #
13. A4ET8a8uTh0_v2 ◴[] No.44383381[source]
I am of two minds about it, having now seen both good coders augmented by AI and bad coders further diminished by it ( I would even argue it's worse than Stack Overflow, because back then they at least would have had to adjust the code a little bit ).

I am personally somewhere in the middle, just good enough to know I am really bad at this, so I make sure that I don't contribute to anything that is actually important ( like QEMU ).

But how many people recognize their own strengths and weaknesses? That is part of the problem and now we are proposing that even that modicum of self-regulation ( as flawed as it is ) be removed.

FWIW, I hear you. I also don't have an answer. Just thinking out loud.

14. luqtas ◴[] No.44383382{3}[source]
that's like driving big personal vehicles, having a bunch of children, eating a bunch of meat, and doing nothing about it because marine and terrestrial ecosystems haven't been fully destroyed by global warming yet
replies(1): >>44384303 #
15. rapind ◴[] No.44383421[source]
> If a project successfully excludes AI contributions (not clear how other than controlling contributions to a tight group of anti-AI fanatics), it's just going to be cloned, and the clones will leave it in the dust.

Yeah I don’t think so. But if it does then who cares? AI can just make a better QEMU at that point I guess.

They aren’t hurting anyone with this stance (except the AI hype lords), which I’m pretty sure isn’t actually an anti-AI stance, but a pragmatic response to AI slop in its current state.

16. alganet ◴[] No.44383499{4}[source]
Can you show these 3 things to us?
replies(4): >>44383630 #>>44383710 #>>44383844 #>>44384062 #
17. basilgohar ◴[] No.44383553[source]
I feel like this is mostly proofless assertion. I'm aware that what you hint at is happening, but the conclusions you arrive at are far from proven or even reasonable at this stage.

For what it's worth, I think AI for code will arrive at a place like how other coding tools sit – hinting, intellisense, linting, maybe even static or dynamic analysis, but I doubt NOT using AI will be a critical asset to productivity.

Someone else in the thread already mentioned it's a bit of an amplifier. If you're good, it can make you better, but if you're bad it just spreads your poor skills like a robot vacuum spreads animal waste.

replies(2): >>44383595 #>>44384544 #
18. galangalalgol ◴[] No.44383595{3}[source]
I think that was his point, the project full of bad developers isn't the competition. It is a peer whose skill matches yours and uses agents on top of that. By myself I am no match for myself + cline.
replies(1): >>44383889 #
19. XorNot ◴[] No.44383615[source]
A reasonable conclusion about this would simply be that the developers are saying "we're not merging anything which you can't explain".

Which is entirely reasonable. The trend of people on HN saying "I asked an LLM and this is what it said..." is infuriating.

It's just an upfront declaration that if your answer to something is "it's what Claude thinks" then it's not getting merged.

replies(1): >>44383641 #
20. WD-42 ◴[] No.44383630{5}[source]
For some reason, these fully functional AI-generated projects that the authors vibe out while playing guitar and clipping their toenails are never open source.
replies(6): >>44383999 #>>44384026 #>>44384847 #>>44385049 #>>44386161 #>>44387603 #
21. Filligree ◴[] No.44383641{3}[source]
That’s not what the policy says, however. You could be the world’s most honest person, using Claude only to generate code you described to it in detail and fully understand, and you would still be forbidden.
22. clipsy ◴[] No.44383680{4}[source]
It'd be like trying to squeeze blood from every single entity using the offending code, actually.
23. viraptor ◴[] No.44383710{5}[source]
Only the simplest one is open (and before you discount it as too trivial, somehow none of the other ones did what I wanted) https://github.com/viraptor/pomodoro

The others are just too specific to me to be useful for anyone else: an Android app for automatic processing of some text messages, and a work scheduling/prioritising thing. The time to make them generic enough to share would be much longer than creating my specific version in the first place.

replies(2): >>44385131 #>>44385726 #
24. raincole ◴[] No.44383728[source]
It's sailed, but in the other direction: https://www.bbc.com/news/articles/cg5vjqdm1ypo
replies(3): >>44384064 #>>44384264 #>>44385693 #
25. CursedSilicon ◴[] No.44383758{4}[source]
It's incredible watching someone who has no idea what they're talking about boast so confidently about what people "can" or "can't" do
26. jssjsnj ◴[] No.44383771[source]
QEMU: Define policy forbidding use of AI code generators
27. heavyset_go ◴[] No.44383810[source]
Regarding #1, at least in the mainframe/cloud model of hosted LLMs, the operators have a history of model prompts and outputs.

For example, if using Copilot, Microsoft also has every commit ever made if the project is on GitHub.

They could, theoretically, determine what did or didn't come out of their models and was integrated into source trees.
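
In toy form, the matching could look something like this (a hypothetical Python sketch, assuming the operator has a corpus of logged model outputs to compare a commit diff against):

    import hashlib

    def line_fingerprints(text):
        """Hash each non-trivial line, ignoring whitespace, for overlap checks."""
        prints = set()
        for line in text.splitlines():
            norm = "".join(line.split())
            if len(norm) >= 20:  # skip braces, imports, and other trivia
                prints.add(hashlib.sha1(norm.encode()).hexdigest())
        return prints

    def overlap(model_outputs, commit_diff):
        """Fraction of committed lines that also appear in logged model outputs."""
        logged = set()
        for out in model_outputs:
            logged |= line_fingerprints(out)
        committed = line_fingerprints(commit_diff)
        return len(committed & logged) / max(len(committed), 1)

Real attribution would be far messier (edits, reformatting, paraphrase), but the raw materials exist on the operator's side.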

Regarding #2 and #3, with relatively novel software like QEMU that models platforms that other open source software doesn't, LLMs might not be a good fit for contributions. Especially where emulation and hardware accuracy, timing, quirks, errata etc matter.

For example, modeling a new architecture or emulating new hardware might have LLMs generating convincing looking nonsense. Similarly, integrating them with newly added and changing APIs like in kvm might be a poor choice for LLM use.

28. nijave ◴[] No.44383844{5}[source]
Not sure about the parent, but you could argue JetBrains' fancy autocomplete is AI and generates a substantial portion of code. It runs using a local model and, in my experience, does a pretty good job of guessing the rest of the line with minimal input (so you could argue 80% of each line was AI-generated).
29. mcoliver ◴[] No.44383858{3}[source]
80-90% of Claude is now written by Claude
replies(3): >>44383940 #>>44385267 #>>44386588 #
30. Retric ◴[] No.44383889{4}[source]
That’s true in the short term. Longer term it’s questionable, as using AI tools heavily means you don’t remember all the details, creating a new form of technical debt.
replies(2): >>44384106 #>>44384185 #
31. 0x457 ◴[] No.44383940{4}[source]
And whose lunch is it eating?
replies(2): >>44384384 #>>44384772 #
32. dcow ◴[] No.44383999{6}[source]
Except this one is (see your sibling).
33. fc417fc802 ◴[] No.44384026{6}[source]
> the authors vibe out while playing guitar and clipping their toenails

I don't think anyone is claiming that. If you submit changes to a FOSS project and an LLM assisted you in writing them how would anyone know? Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.

The (admittedly still controversial) claim being made is that developers with LLM assistance are more productive than those without. Further, that there is little incentive for such developers to advertise this assistance. Less trouble for all involved to represent it as 100% your own unassisted work.

replies(2): >>44384302 #>>44387261 #
34. linsomniac ◴[] No.44384062{5}[source]
Not OP, but:

I'm getting towards the end of a vibe-coded ZFS storage backend for ganeti that includes the ability to live-migrate VMs to another host by: taking a snapshot and replicating it to the target, pausing the VM, taking another incremental snapshot and replicating it, and then unpausing the VM on the new destination machine. https://github.com/linsomniac/ganeti/tree/newzfs
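
The core migration sequence is roughly the following (a simplified Python sketch, not the actual branch code; the zfs commands are real, but the VM pause/start hooks are hypothetical stand-ins for the ganeti/QMP calls):

    import subprocess

    def sh(cmd):
        """Run a shell pipeline, raising if any step fails."""
        subprocess.run(cmd, shell=True, check=True)

    def pause_vm(vm):
        """Stand-in: the real version stops the guest via ganeti/QMP."""
        raise NotImplementedError

    def start_vm_on(host, vm):
        """Stand-in: the real version starts the guest on the target node."""
        raise NotImplementedError

    def migrate(dataset, target, vm):
        # Bulk copy while the VM keeps running (the slow part).
        sh(f"zfs snapshot {dataset}@mig1")
        sh(f"zfs send {dataset}@mig1 | ssh {target} zfs recv -F {dataset}")
        # Pause, then send only the blocks dirtied since mig1,
        # so the downtime window stays short.
        pause_vm(vm)
        sh(f"zfs snapshot {dataset}@mig2")
        sh(f"zfs send -i @mig1 {dataset}@mig2 | ssh {target} zfs recv {dataset}")
        start_vm_on(target, vm)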

Other LLM tools I've built this week:

This afternoon I built a web-based SQL query editor/runner with results display, for dev/ops people to run read-only queries against our production database. It replaces an existing super-simple one and adds query syntax highlighting, a snippet library, and other modern features. I can probably release this, though I'd need to verify that it won't leak anything. Targets SQL Server.

A couple of CLI Jira tools: one to pull a list of tickets I'm working on (with a cache so I can get an immediate response, then updates after the Jira response comes back), and one for tickets with tags that indicate I have to handle them specially.

An Icinga CLI that downtimes hosts, for when we do sweeping machine maintenance like rebooting a VM host with dozens of monitored children.

An Ansible module that is a "swiss army knife" for filesystem manipulation, merging the functions of copy, template, and file, so you can loop over a list and: create a directory, template a couple of files into it (doing a notify on one and a when on another), and ensure a file exists if it doesn't already, to reduce duplication of boilerplate when doing a bunch of file deploys. This I will release as an Ansible Galaxy module once I have it tested a little more.
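
The heart of that module is just a per-item dispatch, something like this (a minimal standalone sketch of the idea in plain Python rather than the Ansible module API; all paths and names are made up):

    import os, shutil, string

    def deploy(item):
        """Apply one filesystem action described by a dict, keyed on 'state'."""
        path, state = item["path"], item.get("state", "copy")
        if state == "directory":
            os.makedirs(path, exist_ok=True)
        elif state == "copy":
            shutil.copy2(item["src"], path)
        elif state == "template":
            with open(item["src"]) as f:
                rendered = string.Template(f.read()).substitute(item.get("vars", {}))
            with open(path, "w") as f:
                f.write(rendered)
        elif state == "touch":
            open(path, "a").close()  # create if missing, leave content alone

    for item in [
        {"path": "out/myapp", "state": "directory"},
        {"path": "out/myapp/app.conf", "src": "app.conf.tmpl",
         "state": "template", "vars": {"port": "8080"}},
        {"path": "out/myapp/.keep", "state": "touch"},
    ]:
        deploy(item)

The real module layers the notify/when handling on the Ansible side; the point is that one looped task replaces three or four separate ones.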

replies(4): >>44384291 #>>44384462 #>>44384571 #>>44387065 #
35. fc417fc802 ◴[] No.44384064{3}[source]
That's a brand new ongoing lawsuit. The ship hasn't sailed in either direction yet. It hasn't even been clearly established if Midjourney has liability let alone where the bounds for such liability might lie.

Remember, anyone can attempt to sue anyone for anything at any time in a functional system. How far the suit makes it is a different matter.

36. linsomniac ◴[] No.44384106{5}[source]
Dude, have you ever looked at code you wrote 6 months ago and gone "What was the developer thinking?" ;-)
replies(1): >>44384209 #
37. behringer ◴[] No.44384127[source]
Open source is about sharing the source code. You generally need to force companies to share source code derived from your project, or else companies will simply take it, modify it, never release their changes, and charge for it too.
replies(1): >>44384724 #
38. koolala ◴[] No.44384184[source]
This is a win for MIT license though.
replies(1): >>44385264 #
39. CamperBob2 ◴[] No.44384185{5}[source]
I don't need to remember much, really. I have tools for that.

Really, really good tools.

40. ringeryless ◴[] No.44384209{6}[source]
yes, constantly. I also don't remember much contextual domain info of a given section of code about 2 weeks into delving into some other part of the same app.

So-called AI makes this worse.

Let me remind you of gyms, now that humans have been saved from much manual activity...

replies(2): >>44384263 #>>44386960 #
41. linsomniac ◴[] No.44384263{7}[source]
>So-called AI makes this worse.

The AI tooling is also really, really good at being able to piece together the code, the contextual domain, the documentation, the tests, the related issues/tickets, it could even take the change history into account, and be able to help refresh your memory of unfamiliar code in the context of bugs or new changes you are looking at making.

Whether or not you go to the gym, you are probably going to want to use an excavator if you are going to dig a basement.

42. zer00eyz ◴[] No.44384264{3}[source]
https://www.wired.com/story/ai-art-copyright-matthew-allen/

https://www.cnbc.com/2025/03/19/ai-art-cannot-be-copyrighted...

Here are cases where the products of AI/ML are not the products of people and are not capable of being copyrighted. These are about the OUTPUT being unable to be copyrighted.

43. EGreg ◴[] No.44384291{6}[source]
I vibe-coded my own MySQL-compatible database that performs better than MariaDB, after my agent optimized it for 12 hours. It is also a time-traveling DB and performs better on all benchmarks and the AI says it is completely byzantine-fault-tolerant. Programmers, you had a nice run. /s
44. EGreg ◴[] No.44384302{7}[source]
Why would you need to carefully review code? That is so 2024. You’re bottlenecking the process and are at a disadvantage when the AI could be working 24/7. We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

AI “assistance” is a short intermediate phase, like the “centaurs” that Garry Kasparov was very fond of (human + computer beat both a human and a computer by itself… until the computer-only became better).

https://en.wikipedia.org/wiki/Advanced_chess

replies(1): >>44384547 #
45. lynx97 ◴[] No.44384303{4}[source]
Ahh, there you go, environmental activists outright saying having children is considered a crime against nature. Wonderful, you seem to have hit a rather bad stereotype right on the head. What is next? Earth would be better off if humanity was eradicated?
replies(1): >>44388672 #
46. safety1st ◴[] No.44384306[source]
It seems to me that the point in your first paragraph argues against your points #2 and #3.

If a project allows AI generated contributions, there's a risk that they'll be flooded with low quality contributions that consume human time and resources to review, thus paralyzing the project - it'd be like if you tried to read and reply to every spam email you receive.

So the argument goes that #2 and #3 will not materialize, blanket acceptance of AI contributions will not help projects become more competitive, it will actually slow them down.

Personally I happen to believe that reality will converge somewhere in the middle, you can have a policy which says among other things "be measured in your usage of AI," you can put the emphasis on having contributors do other things like pass unit tests, and if someone gets spammy you can ban them. So I don't think AI is going to paralyze projects but I also think its role in effective software development is a bit narrower than a lot of people currently believe...

47. devmor ◴[] No.44384448[source]
None of your claims here are based in fact. These are unproven, wishful fantasies that may or may not eventually turn out to be true.

No one should be evaluating or writing policy based on fantasy.

replies(1): >>44384738 #
48. cess11 ◴[] No.44384462{6}[source]
Looks like two commits:

https://github.com/linsomniac/ganeti/commit/e91766bfb42c67ab...

https://github.com/linsomniac/ganeti/commit/f52f6d689c242e3e...

replies(1): >>44384492 #
49. otabdeveloper4 ◴[] No.44384472[source]
> Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects.

There is zero evidence so far that AI improves software developer efficiency.

No, just because you had fun vibing with a chatbot doesn't mean you delivered the end product faster. All of the supposed AI software development gains are entirely self-reported based on "vibes". (Remember these are the same people who claimed massive developer efficiency gains from programming in Haskell or Lisp a few years back.)

Note I'm not even touching on the tech debt issue here, but it is also important.

P.S. The hallucination and counting to five problems will never go away. They are intrinsic to the LLM approach.

50. linsomniac ◴[] No.44384492{7}[source]
Thanks, I hadn't pushed from my test cluster, check again. "This branch is 12 commits ahead of, 4 commits behind ganeti/ganeti:master"
51. otabdeveloper4 ◴[] No.44384544{3}[source]
IMO LLMs are best when used as locally-run offline search engines. This is a clear and obvious disruptive technology.

But we will need to get a lot better at finetuning first. People don't want generalist LLMs, they want "expert systems".

replies(1): >>44385353 #
52. amake ◴[] No.44384547{8}[source]
> We have AI agents that have been trained to review thousands of PRs that are produced by other, generative agents, and together they have already churned out much more software than human teams can write in a year.

Was your comment tongue-in-cheek? If not, where is this huge mass of AI-generated software?

replies(1): >>44384719 #
53. amake ◴[] No.44384560{4}[source]
> It's not obvious because people producing high quality stuff have no incentive at all to mark their changes as AI-generated

I feel like we'd be hearing from businesses that crushed their competition by delivering faster or with fewer people. Where are those businesses?

> But there are also local tools generated

This is really not the same thing as the original claim ("Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects").

replies(4): >>44384773 #>>44384781 #>>44388220 #>>44388568 #
54. amake ◴[] No.44384571{6}[source]
None of this seems relevant to the original claim: "Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"

I don't feel like it's meaningful to discuss the "competitiveness" of a handful of bespoke local or internal tools.

55. iechoz6H ◴[] No.44384707{3}[source]
You can do that but the fact you don't get sued is more luck than judgement.
56. rvnx ◴[] No.44384719{9}[source]
All around you, just that it doesn’t make sense for developers to reveal that a lot of their work is now about chunking and refining the specifications written by the product owner.

Admitting such is like admitting you are overpaid for your job, and that a 20 USD AI-agent can do better and faster than you for 75% of the work.

Is it easy to admit that the skills you spent 10+ years learning are progressively being replaced by a machine? (Like thousands of jobs in the past.)

More and more, being a developer is going to be a monkey job where your only task is to make sure there is enough coal in the steam engine.

Compilers destroyed the jobs of developers writing assembler code; they had to adapt. They insisted that hand-written assembler was better.

Here it is the same, except you write code in natural language. It may not be optimal in all situations, but it often gets the job done.

replies(3): >>44384964 #>>44385243 #>>44385508 #
57. TeMPOraL ◴[] No.44384724{3}[source]
Sharing is caring, being forced to share does not foster care.

Companies don't care, so if you release something as open source that's relevant to them, "companies will simply take it, modify it, never release their changes, and charge for it too" - but that is what companies do, that is their very nature, and you knew that when you first opened the source.

You also knew that when you picked a license, and it's a major reason for the particular choice you made. Want to force companies to share? Pick GPL.

If you decide to yoke a dragon, and it instead snatches your shiny lure and flies away to its cave, you don't get to complain that the dragon isn't playing nice and doesn't want to become your beast of burden. If you picked MIT as your license, that's on you.

58. brabel ◴[] No.44384738{3}[source]
Are you familiar with the futures market? It's all about what you call fantasy! Similarly, if you are determining the strategy of your organization, all you have to help you is "fantasy". By the time evidence exists in sufficient quantity, your lunch has already been eaten long ago. A good CEO is one who can see where the market is going before anyone else. You may be right that AI is just a fad, but given how much the big companies and all the major startups of the last few years are investing in it, that's overwhelmingly a fringe position to hold at this point.
replies(1): >>44387203 #
59. rvnx ◴[] No.44384772{5}[source]
Your lunch. The developers behind Claude are very rich and do not need their developer careers, since they have enough to retire.
60. bredren ◴[] No.44384773{5}[source]
This is happening right now and it won’t be obvious until the liquidity events provide enough cover for victory lap story telling.

The very knowledge that an organization is experiencing hyper acceleration due to its successful adoption of AI across the enterprise is proprietary.

There are no HBS case studies about businesses that successfully established and implemented strategic pillars for AI because the pillars were likely written in the past four months.

replies(1): >>44385248 #
61. TeMPOraL ◴[] No.44384781{5}[source]
> I feel like we'd be hearing from businesses that crushed their competition by delivering faster or with fewer people. Where are those businesses?

As if the tech part were the major part of getting the product to market.

Those businesses are probably everywhere. They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design, because a) it's not normal outside the tech startup sphere to brag about how you're improving your internal processes, b) almost everyone else is doing it too, so it partially cancels out - that is what competition on the market means, and c) admitting to the use of AI in the current climate is kind of a questionable PR move.

WRT those who fail to leverage the new tools and are destined to be outcompeted: this process takes extended time, because companies have inertia.

>> But there are also local tools generated

> This is really not the same thing as the original claim

Point is that such wins compound. You get yak shaving done faster by fashioning your own tools on the fly, and it also cuts costs and the huge burden of maintaining relationships with third parties[0].

--

[0] - Because each account you create, each subscription you take, even each online tool you kinda track and hope hope hope won't disappear on you - each such case comes with a cognitive tax of a business relationship you probably didn't want, that often costs you money directly, and that you need to keep track of.

replies(3): >>44385254 #>>44386450 #>>44386859 #
62. rzzzt ◴[] No.44384793{3}[source]
The license does exist so you can release your own software under it, however: https://en.wikipedia.org/wiki/WTFPL
63. bredren ◴[] No.44384847{6}[source]
Mine is. And it is awesome: https://github.com/banagale/FileKitty

The most recent release includes a MacOS build in a dmg signed by Apple: https://github.com/banagale/FileKitty/releases/tag/v0.2.3

I vibed that workflow just so more people could have access to this tool. It was a pain and it actually took time away from toenail clipping.

And while I didn't lay hands on a guitar much during this period, I did manage to build this while bouncing between playing Civil War tunes on a 3D-printed violin and generating music in Suno for a soundtrack to “Back on That Crust,” the missing and one true spiritual successor to ToeJam & Earl: https://suno.com/song/e5b6dc04-ffab-4310-b9ef-815bdf742ecb

replies(1): >>44385810 #
64. TeMPOraL ◴[] No.44384892{3}[source]
All the more reason for OSS to embrace AI generation - once it leaks into enough widely used or critical (think cURL) dependencies and exceeds a certain critical mass, any judgement on the IP aspects other than "public domain" (in the broader sense) will become infeasible, as enforcing a different judgement would be like doing open-heart surgery on the global economy.
replies(1): >>44386686 #
65. bonzini ◴[] No.44384964{10}[source]
Good luck debugging
replies(1): >>44391914 #
66. TeMPOraL ◴[] No.44385049{6}[source]
Going by the standard of "But there are also local tools generated faster than you could adjust existing tools to do what you want", here's a random one of mine that's in regular use by my wife:

https://github.com/TeMPOraL/qr-code-generator

Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro (I forgot to note that down in this project), and recently modified with Claude Code because I had to test it on something.

Getting the first version of this up was literally both faster and easier than finding a QR code generator that I'm sure is not bloated, not bullshit, not loaded with trackers, that's not using shorteners or its own URL (it's always a stupid idea to use URL shorteners you don't control), not showing ads, mining bitcoin and shit, one that my wife can use in her workflow without being distracted too much. Static page, domain I own, a bit of fiddling with LLMs.

What I can't link to is half a dozen single-use tools or faux tools created on the fly as part of working on something. But this happens to me a couple of times a month.

To anchor another vertex in this parameter space: I found it easier and faster to ask an LLM to build me a "breathing timer" (one that counts down N seconds and resets, repeatedly) with an analog indicator than to search for one, because a search query to Google/Kagi would be of comparable length, and then I'd have to click on results!

EDIT: Okay, another example:

https://github.com/TeMPOraL/tampermonkey-scripts/blob/master...

It overlays a trivial UI to set up looping over a segment of any YouTube video, and automatically persists the setting by video ID. It solves the trivial annoyance of channel jingles and other bullshit at start/end of videos that I use repeatedly as background music.

This was mostly done zero-shot by Claude, with maybe two or three requests for corrections/extra features, total development time maybe 15 minutes. I use it every day all the time ever since.

You could say, "but SponsorBlock" or whatever, but per what GP wrote, I just needed a small fraction of functionality of the tools I know exist, and it was trivial to generate that with AI.

replies(1): >>44385640 #
67. Thorrez ◴[] No.44385081[source]
Is there any likelihood that the output of the model would be public domain? Even if the model itself is public domain, the prompt was created by a human and impacted the output, so I don't see how the output could be public domain. And then after that, the output was hopefully reviewed by the original prompting human and likely reviewed by another human during code review, leading to more human impact on the final code.
replies(1): >>44385405 #
68. a57721 ◴[] No.44385131{6}[source]
> and before you discount it as too trivial, somehow none of the other ones did what I wanted

No offense, it's really great that you are able to make apps that do exactly what you want, but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects" (as someone else suggested above). Complex real world software is different from pomodoro timers and TODO lists.

replies(2): >>44386287 #>>44388498 #
69. gadders ◴[] No.44385173[source]
I am guessing they don't need people to prove that contributions didn't contain AI code; they just need the contributor to say they didn't use any AI code. That way, if any AI code is found in their contribution, the liability lies with the contributor (but IANAL).
replies(1): >>44385238 #
70. graemep ◴[] No.44385229[source]
Proprietary source code would not usually end up training LLMs. Unless it's leaked, how would an LLM have access to it?

> it would require speculative copyright owners to disassemble their binaries

I wonder whether AI might be a useful tool for making that easier.

If you have evidence then you can get courts to order disclosure or examination of code.

> And plenty of proprietary software has public domain code in it already.

I am pretty sure there is a significant amount of proprietary code that has FOSS code in it, against license terms (especially GPL and similar).

A lot of proprietary code is now being written using AIs trained on FOSS code, and companies are open about this. It might open an interesting can of worms.

replies(2): >>44385241 #>>44386080 #
71. graemep ◴[] No.44385238{3}[source]
AFAIK in most places it might help with the amount of damages, but does not let you off the hook.
72. physicsguy ◴[] No.44385241[source]
> Unless it's leaked

Given the number of people on HN who say they're using e.g. Cursor, OpenAI, etc. through work, and my experience with workplaces saying 'absolutely you can't use it', I suspect a large amount is being leaked.

replies(1): >>44386188 #
73. amake ◴[] No.44385243{10}[source]
> All around you, just that it doesn’t make sense for developers to reveal that

OK, but I asked for evidence and people just keep not providing any.

"God is all around you; he just works in mysterious ways"

OK, good luck with that.

replies(2): >>44386914 #>>44388626 #
74. amake ◴[] No.44385248{6}[source]
> This is happening right now and it won’t be obvious until

I asked for evidence and, as always, lots of people are popping out of the woodwork to swear that it's true but I can't see the evidence yet.

OK, then. Good luck with that.

75. amake ◴[] No.44385254{6}[source]
> Those businesses are probably everywhere. They just aren't open about admitting

"Where's the evidence?" "Probably everywhere."

OK, good luck, have fun

replies(1): >>44385535 #
76. graemep ◴[] No.44385264[source]
From what point of view?

For someone using MIT-licensed code for training, it still requires a copy of the license and the copyright notice in "copies or substantial portions of the software". So I guess it's fine for a snippet, but if the AI reproduces too much of it, then it's in breach.

From the point of view of someone who does not want their code used by an LLM, using GPL code is more likely to make that use a breach.

77. amake ◴[] No.44385267{4}[source]
Using AI tools to make AI tools is not the impact outside of the AI bubble that people are looking for.
78. danielbln ◴[] No.44385353{4}[source]
Speak for yourself, I prefer generalist LLMs. Also, the bitter lesson of ML applies.
79. AndrewDucker ◴[] No.44385405[source]
There is no copyright in AI art. Presumably the same reasoning would apply to AI code: https://iclg.com/news/22400-us-court-confirms-ai-generated-a...
replies(1): >>44385865 #
80. alganet ◴[] No.44385508{10}[source]
I have a complete proof that P=NP but it doesn't make sense to reveal to the world that now I'm god. It would crush their little hearts.
replies(1): >>44386572 #
81. TeMPOraL ◴[] No.44385535{7}[source]
Yup. Or, "Just look around!".
replies(2): >>44385722 #>>44387258 #
82. alganet ◴[] No.44385640{7}[source]
Your QR generator is actually a repackaged project written by humans:

https://github.com/neocotic/qrious

All the hard work was done by humans.

I can do `npm install` without having to pay for AI, thanks.

replies(1): >>44386366 #
83. gwd ◴[] No.44385693{3}[source]
On the contrary. IANAL, but this is my understanding of the law (setting aside the "work for hire" thing for simplicity)

1. If you come up with something completely new, you are the sole copyright holder.

2. If you take someone else's copyrighted work and transform it, then both of you have a copyright on the derivative work.

So if you write a brand new comic book that includes Darth Vader, you can't sell that without Disney's permission [1]: they have a copyright on Darth Vader, and so your comic book is partly copyrighted by them. But at the same time, they can't sell it without your permission, because you have a copyright on the comic book too.

In the case of Midjourney outputs, my understanding of the current state of the law is this:

1. Only humans can create copyrights

2. So if Midjourney creates an entirely new image that's not derivative of anyone else's work (as defined by long-established copyright law on derivative works), then nobody owns the copyright, and it's in the public domain

3. If Midjourney creates an image that is derived from someone else's work (as defined by long established copyright law on derivative works), then only Disney has a copyright on that derivative work.

And so, in theory, Disney could distribute Darth Vader images you made with Midjourney, unless you can convince the court that you had enough creative influence over them to warrant a copyright.

[1] Yes of course fair use, trying to make a point here

replies(1): >>44385770 #
84. amake ◴[] No.44385722{8}[source]
If it was self-evident then I wouldn’t need to ask for evidence. And I imagine you wouldn’t need to be waving your hands making excuses for the lack of evidence.
replies(1): >>44391295 #
85. alganet ◴[] No.44385726{6}[source]
> The time to make them generic enough to share would be much longer than creating my specific version in the first place

Welcome to the reality of software development. "Works on my machine" is often not good enough to make the cut.

replies(1): >>44386257 #
86. andreasmetsala ◴[] No.44385770{4}[source]
Doesn’t this also mean that if you transform the work created by Midjourney, you now have a copyright on the transformed work?

I wonder what counts for transformed, is a filter enough or does it have to be more than that?

replies(1): >>44386178 #
87. fingerlocks ◴[] No.44385810{7}[source]
This app is concatenating files with an extra line of metadata added? You know this could be done in a few lines of shell script? You can then make it a finder action extension so it’s part of the system file manager app.
replies(2): >>44386131 #>>44387628 #
88. lars_francke ◴[] No.44385865{3}[source]
This particular case is US only.

The rest of the world might decide differently.

replies(1): >>44385878 #
89. AndrewDucker ◴[] No.44385878{4}[source]
Absolutely.

And as long as you're not worried about people in the USA reusing your code then you're all good!

90. pmlnr ◴[] No.44386080[source]
Licence incompatibility is enough.
91. pwm ◴[] No.44386131{8}[source]
Sic transit gloria mundi
92. strogonoff ◴[] No.44386155[source]
People sometimes miss that copyleft is powered by copyright. Copyleft (which means Linux, Blender, and plenty of other goodness) needs the ability to impose some rules on what users do with your work, presumably in the interest of common good. Such ability implies IP ownership.

This does not mean that powerful interests abusing copyright with ever increasing terms and enforcement overreach is fair game. It harms common interest.

However, it does mean that abusing copyright from the other side and denouncing the core ideas of IP ownership—which is now sort of in the interest of certain companies (and capital heavily invested in certain fashionable but not yet profitable startups) based around IP expropriation—harms common interest just as well.

replies(1): >>44386212 #
93. Philpax ◴[] No.44386161{6}[source]
Here's Armin Ronacher describing his open-source "sloppy XML" parser that he had AI write with his guidance from this week: https://lucumr.pocoo.org/2025/6/21/my-first-ai-library/
replies(1): >>44387171 #
94. gwd ◴[] No.44386178{5}[source]
That's my understanding, yes. "What counts as transformed" is fuzzy, but it's an old well-established problem with hundreds of years of case law.
95. graemep ◴[] No.44386188{3}[source]
I thought most of these did not use users context and input for training?
96. ben_w ◴[] No.44386212[source]
While this is a generally true statement (and has echoes in other areas like sovereign citizens), GenAI may make copyright (and copyleft) economically redundant.

While the AI we have now is not good enough to make an entire operating system when asked*, if/when they can, the benefits of all the current licensing models evaporate, and it doesn't matter if that model is proprietary with no source, or GPL, or MIT, because by that point anyone else can reproduce your OS for whatever the cost of tokens is without ever touching your code.

But as we're not there yet, I agree with @benlivengood that (most**) OSS projects must treat GenAI code as if it's unusable.

* At least, not a modern OS. I've not tried getting any model to output a tiny OS that would fit in a C64, and while I doubt they can currently do this, it is a bet I might lose, whereas I am confident all models would currently fail at e.g. reproducing Windows XP.

** I think MIT licensed projects can probably use GenAI code, they're not trying to require derivatives to follow the same licence, but I'm not a lawyer and this is just my barely informed opinion from reading the licenses.

replies(1): >>44387145 #
97. viraptor ◴[] No.44386257{7}[source]
It doesn't matter that my thing doesn't generalise if someone can build their own customised solution quickly. But also, if I wanted to sell it or distribute it, I'd ensure it was more generic from the beginning.
replies(1): >>44387068 #
98. viraptor ◴[] No.44386287{7}[source]
Cut it out with the patronising; I work with complex software, which is why I specifically mentioned that the only example I published was simple.

> but your examples are not very good to show that "software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects"

Here's the thing though - it's already the case, because I wouldn't create those tools by hand otherwise. I just don't have the time, and they're too personal/edge-case to pay anyone to make them. So the comparison in this case is between 100% human-developed software that doesn't exist and an AI-generated project that does. The latter wins in every category by default.

replies(2): >>44386871 #>>44389582 #
99. ben_w ◴[] No.44386366{8}[source]
I am reminded of a meme about musicians. Not well enough to find it, but it was something like this:

  Real musicians don’t mix loops they bought.
  Real musicians make their own synth patches.
  Real musicians build their own instruments.
  Real musicians hand-forge every metal component in their instruments.
  …
  They say real musicians raise goats for the leather for the drum-skins, but I wouldn't know because I haven’t made any music in months and the goats smell funny.
There's two points here:

1) even though most of people on here know what npm is, many of us are not web developers and don't really know how to turn a random package into a useful webapp.

2) The AI is faster than googling a finished product that already exists, not just as an NPM package, but as a complete website.

Especially because search results require you to go through all the popups everyone stuffs everywhere (cookies, ads) before you even find out whether the website you went to first was actually a scam that doesn't do the right thing (or perhaps *anything*) anyway.

It is also, for many of us, the same price: free.

replies(2): >>44387121 #>>44387137 #
100. conartist6 ◴[] No.44386408[source]
#2 is a complete and total fallacy, trivially disprovable.

Overall velocity doesn't come from writing a lot more code, or even from writing code especially quickly.

101. conartist6 ◴[] No.44386450{6}[source]
And because from the outside everything looks worse than ever. Worse quality, no more support, established companies going crazy to cut costs. AI slop is replacing thoughtful content across the web. Engineering morale is probably at an all time low for my 20 years watching this industry...

So my question is: if so many people should be bragging to me and celebrating how much better things are, why does it look to me like they are worse and everyone is miserable about it...?

replies(1): >>44391966 #
102. ben_w ◴[] No.44386542{3}[source]
How can you tell which project is which?

I mean, sure, there's plenty of devs who refuse to use AI, but how many projects rather than individuals are in each category?

And is Microsoft "traditional"? I name them specifically because their CEO claims 20-30% of their new code is AI generated: https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-3...

103. ben_w ◴[] No.44386572{11}[source]
P = NP is less "crush their little hearts", more "may cause widespread heart attacks across every industry due to cryptography failing, depending on if the polynomial exponent is small enough".
replies(1): >>44386766 #
104. brahma-dev ◴[] No.44386588{4}[source]
Cigarettes do not cause cancer.
105. windward ◴[] No.44386686{4}[source]
That's the situation we're already in with copyleft licences but legal teams still treat them like the plague.
106. Dylan16807 ◴[] No.44386766{12}[source]
A very very big if.

Also a sufficiently good exponential solver would do the same thing.

107. guappa ◴[] No.44386859{6}[source]
> They just aren't open about admitting they're using AI to speed up their marketing/product design/programming/project management/graphics design

Sure… they'd hate to get money thrown at them from investors.

replies(1): >>44391512 #
108. Dylan16807 ◴[] No.44386871{8}[source]
I don't think they're being patronizing, it's that "simple personal app that was barely worth making" is nice to have but not at all what they want evidence of.
replies(1): >>44386981 #
109. rvnx ◴[] No.44386914{11}[source]
Billions of people believe in god(s). In fact, 75 to 85% of the world population, btw.
replies(3): >>44386983 #>>44387302 #>>44388915 #
110. Dylan16807 ◴[] No.44386960{7}[source]
> So-called AI makes this worse.

I think that needs actual testing. At what time distances is there an effect, and how big is it? Even if there is an effect, it could be small enough that a mild productivity boost from AI is more important.

111. viraptor ◴[] No.44386981{9}[source]
Whether it was worth making is for me to judge since it is a personal app. It improves my life and work, so yes, it was very much worth it.
replies(1): >>44387791 #
112. amake ◴[] No.44386983{12}[source]
And?
replies(1): >>44389395 #
113. alganet ◴[] No.44387065{6}[source]
None of the features you mentioned are coming from the AI.

Here it is invoking the actual zfs commands:

https://github.com/ganeti/ganeti/compare/master...linsomniac...

All the extra python boilerplate just makes it harder to understand IMHO.

replies(1): >>44388207 #
114. alganet ◴[] No.44387068{8}[source]
You need to put your money where your mouth is.

If you comment about AI-generated code in a thread about QEMU (a mission-critical project that many industries rely upon), a pomodoro app is not going to do the trick.

And no, it doesn't "show that it's possible". QEMU is not only more complex, it's a whole different problem space.

115. latexr ◴[] No.44387121{9}[source]
> I am reminded of a meme about musicians. Not well enough to find it

You only need to search for “loops goat skin”. You’re butchering the quote and its meaning quite a bit. The widely circulated version is:

> I thought using loops was cheating, so I programmed my own using samples. I then thought using samples was cheating, so I recorded real drums. I then thought that programming it was cheating, so I learned to play drums for real. I then thought using bought drums was cheating, so I learned to make my own. I then thought using premade skins was cheating, so I killed a goat and skinned it. I then thought that that was cheating too, so I grew my own goat from a baby goat. I also think that is cheating, but I’m not sure where to go from here. I haven’t made any music lately, what with the goat farming and all.

It’s not about “real musicians”¹ but a personal reflection on dependencies and abstractions and the nature of creative work and remixing. Your interpretation of it is backwards.

¹ https://en.wikipedia.org/wiki/No_true_Scotsman

116. alganet ◴[] No.44387137{9}[source]
Ice Ice Baby getting the bass riff of Under Pressure is sampling. Making a cover is covering. Milli Vanilli is another completely different situation.

I am sorry, none of your points are made. Makes no sense.

The LLM work sounds dumb, and the suggestion that it made "a qr code generator" is disingenuous. The LLM barely did a frontend for it. Barely.

Regarding the "free" price, read the comment I replied on again:

> Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro

Paid tools.

It sounds like the author paid for `npm install`, and thinks he's on top of things and being smart.

replies(1): >>44390587 #
117. strogonoff ◴[] No.44387145{3}[source]
I have a few sociophilosophical quibbles about the impact of this, but to focus on a practical part:

> by that point anyone else can reproduce your OS for whatever the cost of tokens is without ever touching your code.

Do you think that the cost of tokens will remain low enough once these companies, for now operating at a loss, have to be profitable, and that it really is going to be “anyone else”? Or would it be limited to “big tech” or a select few corporations who can pay a non-trivial amount of money to them?

Do you think it would mean they essentially sell GPL’ed code for proprietary use? Would it not affect FOSS, which has been till now partially powered by the promise to contributors that their (often voluntary) work would remain for public benefit?

Do you think someone would create and make public (and gather so much contributor effort) something on the scale Linux, if they knew that it would be open to be scraped by an intermediary who can sell it at whatever price they choose to set to companies that then are free to call it their own and repackage commercially without contributing back, providing their source or crediting the original authors in any way?

replies(2): >>44388023 #>>44391120 #
118. olalonde ◴[] No.44387156[source]
Seems like a fake problem. Who would sue QEMU for using AI-generated code? OpenAI? Anthropic?
replies(1): >>44387220 #
119. latexr ◴[] No.44387171{7}[source]
> To be clear: this isn't an endorsement of using models for serious Open Source libraries. This was an experiment to see how far I could get with minimal manual effort, and to unstick myself from an annoying blocker. The result is good enough for my immediate use case and I also felt good enough to publish it to PyPI in case someone else has the same problem.

By their own admission, this is just kind of OK. They don’t even know how good or bad it is, just that it kind of solved an immediate problem. That’s not how you create sustainable and reliable software. Which is OK, sometimes you just need to crap something out to do a quick job, but that doesn’t really feel like what your parent comment is talking about.

120. devmor ◴[] No.44387203{4}[source]
Both the futures market and resource planning are based on evidential standards (usually). When you make those decisions without any reasoning, you are gambling, and might as well go to the casino.

But notably, FOSS development is neither a corporation nor stock trading. It is focused on longevity and maintainability.

121. ethbr1 ◴[] No.44387220[source]
Anyone whose code is in a used model's training set.*

This is about future existential tail risk, not current risk.

* Depending on future court decisions in different jurisdictions

replies(1): >>44387393 #
122. fireflash38 ◴[] No.44387258{8}[source]
Schroedinger's AI: it's everywhere, but you can't point to it because it's apparently indistinguishable from humans, except for the shitty AI, which is just shitty AI.

It's a thought-terminating cliché.

123. latexr ◴[] No.44387261{7}[source]
> Assuming at least that you are an otherwise competent developer and that you carefully review all code before you commit it.

That is a big assumption. If everyone were doing that, this wouldn’t be a major issue. But as the curl developer has noted, people are using LLMs without thinking and wasting everyone’s time and resources.

https://www.linkedin.com/posts/danielstenberg_hackerone-curl...

I can attest to that. Just the other day I got a bug report, clearly written with the assistance of an LLM, for software which has been stable and used in several places for years. This person, when faced with an error on their first try, instead of pondering “what am I doing wrong?”, opened a bug report with a “fix”. Of course, they were using the software wrong. They did not follow the very short and simple instructions and essentially invented steps (probably suggested by an LLM) that caused the problem.

Waste of time for everyone involved, and one more notch on the road to causing burnout. Some of the worst kind of users are those who think “bug” means “anything which doesn’t immediately behave the way I thought it would”. LLMs empower them, to the detriment of everyone else.

replies(1): >>44389135 #
124. latexr ◴[] No.44387302{12}[source]
And not that long ago, the majority of the population believed the Earth is flat, and that cigarettes are good for your health. Radioactive toys were being sold to children.

Wide belief does not equal truth.

125. olalonde ◴[] No.44387393{3}[source]
Again, seems so implausible that it's not worth worrying about.
replies(2): >>44387785 #>>44388305 #
126. irthomasthomas ◴[] No.44387603{6}[source]
My llm-consortium project was vibe coded. Some notes on how I did that in the announcement tweet if you click through https://x.com/karpathy/status/1870692546969735361
127. bredren ◴[] No.44387628{8}[source]
The parent claim was that devs don’t open-source their personal AI tools. FileKitty is mine and it is MIT-licensed on GitHub.

It began as an experiment in AI-assisted app design and a cross-platform “cat these files” utility.

Since then it has picked up:

- Snapshot history (and change flags) for any file selection

- A rendered folder tree that LLMs can digest, with per-prompt ignore filters

- String-based ignore rules for both tree and file output, so prompts stay surgical

My recent focus is making that generated context modular, so additional inputs (logs, design docs, architecture notes) can plug in cleanly. Apple’s new on-device foundation models could pair nicely with that.

The bigger point: most AI tooling hides the exact nature of context. FileKitty puts that step in the open and keeps the programmer in the loop.

I continue to believe LLMs can solve big problems with appropriate context, and that intentionality in context prep is an important step in evaluating ideas and implementation suggestions found in LLM outputs.

There's a Homebrew build available and I'd be happy to take contributions: https://github.com/banagale/FileKitty

128. ethbr1 ◴[] No.44387785{4}[source]
Were you around for SCO? https://en.m.wikipedia.org/wiki/Timeline_of_SCO%E2%80%93Linu...

IP disputes aren't trivial, especially for shoestring-funded OSS.

129. Dylan16807 ◴[] No.44387791{10}[source]
You said you wouldn't have made it if it took longer, isn't that a barely?

But either way it's not an example of what they wanted.

130. kylereeve ◴[] No.44387925[source]
> #2 Software projects that somehow are 100% human developed will not be competitive with AI assisted or written projects. The only room for debate on that is an apocalypse level scenario where humans fail to continue producing semiconductors or electricity.

??

"AI" code generators are still mostly overhyped nonsense that generate incorrect code all the time.

131. Pet_Ant ◴[] No.44388023{4}[source]
> Do you think that the cost of tokens will remain low enough once these companies for now operating at loss have to be profitable

New techniques are coming, new hardware processes are being developed, and the incremental unit cost is low. Once they fill up the labs, they'll start selling to consumers till the price becomes the cost of a bucket of sand and the cost to power a light-bulb.

132. ziml77 ◴[] No.44388207{7}[source]
I can't imagine they ever even looked at what they checked in, because it includes code that the LLM was using to investigate other code.
133. tomjen3 ◴[] No.44388220{5}[source]
You are just not listening in the right places.

fly.pieter.com made its creator a fortune while he live vibe-coded it on Twitter - money made by making a modern multiplayer game.

Or Michael Luo, who got a legal notice after making a much cheaper app that did the same as DocuSign: https://analyticsindiamag.com/ai-news-updates/vibe-coder-get...

There are others, but if you have found a gold mine, why would you inform the world?

134. consp ◴[] No.44388305{4}[source]
It is implausible until it isn't. qemu is taking a very cheap and easy step by banning it outright, covering their ass just in case. The threat is low-plausibility but high-impact, and thus a valid one to consider.
replies(1): >>44389882 #
135. fragmede ◴[] No.44388498{7}[source]
> Complex real world software is different from pomodoro timers and TODO lists.

A simplistic Pomodoro timer with no features, sure. But a full-blown modern todo app that syncs to configurable backend(s), has a website, mobile apps, an Electron app, a CLI/TUI, webhooks, and other integrations? Add a login system, let users assign todos to each other, add dependencies between todos and visualizations, and it starts looking like JIRA - which is totally complex real-world software.

The weakness of LLMs is that they can't do anything that's not in their training data. But they have so much training data that it's like a box of Lego bricks: you can only build models from the bricks you have, but if you had a brick copier and one copy of every brick type on the Internet, the inability to invent new pieces from scratch would be a limitation - yet the sheer number of brick types covers a lot of area. Most (but not all) software is some flavor of CRUD app, and if LLMs could only ever write CRUD apps, that would still be tremendous value.

136. fragmede ◴[] No.44388568{5}[source]
We'll have to see how it pans out for Cloudflare. They published an OAuth library along with all the prompts used to create it.

https://github.com/cloudflare/workers-oauth-provider/

137. luqtas ◴[] No.44388672{5}[source]
go inform yourself [0]

0: https://iopscience.iop.org/article/10.1088/1748-9326/aa7541/...

138. alganet ◴[] No.44388915{12}[source]
Billions of people _say_ they believe in god. It's very different.

--

When you analyze church attendance, the number drops to roughly 50% of the population instead of 85%:

https://en.wikipedia.org/wiki/Church_attendance#Demographics

If you start to investigate many aspects of religious belief, like how many Christians read the Bible, the numbers drop drastically, to less than 15%:

https://www.statista.com/statistics/299433/bible-readership-...

This demonstrates that we cannot rely on self-reporting to understand religious belief. In practice, most people are closer to atheists than believers.

replies(1): >>44389320 #
139. furyofantares ◴[] No.44389059[source]
Much of that may be true in the (near) future but it also makes sense for people to make decisions that apply right now, and update as the future comes along.
140. fc417fc802 ◴[] No.44389135{8}[source]
Sure, I won't disagree that those people also exist, but I don't think that's who the claim is being made about. Pointing out that subpar developers exist doesn't refute that good ones exist.
141. fc417fc802 ◴[] No.44389320{13}[source]
That's rather silly. Neither of those things is a requirement for belief.
replies(1): >>44389506 #
142. fc417fc802 ◴[] No.44389395{13}[source]
Obviously it's the basis for a religion. We're to have faith in the ability of LLMs. To ask for evidence of that is to question the divine. You can ask a model itself for the relevant tenets pertaining to any given situation.
143. alganet ◴[] No.44389506{14}[source]
You can believe all you want, but practice is what actually matters.

It's the same thing with AI.

144. a57721 ◴[] No.44389582{8}[source]
My apologies, I didn't want to sound patronizing, and I wasn't making assumptions about your work and experience based on your examples. I am happy that generative AI allows you to make such apps. However, they are very similar to the demos that are always presented as showcases.
145. olalonde ◴[] No.44389882{5}[source]
I disagree. Open source projects routinely deal with far greater risks, like employees contributing open source code on company time without explicit authorization. Yet they generally accept code from anyone without much verification (some have a contributor agreement, but it's based on trust; there's no actual verification). I stand by my 2022 prediction[0]: no one will get sued for using LLM-generated code.

[0] https://news.ycombinator.com/item?id=31849027

146. ben_w ◴[] No.44390587{10}[source]
> The LLM work sounds dumb, and the suggestion that it made "a qr code generator" is disingenuous. The LLM barely did a frontend for it. Barely.

Yes, and?

The goal wasn't "write me a QR library" it was "here's my pain point, solve it".

> It sounds like the author paid for `npm install`, and thinks he's on top of things and being smart.

I can put this another way if you prefer:

  Running `npm install qrious`: trivial.
  Knowing qrious exists and how to integrate it into a page: expensive.
https://www.snopes.com/fact-check/know-where-man/
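
To make the expensive half concrete - a minimal sketch of the integration, assuming qrious's documented canvas API (the element id is hypothetical, and I haven't run this exact snippet):

  // assumes qrious.min.js is already loaded on the page
  var qr = new QRious({
    element: document.getElementById('qr'), // an existing <canvas>
    value: 'https://example.com',           // text to encode
    size: 200                               // pixels
  });

A few lines of glue; the entire cost was knowing the name "qrious" in the first place.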

> > Built with Aider and either Sonnet 3.5 or Gemini 2.5 Pro

> Paid tools.

I get Sonnet 4 for free at https://claude.ai — I know version numbers are weird in this domain, but I kinda expect that means Sonnet 3.5 was free at some point? Was it not? I mean, 3.7 is also a smaller version number but listed as "pro", so IDK…

Also I get Gemini 2.5 Pro for free at https://aistudio.google.com

Out of curiosity, I just tried this myself using Gemini 2.5 Pro (for free). The result points to a CDN of qrcodejs, which I assume is this, but I don't know my JS libraries so can't confirm it isn't just two different ones with the same name: https://github.com/davidshimjs/qrcodejs

My biggest issue with this kind of thing in coding is the same as my problem with libraries in general: you're responsible for the result even if you don't read what the library (/AI) is doing. So, I expect some future equivalent of the npm left-pad incident — memetic monoculture, lots of things fail at the same time.

replies(1): >>44390745 #
147. alganet ◴[] No.44390745{11}[source]
> Knowing qrious exists and how to integrate it into a page: expensive.

qrious literally has it integrated already:

https://github.com/davidshimjs/qrcodejs/blob/master/index.ht...

I see many issues. The main one is that none of this is relevant to the qemu discussion; it's a whole other level of project.

I kind of regret asking the poor guy to show his stuff. None of these tutorial-scale projects comes even close to what an AI contribution to qemu would look like. It's pointless.

replies(2): >>44390893 #>>44391009 #
148. ben_w ◴[] No.44390893{12}[source]
The very first part of the quotation is "Knowing qrious exists".

So the fact they've already got the example is great if you do in fact already have that knowledge, and *completely useless* if you don't.

> I kind of regret asking the poor guy to show his stuff. None of these tutorial projects come even close to what an AI contribution to qemu would look like. It's pointless.

For better and worse, I suspect it's very much the kind of thing AI would contribute.

I also use it for things, and it's… well, I have seen worse code from real humans, but I don't think highly of those humans' coding skills. The AIs I've used so far are solidly at the quality level of "decent for a junior developer" - not more, not less. Ridiculously broad knowledge (which is why that quality level is even useful), but that quality level.

Use it because it's cheap or free, when that skill level is sufficient. Unless there's a legal issue, which there is for qemu, in which case don't.

149. TeMPOraL ◴[] No.44391009{12}[source]
Person in question here.

I didn't know qrious existed. Last time I checked for frontend-only QR code generators myself, pre-AI, I couldn't find anything useful. I don't do frontend work daily, and I'm not on top of the garbagefest the JS environment is.

Probably half the win of applying AI to this project was that it a) discovered qrious for me, and b) made me a working example frontend, in less time than it would have taken me to find the library myself among a sea of noise.

'ben_w is absolutely correct when he wrote:

> The goal wasn't "write me a QR library" it was "here's my pain point, solve it".

And:

  <quote>
  Running `npm install qrious`: trivial.
  Knowing qrious exists and how to integrate it into a page: expensive.
  </quote>
This is precisely what it was. I built this in between other stuff, paying half attention to it, to solve an immediate need my wife had. The only things I cared about here were that:

1. It worked and was trivial to use

2. It was 100% under my control, to guarantee that no tracking, telemetry, ads, crypto miners, or other usual web dangers are present - and to ensure they never will be.

3. It had no build step whatsoever, and minimal dependencies that could be vendored, because again, I don't do webshit for a living and don't have time for figuring out this week's flavor of building "Hello world" in Node land.

(Incidentally, I'm using Claude Code to build something bigger using a web stack, which forced me to figure out the current state of tooling, and believe me, it's not much like what I saw 6 months ago, and nothing like what I saw a year ago.)

2 and 3 basically translate to "I don't want to ever think about it again". Zero ops is my principle :).

----

> I see many issues. The main one is that none of this is relevant to the qemu discussion. It's on another whole level of project.

It was relevant to the topic discussed in this subthread. Specifically about the statement:

> But there are also local tools generated faster than you could adjust existing tools to do what you want. I'm running 3 things now just for myself that I generated from scratch instead of trying to send feature requests to existing apps I can buy.

The implicit point of larger importance is: AI contributions may not show up fully polished in OSS repos, but making it possible to spin up throwaway tools that address pain points directly provides advantages that compound.

And my projects are concrete examples of exactly that: AI-generated with a mindset of "solve this pain point", not "build a product" - and making them took less time and effort than my participation in this discussion already has.

replies(1): >>44391176 #
150. ben_w ◴[] No.44391120{4}[source]
> Do you think that the cost of tokens will remain low enough once these companies for now operating at loss have to be profitable, and it really is going to be “anyone else”? Or, would it be limited to “big tech” or select few corporations who can pay a non-trivial amount of money to them?

When considering current models, it's not in their power to prevent it:

DeepSeek demonstrated that big models can be trained on a modest budget, and inference is mostly constrained by memory access rather than compute. So if we had smartphones with a terabyte of RAM and very high bandwidth to something like a current-generation Apple NPU, things like DeepSeek R1 would run locally at (back-of-the-envelope calculation) about real-time - and drain the battery in half an hour if you used the model continuously.
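
Spelling out that back-of-the-envelope - every number below is my assumption, not a measured figure:

  // bandwidth-bound decode estimate (all figures assumed)
  // DeepSeek R1 is MoE: ~37B of its 671B params are active per token,
  // so at 8-bit weights each generated token reads roughly 37 GB.
  const activeGB = 37;       // assumed active-parameter footprint
  const bandwidthGBs = 400;  // assumed "very high" mobile memory bandwidth
  console.log((bandwidthGBs / activeGB).toFixed(1)); // ~10.8 tokens/sec

~10 tokens/sec is roughly human reading speed, which is what I mean by "about real-time".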

But current models are not good enough, so the real question is: "who will hold what power when such models hypothetically are created?", and I have absolutely no idea.

> Do you think someone would create and make public (and gather so much contributor effort) something on the scale Linux, if they knew that it would be open to be scraped by an intermediary who can sell it at whatever price they choose to set to companies that then are free to call it their own and repackage commercially without contributing back, providing their source or crediting the original authors in any way?

Consider it differently: how much would it cost to use an LLM to reproduce all of Linux?

I previously rough-estimated that at $230/megatoken of (useful final product) output, an AI would be energy-competitive vs. humans consuming calories to live: https://news.ycombinator.com/item?id=44304186

As I don't have specifics, I need to Fermi-estimate this:

I'm not actually sure how big any OS (with or without apps) is, but I hear a lot of numbers in the range of 10-50 million lines. Let's say 50 Mloc.

I don't know the tokens per line, so I'm going to guess 10.

50e6 lines * 10 tokens/line * $230/(1e6 tokens) = $115,000
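
As a throwaway sanity check of that multiplication (the inputs are just the guesses above):

  // Fermi estimate: regenerating an OS-sized codebase at $230/megatoken
  const lines = 50e6;           // ~50 Mloc, guessed above
  const tokensPerLine = 10;     // guessed above
  const usdPerMegatoken = 230;  // break-even rate from my earlier comment
  console.log(lines * tokensPerLine / 1e6 * usdPerMegatoken); // 115000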

There's no fundamental reason for $230/megatoken beyond it being the point where the AI is economically preferable to feeding a human who works for free and just needs to be kept from starving - even though metabolising electricity directly would be much cheaper than food. On the one hand, $230 is on the very expensive end of current models; on the second hand, see the previous point about running DeepSeek R1 on a phone processor with RAM and bandwidth to match; on the third hand*, see the other previous point that current models just aren't good enough to bother.

So it's currently not available at any price; but when the quality is good, even charging a rate that's expensive today makes all humans unemployable.

* Insert your own joke about about off-by-one-errors

151. alganet ◴[] No.44391176{13}[source]
Cool, makes sense.

Since you're here, I have another question relevant to the thread: do you pay for AI tools or are you using them for free?

replies(1): >>44391441 #
152. TeMPOraL ◴[] No.44391295{9}[source]
To me it's self-evident, but it's probably one causal step removed from what you'd like to see. I can't point to specific finished or released projects that were substantially accelerated by use of GenAI[0]. But I can point out that nearly everyone I talked with in the last year who does any kind of white-collar job is either afraid of LLMs, actively using LLMs at work and finding them very useful, or both.

It's not possible for this level of impact at the bottom to produce no net change near the top, so I propose that the effects are delayed and not yet apparent. LLMs are still a new thing on business timelines.

TL;DR: just wait a bit more.

One thing I can hint at, but can't go into details about, is that I personally know of at least one enterprise-grade project whose roadmap and scoping - and therefore funding - are critically dependent on AI speeding up a significant amount of development and devops tasks by at least 2-3x; that aspect is understood by developers, managers, customers, and investors alike, and not disputed.

So, again: just wait a little longer.

--

[0] - Except maybe for Aider, whose author always posts how much of its own code Aider wrote in a given release; it's usually way above 50%.

replies(1): >>44391640 #
153. TeMPOraL ◴[] No.44391441{14}[source]
TL;DR: I pay, I always try to use SOTA models if I can.

I pay for them; until last week, this was almost entirely[0] pay-as-you-go use of API keys via TypingMind (for chat) and Aider (for coding). The QR code project I linked was made by Aider. Total cost was around $1 IIRC.

API options were, until recently, very cheap. Most of my use was around $2 to $5 per project, sometimes under $2. I mostly worked with GPT-4, then Sonnet 3.5, briefly with Deepseek-R1; by the time I got around to testing Claude Sonnet 3.7, Google released Gemini 2.5 Pro, which was substantially cheaper, so I stuck to the latter.

Last week I got myself the Max plan for Anthropic (first 5x, then the 20x one) specifically for Claude Code, because using pay-as-you-go pricing with top models in the new "agentic" way got stupidly expensive; $100 or $200 per month may sound like a lot, but less so when the API route would have you burning that much in a day or two.

--

[0] - I have the $20/month "Plus" subscription to ChatGPT, which I keep because of gpt-4o image generation and o3 being excellent as my default model for random questions/problems, many of them not even coding-related. I could access o3 via API, but this gets stupidly expensive for casual use; subscription is a better deal now.

replies(1): >>44391614 #
154. TeMPOraL ◴[] No.44391512{7}[source]
Did you notice that what companies say to investors and what they say to the public are usually entirely different things? When they get mixed up - especially when investor-bound information reaches the general public - it's usually a bad day for the company.
155. ben_w ◴[] No.44391614{15}[source]
> TL;DR: I pay, I always try to use SOTA models if I can.

Interesting; I'm finding myself doing the opposite — I have API access to at least OpenAI, but all the SOTA stuff becomes free so fast that I don't expect to lose much by waiting.

My OpenAI API credit expired mostly unused.

156. ben_w ◴[] No.44391640{10}[source]
> One thing I can hint at, but can't go into details, is that I personally know of at least one enterprise-grade project whose roadmap and scoping - and therefore, funding - is critically dependent on AI speeding up significant amount of development and devops tasks by at least 2-3x; that aspect is understood by both developers, managers, customers and investors, and not disputed.

Mm. I can now see why, in your other comment, you want to keep up with the SOTA.

replies(1): >>44391857 #
157. stronglikedan ◴[] No.44391757[source]
To me, AI doesn't generate code by itself, so there's no difference between the code it outputs and code written by the human who prompted it. Likewise, the humans who prompt it are solely responsible for making sure it is correct, and solely to blame for any negative outcomes of its use, just as if they had written it themselves.
158. TeMPOraL ◴[] No.44391857{11}[source]
It's actually unrelated. I try to keep up with the SOTA because if I'm not using the current-best model, then each time I have a hard time with it or get poor results, I keep wondering if I'm just wasting my time fighting with something a stronger model would do without problems. It's a personal thing; I've been like this ever since I got API access to GPT-4.

My use of LLMs isn't all that big, and I don't have any special early access or anything. It's just that the tokens are so cheap that, for casual personal and professional use, the pricing difference didn't matter. Switching to a stronger model meant that my average monthly bill went from $2 to $10 or something. These amounts were immaterial.

Usage patterns and pricing change, though, and recently this made some SOTA models (notably o3, gpt-4.5, and the most recent Opus model) too expensive for my use.

As for the project I referred to, let's put it this way: the reference point is what was SOTA ~2-3 months ago (Sonnet 3.7, Gemini 2.5 Pro). And the assumptions aren't just wishful thinking - they're based on actual experience with using these models (+ some tools) to speed up a specific kind of work.

159. TeMPOraL ◴[] No.44391914{11}[source]
You don't debug AI-generated code - you throw the problematic chunk away and have AI write it again, and if that doesn't help, you repeat the process, possibly with larger chunks.

Okay, not in every case, but in many, and that's where we're headed. The reason is economics - i.e. the same reason approximately no one in the West repairs their clothes or appliances; they just throw the damaged thing away and buy a new one. Human labor is expensive, automated production is cheap - even more so in digital space.

160. TeMPOraL ◴[] No.44391966{7}[source]
I think in the context of this discussion you might be confused about what the term "better" refers to.

> And because from the outside everything looks worse than ever. Worse quality, no more support, established companies going crazy to cut costs. AI slop is replacing thoughtful content across the web. Engineering morale is probably at an all time low for my 20 years watching this industry.

That is true and present across the board. But consider: all of that is what "better" means to companies, and most of it is caused by actions that employers call success and reward employees for.

Our industry, in particular, is a stellar example - half of the things we make are making things worse; of the things that seem to make things better, half are actually making things worse too, but it's not visible because of accounting trickery (e.g. cutting specialized roles is legible to beancounters; the workload being diffused and dragging everyone else's productivity down is not).