Most active commenters
  • exasperaited(5)
  • johnnyanmac(5)
  • mccoyb(3)
  • danielbln(3)


295 points todsacerdoti | 20 comments
mccoyb ◴[] No.45948052[source]
I don’t think open source is going anywhere. It’s poised to get significantly stronger — as the devs who care about it learn how to leverage AI tools to make things that corporate greasemonkeys never had the inspiration to. Low quality code spammers are just marketing themselves for jobs where they can be themselves: soulless and devoid of creative impulse.

That’s the thing: open source is the only place where the true value (or lack of value) of these tools can be established — the only place where one can test mettle against metal in a completely unconstrained way.

Did you ever want to build a compiler (or an equally complex artifact) but got stuck on various details? Try now. It’s going to stand up something half-baked, and as you refine it, you will learn those details — but you’ll also learn that you can productively use AI to reach past the limits of your knowledge, to make what’s beyond a little more palatable.

All the things people say about AI are true to some degree: my take is that some people are rolling the slots to win a CRUD app, and others are trying to use it to do things that they could only imagine before — and open source tends to be the home of the latter group.

replies(2): >>45948099 #>>45948167 #
1. exasperaited ◴[] No.45948167[source]
> It’s poised to get significantly stronger

It's really not. Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue about what is involved in working on long-running, difficult projects, or how offensive it is to just slather some slop on a bug report and demand it be given scrutiny.

Even at the 10,000-foot view it has wasted people's time, because they have to sit down and have a policy discussion about whether to accept AI submissions, which involves people reheating a lot of anecdotal claims about productivity.

Having learned a bit about how to write compilers, I know enough to guarantee you that an AI cannot help you solve the difficult problems that compiler-building tools and existing libraries cannot solve.

It's the same as it is with any topic: the tools exist and they could be improved, but instead we have people shoehorning AI bollocks into everything.

replies(5): >>45948249 #>>45948362 #>>45948421 #>>45952293 #>>45952918 #
2. micromacrofoot ◴[] No.45948249[source]
yeah we are getting lots of "I don't know how to do this and AI gave me this code that doesn't work, can you fix it" or "AI said it can do this" and the feature doesn't exist... some people will even argue and say "but AI said it doesn't take long, why won't you add it"
replies(1): >>45948732 #
3. mccoyb ◴[] No.45948362[source]
Sounds like a lot of FUD to me — if major projects balk at the emergence of new classes of tools, perhaps the management strategy wasn’t resilient in the first place?

Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.

In such a setting, you’re working within a trusted party — and for a major project, that likely means extremely competent maintainers and contributors.

I don’t think these people will have any difficulty adapting to the usage of these tools …

replies(2): >>45948695 #>>45952439 #
4. doug_durham ◴[] No.45948421[source]
This isn't an AI issue. It is a care issue. People shouldn't submit PRs to a project when they don't care enough to understand the project they are submitting to or the code they are submitting. This has always been a problem; there is nothing new. What is new is that more people can get to the point where they can submit, regardless of their care or understanding. A lot of people are trying to gild their resume by saying they contributed to a project. Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.
replies(2): >>45948689 #>>45952415 #
5. exasperaited ◴[] No.45948689[source]
This is an AI issue because people, including the developers of AI tools, don't care enough.

The Tragedy Of The Commons is always about this: people want what they want, and they do not care to prevent the tragedy, if they even recognise it.

> Project owners should instead be working on ways to filter out careless code more efficiently.

Great. So the industry creates a burden and then forces people to deal with it — I guess it's an opportunity to sell some AI detection tools.

replies(1): >>45952304 #
6. exasperaited ◴[] No.45948695[source]
> Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.

It is a waste of time for large-scale volunteer-led projects who now have to deal with tons of shit — when the very topic is "how do we fend off this stuff that we do not want, because our project relies on much deeper knowledge than these submissions ever demonstrate?"

7. exasperaited ◴[] No.45948732[source]
It weaponises incompetence, carelessness and arrogance at every turn.

AI, to me, is a character test: I'm regularly fascinated by finding out who fails it.

For example, in my personal life I have been treated to AI-generated comms from someone that I would never have expected it from. They don't know I know, and they don't know that I think less of them, and I always will.

replies(2): >>45952316 #>>45955497 #
8. theshrike79 ◴[] No.45952293[source]
> Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue

I'm kinda hoping that Github will provide an Anubis-equivalent for issue submissions by default.
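Anubis-style gating works by making the submitter's browser spend CPU before the server accepts the request: solve a proof-of-work challenge that is expensive to produce but trivial to verify. A minimal sketch of that idea in Python — the challenge format and difficulty are illustrative, not Anubis's actual protocol:

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int) -> int:
    """Find a nonce such that sha256(challenge:nonce) starts with
    `difficulty` zero hex digits. Costly to produce, cheap to check."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server-side check: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# At difficulty 4 the client does ~65k hashes on average (16^4);
# the server verifies with one.
nonce = solve("issue-42", 4)
assert verify("issue-42", nonce, 4)
```

The asymmetry is the point: a human filing one issue never notices the delay, while a bot spraying thousands of auto-generated submissions pays for every one.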

9. danielbln ◴[] No.45952304{3}[source]
We don't need an AI detector, we need a "human vetted" detector.
replies(2): >>45952421 #>>45954468 #
10. danielbln ◴[] No.45952316{3}[source]
I will never judge someone for using AI, but I will absolutely judge anyone for lobbing slop at me. I define slop as low-effort, non-vetted, first-try output.
11. johnnyanmac ◴[] No.45952415[source]
That's why I'm not super optimistic. Even pre-AI and tech slump there were talks about how hard it may be to replace the old guard maintaining these open source initiatives. Now...

>Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.

When care leaves, the entire commons starts to fall apart. New talent doesn't come in. Old talent won't put up with it and retires out of the scene. They already have so much work to do; piling on non-development work to build better spam filters may very well be the final straw.

Even when the careless leave, it won't bring back the talent lost. Redirecting the blame sure won't do that.

12. johnnyanmac ◴[] No.45952421{4}[source]
Who's paying the human to vet it? Or will we have volunteers dedicated to being AI detectors instead of developers?
replies(1): >>45952496 #
13. johnnyanmac ◴[] No.45952439[source]
> if major projects balk at the emergence of new classes of tools, perhaps the management strategy wasn’t resilient in the first place?

It's not the tools, it's the quality. No FOSS dev would care where the code came from if it followed the contributor guidelines and coding style.

This is why it's a spam issue: a flood of low quality submissions only eats up those developers' time and slows the entire process down.

>that likely means extremely competent maintainers and contributors.

Your assumption falls apart here, sadly. Dunning-Kruger hits hard here for new contributors powered by LLMs and the maintainers suffer the brunt of the hit.

replies(1): >>45953827 #
14. danielbln ◴[] No.45952496{5}[source]
I don't have those answers. My point was that trying to outright ban any AI is futile and probably counterproductive overall, and that we need to find ways to ensure a human hasn't just submitted slop. I don't have an answer as to how.
replies(1): >>45959118 #
15. windward ◴[] No.45952918[source]
>Every project of any significance is now fending off AI submissions

Not anything with a cathedral model.

'open source' is too ambiguous to be useful.

16. mccoyb ◴[] No.45953827{3}[source]
Why not just disallow PRs from non-vetted contributors?

Why not just disallow issues without a vetting process?

Many of these things could be explored -- you're right: it's a spam issue. But we have solutions to spam issues ... filters. LLMs have shown that "praying for the best" with permissive repository settings is not sufficient. We can and will improve our filters, no?
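A first pass at such a filter doesn't need AI detection at all — just an allowlist plus cheap heuristics to decide what earns maintainer attention. A hypothetical sketch (the field names and thresholds are made up for illustration, not any forge's real API):

```python
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    account_age_days: int
    prior_merged_prs: int
    body: str

VETTED = {"alice", "bob"}  # maintainer-approved contributors (example names)

def triage(sub: Submission, vetted: set = VETTED) -> str:
    """Route a PR/issue: vetted or proven authors go straight to review;
    unknown accounts must clear cheap heuristics first."""
    if sub.author in vetted or sub.prior_merged_prs >= 3:
        return "review"
    if sub.account_age_days < 7 or len(sub.body.strip()) < 40:
        return "hold"          # brand-new account or near-empty description
    return "manual-vetting"    # a human decides before any code review

assert triage(Submission("alice", 1, 0, "")) == "review"
assert triage(Submission("rando", 2, 0, "plz merge")) == "hold"
```

The design choice mirrors classic spam filtering: the expensive resource (maintainer review) is spent only after cheap, mechanical checks pass, and trust accumulates per contributor rather than per submission.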

replies(1): >>45960842 #
17. exasperaited ◴[] No.45954468{4}[source]
People arguing against my point here seem to be doing a good job of validating my point.
18. kmijyiyxfbklao ◴[] No.45955497{3}[source]
>They don't know I know, and they don't know that I think less of them, and I always will.

lol, behavior like this is way more destructive to personal relationships than AI ever will be.

19. johnnyanmac ◴[] No.45959118{6}[source]
> trying to outright ban any AI is futile and probably overall counter productive

okay, you can keep thinking that. I'll just reject anything that has a whiff of AI and lacks care. No point campaigning in this admin to regulate anything, so that's off the table for 1-3 years.

20. johnnyanmac ◴[] No.45960842{4}[source]
That's certainly going to be the eventual outcome at this rate, yes. Close off the FOSS and go underground. Contributing will now involve negotiating the politics and vetting oneself, rather than standing on the quality of the contributions. Restrictive, and it feels like it hurts the spirit of FOSS, for an industry that can already be considered a bit gatekeep-y.

You can definitely argue we hit that point a long time ago, but this will exacerbate it.