549 points thecr0w | 15 comments
thuttinger ◴[] No.46184466[source]
Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things. There are a lot of problems that are easy to get right for a junior web dev but impossible for an LLM. On the other hand, I was able to write a C program that added gamma color profile support to linux compositors that don't support it (in my case Hyprland) within a few minutes! A - for me - seemingly hard task, which would have taken me at least a day or more if I didn't let Claude write the code. With one prompt Claude generated C code that compiled on first try that:

- Read an .icc file from disk

- parsed the file and extracted the VCGT (video card gamma table)

- wrote the VCGT to the video card for a specified display via amdgpu driver APIs

The only thing I had to fix was the ICC parsing, where it would parse header strings in the wrong byte-order (they are big-endian).

replies(3): >>46184840 #>>46185379 #>>46185476 #
jacquesm ◴[] No.46185379[source]
Claude didn't write that code. Someone else did and Claude took that code without credit to the original author(s), adapted it to your use case and then presented it as its own creation to you and you accepted this. If a human did this we probably would have a word for them.
replies(16): >>46185404 #>>46185408 #>>46185442 #>>46185473 #>>46185478 #>>46185791 #>>46185885 #>>46185911 #>>46186086 #>>46186326 #>>46186420 #>>46186759 #>>46187004 #>>46187058 #>>46187235 #>>46188771 #
mlinsey ◴[] No.46185791[source]
Certainly if a human wrote code that solved this problem, and a second human copied and tweaked it slightly for their use case, we would have a word for them.

Would we use the same word if two different humans wrote code that solved two different problems, but one part of each problem was somewhat analogous to a different aspect of a third human's problem, and the third human took inspiration from those parts of both solutions to create code that solved a third problem?

What if it were ten different humans writing ten different-but-related pieces of code, and an eleventh human piecing them together? What if it were 1,000 different humans?

I think "plagiarism", "inspiration", and just "learning from" fall on some continuous spectrum. There are clear differences when you zoom out, but they are in degree, and it's hard to set a hard boundary. The key is just to make sure we have laws and norms that provide sufficient incentive for new ideas to continue to be created.

replies(6): >>46186125 #>>46186199 #>>46187063 #>>46188272 #>>46189797 #>>46194087 #
1. whatshisface ◴[] No.46186125[source]
The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."
replies(2): >>46186302 #>>46187094 #
2. CognitiveLens ◴[] No.46186302[source]
but as mlinsey suggests, what if it's influenced in small, indirect ways by 1000 different people, kind of like the way every 'original' idea from trained professionals is? There's a spectrum, and it's inaccurate to claim that Claude's responses are comparable to adapting one individual's work for another use case - that's not how LLMs operate on open-ended tasks, although they can be instructed to do that and produce reasonable-looking output.

Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations they've had that have influenced the particular code solution. LLMs are trained on far more sources that influence their code suggestions, but it seems like we actually want a higher standard of attribution because they (arguably) are incapable of original thought.

replies(2): >>46186363 #>>46186951 #
3. sarchertech ◴[] No.46186363[source]
If the problem you ask it to solve has only one or a few examples, or if there are many cases of people copy pasting the solution, LLMs can and will produce code that would be called plagiarism if a human did it.
replies(1): >>46186668 #
4. saalweachter ◴[] No.46186951[source]
It's not uncommon, in a well-written code base, to see documentation on different functions or algorithms noting where they came from.

This isn't just giving credit; it's valuable documentation.

If you're later looking at this function and find a bug or want to modify it, the original source might not have the bug, might have already fixed it, or might have additional functionality that is useful when you copy it to a third location that wasn't necessary in the first copy.

replies(1): >>46189813 #
5. ineedasername ◴[] No.46187094[source]
Do you have a source for that being the key difference? Where did you learn your words? I don't see the names of your teachers cited here. The English language has existed a while; why aren't you giving a citation every time you use a word that already exists in a lexicon somewhere? We have a name for people who don't coin their own words for everything and rip off the words that others painstakingly evolved over millennia of history. Find your own graphemes.
replies(1): >>46187201 #
6. latexr ◴[] No.46187201[source]
What a profoundly bad faith argument. We all understand that singular words are public domain, they belong to everyone. Yet when you arrange them in a specific pattern, of which there are infinite possibilities, you create something unique. When someone copies that arrangement wholesale and claims they were the first, that’s what we refer to as plagiarism.

https://www.youtube.com/watch?v=K9huNI5sBd8

replies(3): >>46187434 #>>46188381 #>>46189676 #
7. jacquesm ◴[] No.46187434{3}[source]
This particular user does that all the time. It's really tiresome.
replies(1): >>46188474 #
8. ineedasername ◴[] No.46188381{3}[source]
It's not a bad faith argument. It's an attempt to shake thinking that is profoundly stuck by taking that thinking to an absurd extreme. Until that's done, quite a few people aren't able to see past the assumptions they don't know they're making. And by quite a few people I mean everyone, at different times. A strong appreciation for the absurd will keep a person's thinking much sharper.
replies(1): >>46191241 #
9. ineedasername ◴[] No.46188474{4}[source]
It's tiresome to see unexamined assumptions and self-contradictions tossed out by a community that can and often does do much better. Some light absurdism often goes further, and makes clear that I'm not just trying to set up a strawman, since I've already gone and made a parody of my own point.
10. tscherno ◴[] No.46189676{3}[source]
It is possible that the concept of intellectual property will be classified as a mistake of our era by the history teachers of future generations.
replies(1): >>46190024 #
11. jacquesm ◴[] No.46189813{3}[source]
This is why I'm still, even after decades of seeing it fail in the marketplace, a fan of literate programming.
12. latexr ◴[] No.46190024{4}[source]
Intellectual property is a legal concept; plagiarism is ethical. We’re discussing the latter.
13. stOneskull ◴[] No.46191241{4}[source]
>> The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."

> [i want to] shake thinking that is profoundly stuck [because they] aren't able to see past the assumptions they don't know they're making

what is profoundly stuck, and what are the assumptions?

replies(1): >>46192275 #
14. macinjosh ◴[] No.46192275{5}[source]
That your brain training on all the inputs it sees and creating output is fundamentally more legitimate than a computer doing the same thing.
replies(1): >>46194468 #
15. Arelius ◴[] No.46194468{6}[source]
Copyright isn't some axiom, but to quote wikipedia: "Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized."

It's a tool to incentivize human creative expression.

Thus it's entirely sensible to consider and treat the output from computers and humans differently.

Especially when you consider the large differences between computers and humans, such as how trivial it is to create perfect duplicates of a computer's training.