549 points by thecr0w | 2 comments

thuttinger
Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things. There are a lot of problems that are easy for a junior web dev to get right but impossible for an LLM. On the other hand, I was able to write a C program that added gamma color profile support to Linux compositors that don't support it (in my case Hyprland) within a few minutes! A seemingly hard task, for me at least, that would have taken me a day or more if I hadn't let Claude write the code. With one prompt, Claude generated C code that compiled on the first try and did the following:

- read an .icc file from disk

- parsed the file and extracted the VCGT (video card gamma table)

- wrote the VCGT to the video card for the specified display via amdgpu driver APIs

The only thing I had to fix was the ICC parsing, where it read the header fields in the wrong byte order (ICC files are big-endian). A rough sketch of that parsing is below.
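
(For the curious, here is a minimal C sketch of that tag lookup. To be clear, this is illustrative, not the code Claude generated: it only locates the vcgt tag, doesn't parse the curve data or talk to amdgpu, and the fixed 1 MiB buffer is an arbitrary choice. The layout it relies on is standard ICC: a 128-byte header, a 4-byte tag count, then 12-byte tag entries, with every multi-byte integer stored big-endian.)

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* ICC files store all multi-byte integers big-endian. */
    static uint32_t be32(const uint8_t *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s profile.icc\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        static uint8_t buf[1 << 20];   /* profiles are small; 1 MiB is plenty */
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        if (n < 132) { fprintf(stderr, "not an ICC profile\n"); return 1; }

        /* 128-byte header, then a 4-byte tag count, then 12-byte tag
           entries: 4-byte signature, 4-byte offset (from the start of
           the file), 4-byte size. */
        uint32_t count = be32(buf + 128);
        for (uint32_t i = 0; i < count && 132 + 12 * (i + 1) <= n; i++) {
            const uint8_t *e = buf + 132 + 12 * i;
            if (memcmp(e, "vcgt", 4) == 0) {
                printf("vcgt tag: offset %u, size %u\n", be32(e + 4), be32(e + 8));
                return 0;
            }
        }
        fprintf(stderr, "no vcgt tag\n");
        return 1;
    }

Compile it with cc and point it at any display profile; a real implementation would go on to read the gamma curves at the reported offset and hand them to the driver. The byte-order bug above bites exactly here: read those header fields with a plain memcpy into a uint32_t on a little-endian machine and every offset and size comes out scrambled.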

jacquesm
Claude didn't write that code. Someone else did; Claude took that code without credit to the original author(s), adapted it to your use case, and then presented it to you as its own creation, and you accepted this. If a human did this, we would probably have a word for them.

bsaul
That's an interesting hypothesis: that LLMs are fundamentally unable to produce original code.

Do you have papers to back this up? That was also my reaction when I saw some remarkably accurate comments on a vibe-coded piece of code, but I couldn't prove it, and thinking about it now I think my intuition was wrong (i.e., LLMs do produce original, complex code).

jacquesm
We can approach that question intuitively: if human input is not what drives the output, then it should be sufficient to present the model with a fraction of its current inputs, say everything up to 1970, and have it generate everything from 1970 onwards as output.

If that does not work, then the moment you introduce AI you cap its capabilities unless humans continue to create original works to feed it. The conclusion, to me at least, is that these pieces of software regurgitate their inputs and are effectively whitewashing plagiarism, or, alternatively, that their ability to generate new content is capped at some limit relative to the inputs.

andsoitis
I like your test. Should we also apply it to specific humans?

We all stand on the shoulders of giants and learn by looking at others’ solutions.

jacquesm
That's true. But if we accept your implied rebuttal, then current-level AI should be able to learn from current AI as well as it learns from humans, just as humans learn from other humans. So far that does not seem to be the case; in fact, AI companies do everything they can to avoid eating their own tail. They would happily eat their own tail if it were worth it.

To me that's proof positive that they know their output is mangled input: they need that human originality, otherwise they will sooner or later drown in nonsense and noise. It's essentially a very complex game of Chinese whispers.

handoflixue
Equally, of course, all six-year-olds need to be trained by other six-year-olds; we must stop this crutch of using adult teachers.

subscribed
Beautiful, thank you.